2023-07-12 19:16:58,956 DEBUG [main] hbase.HBaseTestingUtility(342): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/29ea73fb-101e-b512-aded-a1ff34bb26e9 2023-07-12 19:16:58,973 INFO [main] hbase.HBaseClassTestRule(94): Test class org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1 timeout: 13 mins 2023-07-12 19:16:58,993 INFO [Time-limited test] hbase.HBaseTestingUtility(1068): Starting up minicluster with option: StartMiniClusterOption{numMasters=1, masterClass=null, numRegionServers=3, rsPorts=, rsClass=null, numDataNodes=3, dataNodeHosts=null, numZkServers=1, createRootDir=false, createWALDir=false} 2023-07-12 19:16:58,993 INFO [Time-limited test] hbase.HBaseZKTestingUtility(82): Created new mini-cluster data directory: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/29ea73fb-101e-b512-aded-a1ff34bb26e9/cluster_71dbf4f1-3f31-d11c-63a5-d05d19764ad1, deleteOnExit=true 2023-07-12 19:16:58,994 INFO [Time-limited test] hbase.HBaseTestingUtility(1082): STARTING DFS 2023-07-12 19:16:58,994 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting test.cache.data to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/29ea73fb-101e-b512-aded-a1ff34bb26e9/test.cache.data in system properties and HBase conf 2023-07-12 19:16:58,995 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting hadoop.tmp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/29ea73fb-101e-b512-aded-a1ff34bb26e9/hadoop.tmp.dir in system properties and HBase conf 2023-07-12 19:16:58,995 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting hadoop.log.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/29ea73fb-101e-b512-aded-a1ff34bb26e9/hadoop.log.dir in system properties and HBase conf 2023-07-12 19:16:58,996 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.local.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/29ea73fb-101e-b512-aded-a1ff34bb26e9/mapreduce.cluster.local.dir in system properties and HBase conf 2023-07-12 19:16:58,996 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.temp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/29ea73fb-101e-b512-aded-a1ff34bb26e9/mapreduce.cluster.temp.dir in system properties and HBase conf 2023-07-12 19:16:58,996 INFO [Time-limited test] hbase.HBaseTestingUtility(759): read short circuit is OFF 2023-07-12 19:16:59,140 WARN [Time-limited test] util.NativeCodeLoader(62): Unable to load native-hadoop library for your platform... using builtin-java classes where applicable 2023-07-12 19:16:59,611 DEBUG [Time-limited test] fs.HFileSystem(308): The file system is not a DistributedFileSystem. 
Skipping on block location reordering 2023-07-12 19:16:59,617 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.node-labels.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/29ea73fb-101e-b512-aded-a1ff34bb26e9/yarn.node-labels.fs-store.root-dir in system properties and HBase conf 2023-07-12 19:16:59,617 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.node-attribute.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/29ea73fb-101e-b512-aded-a1ff34bb26e9/yarn.node-attribute.fs-store.root-dir in system properties and HBase conf 2023-07-12 19:16:59,618 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.log-dirs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/29ea73fb-101e-b512-aded-a1ff34bb26e9/yarn.nodemanager.log-dirs in system properties and HBase conf 2023-07-12 19:16:59,618 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/29ea73fb-101e-b512-aded-a1ff34bb26e9/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-07-12 19:16:59,618 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.active-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/29ea73fb-101e-b512-aded-a1ff34bb26e9/yarn.timeline-service.entity-group-fs-store.active-dir in system properties and HBase conf 2023-07-12 19:16:59,619 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.done-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/29ea73fb-101e-b512-aded-a1ff34bb26e9/yarn.timeline-service.entity-group-fs-store.done-dir in system properties and HBase conf 2023-07-12 19:16:59,619 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/29ea73fb-101e-b512-aded-a1ff34bb26e9/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-07-12 19:16:59,619 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/29ea73fb-101e-b512-aded-a1ff34bb26e9/dfs.journalnode.edits.dir in system properties and HBase conf 2023-07-12 19:16:59,620 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting dfs.datanode.shared.file.descriptor.paths to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/29ea73fb-101e-b512-aded-a1ff34bb26e9/dfs.datanode.shared.file.descriptor.paths in system properties and HBase conf 2023-07-12 19:16:59,620 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting nfs.dump.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/29ea73fb-101e-b512-aded-a1ff34bb26e9/nfs.dump.dir in system properties and HBase conf 2023-07-12 19:16:59,621 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting java.io.tmpdir to 
/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/29ea73fb-101e-b512-aded-a1ff34bb26e9/java.io.tmpdir in system properties and HBase conf 2023-07-12 19:16:59,621 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/29ea73fb-101e-b512-aded-a1ff34bb26e9/dfs.journalnode.edits.dir in system properties and HBase conf 2023-07-12 19:16:59,621 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting dfs.provided.aliasmap.inmemory.leveldb.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/29ea73fb-101e-b512-aded-a1ff34bb26e9/dfs.provided.aliasmap.inmemory.leveldb.dir in system properties and HBase conf 2023-07-12 19:16:59,622 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting fs.s3a.committer.staging.tmp.path to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/29ea73fb-101e-b512-aded-a1ff34bb26e9/fs.s3a.committer.staging.tmp.path in system properties and HBase conf Formatting using clusterid: testClusterID 2023-07-12 19:17:00,121 WARN [Time-limited test] conf.Configuration(1701): No unit for dfs.heartbeat.interval(3) assuming SECONDS 2023-07-12 19:17:00,124 WARN [Time-limited test] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-07-12 19:17:00,407 WARN [Time-limited test] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-namenode.properties,hadoop-metrics2.properties 2023-07-12 19:17:00,612 INFO [Time-limited test] log.Slf4jLog(67): Logging to org.slf4j.impl.Reload4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog 2023-07-12 19:17:00,633 WARN [Time-limited test] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-12 19:17:00,684 INFO [Time-limited test] log.Slf4jLog(67): jetty-6.1.26 2023-07-12 19:17:00,731 INFO [Time-limited test] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/hdfs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/29ea73fb-101e-b512-aded-a1ff34bb26e9/java.io.tmpdir/Jetty_localhost_localdomain_42793_hdfs____7lftil/webapp 2023-07-12 19:17:00,869 INFO [Time-limited test] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost.localdomain:42793 2023-07-12 19:17:00,906 WARN [Time-limited test] conf.Configuration(1701): No unit for dfs.heartbeat.interval(3) assuming SECONDS 2023-07-12 19:17:00,906 WARN [Time-limited test] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-07-12 19:17:01,383 WARN [Listener at localhost.localdomain/43233] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-12 19:17:01,488 WARN [Listener at localhost.localdomain/43233] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-12 19:17:01,519 WARN [Listener at localhost.localdomain/43233] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-12 19:17:01,529 INFO [Listener at localhost.localdomain/43233] log.Slf4jLog(67): jetty-6.1.26 2023-07-12 19:17:01,552 INFO [Listener at 
localhost.localdomain/43233] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/29ea73fb-101e-b512-aded-a1ff34bb26e9/java.io.tmpdir/Jetty_localhost_32769_datanode____.ggyuji/webapp 2023-07-12 19:17:01,671 INFO [Listener at localhost.localdomain/43233] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:32769 2023-07-12 19:17:02,148 WARN [Listener at localhost.localdomain/38847] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-12 19:17:02,218 WARN [Listener at localhost.localdomain/38847] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-12 19:17:02,220 WARN [Listener at localhost.localdomain/38847] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-12 19:17:02,222 INFO [Listener at localhost.localdomain/38847] log.Slf4jLog(67): jetty-6.1.26 2023-07-12 19:17:02,231 INFO [Listener at localhost.localdomain/38847] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/29ea73fb-101e-b512-aded-a1ff34bb26e9/java.io.tmpdir/Jetty_localhost_46537_datanode____s3a1a6/webapp 2023-07-12 19:17:02,349 INFO [Listener at localhost.localdomain/38847] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:46537 2023-07-12 19:17:02,361 WARN [Listener at localhost.localdomain/39341] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-12 19:17:02,439 WARN [Listener at localhost.localdomain/39341] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-12 19:17:02,445 WARN [Listener at localhost.localdomain/39341] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-12 19:17:02,447 INFO [Listener at localhost.localdomain/39341] log.Slf4jLog(67): jetty-6.1.26 2023-07-12 19:17:02,457 INFO [Listener at localhost.localdomain/39341] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/29ea73fb-101e-b512-aded-a1ff34bb26e9/java.io.tmpdir/Jetty_localhost_40933_datanode____.f7c5z0/webapp 2023-07-12 19:17:02,567 INFO [Listener at localhost.localdomain/39341] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:40933 2023-07-12 19:17:02,599 WARN [Listener at localhost.localdomain/34239] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-12 19:17:02,992 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xf8a3da0fdd27079d: Processing first storage report for DS-486a4b59-ff70-4f76-965f-28d3762f2281 from datanode d9d04309-0b99-409e-9f32-2d8a3498b1b1 2023-07-12 19:17:02,994 INFO [Block report processor] 
blockmanagement.BlockManager(2228): BLOCK* processReport 0xf8a3da0fdd27079d: from storage DS-486a4b59-ff70-4f76-965f-28d3762f2281 node DatanodeRegistration(127.0.0.1:42393, datanodeUuid=d9d04309-0b99-409e-9f32-2d8a3498b1b1, infoPort=42667, infoSecurePort=0, ipcPort=34239, storageInfo=lv=-57;cid=testClusterID;nsid=1629726421;c=1689189420190), blocks: 0, hasStaleStorage: true, processing time: 2 msecs, invalidatedBlocks: 0 2023-07-12 19:17:02,994 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xf8a3da0fdd27079d: Processing first storage report for DS-d41e7f8b-0dc4-475d-8c44-67f76a496834 from datanode d9d04309-0b99-409e-9f32-2d8a3498b1b1 2023-07-12 19:17:02,995 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xf8a3da0fdd27079d: from storage DS-d41e7f8b-0dc4-475d-8c44-67f76a496834 node DatanodeRegistration(127.0.0.1:42393, datanodeUuid=d9d04309-0b99-409e-9f32-2d8a3498b1b1, infoPort=42667, infoSecurePort=0, ipcPort=34239, storageInfo=lv=-57;cid=testClusterID;nsid=1629726421;c=1689189420190), blocks: 0, hasStaleStorage: false, processing time: 1 msecs, invalidatedBlocks: 0 2023-07-12 19:17:02,997 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xc18a92b91d88380e: Processing first storage report for DS-47e5ebb6-1f77-4af6-bdfc-1e0f975f2d77 from datanode 8f687cea-8c39-4290-9745-8ce95d46083e 2023-07-12 19:17:02,997 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xc18a92b91d88380e: from storage DS-47e5ebb6-1f77-4af6-bdfc-1e0f975f2d77 node DatanodeRegistration(127.0.0.1:35389, datanodeUuid=8f687cea-8c39-4290-9745-8ce95d46083e, infoPort=41535, infoSecurePort=0, ipcPort=38847, storageInfo=lv=-57;cid=testClusterID;nsid=1629726421;c=1689189420190), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-12 19:17:02,998 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x4362509d94a25e63: Processing first storage report for DS-c010aef2-0c14-458c-b1aa-5ac124bdef5d from datanode f2bdddcf-6c44-494a-8b6c-0f3758698d6d 2023-07-12 19:17:02,998 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x4362509d94a25e63: from storage DS-c010aef2-0c14-458c-b1aa-5ac124bdef5d node DatanodeRegistration(127.0.0.1:36015, datanodeUuid=f2bdddcf-6c44-494a-8b6c-0f3758698d6d, infoPort=44911, infoSecurePort=0, ipcPort=39341, storageInfo=lv=-57;cid=testClusterID;nsid=1629726421;c=1689189420190), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-12 19:17:02,998 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xc18a92b91d88380e: Processing first storage report for DS-5eec4685-d73a-42d6-ac4a-10a737b6220c from datanode 8f687cea-8c39-4290-9745-8ce95d46083e 2023-07-12 19:17:02,999 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xc18a92b91d88380e: from storage DS-5eec4685-d73a-42d6-ac4a-10a737b6220c node DatanodeRegistration(127.0.0.1:35389, datanodeUuid=8f687cea-8c39-4290-9745-8ce95d46083e, infoPort=41535, infoSecurePort=0, ipcPort=38847, storageInfo=lv=-57;cid=testClusterID;nsid=1629726421;c=1689189420190), blocks: 0, hasStaleStorage: false, processing time: 1 msecs, invalidatedBlocks: 0 2023-07-12 19:17:02,999 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x4362509d94a25e63: Processing first storage report for 
DS-35e5d520-f9a9-497a-9948-c0577e33fdde from datanode f2bdddcf-6c44-494a-8b6c-0f3758698d6d 2023-07-12 19:17:02,999 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x4362509d94a25e63: from storage DS-35e5d520-f9a9-497a-9948-c0577e33fdde node DatanodeRegistration(127.0.0.1:36015, datanodeUuid=f2bdddcf-6c44-494a-8b6c-0f3758698d6d, infoPort=44911, infoSecurePort=0, ipcPort=39341, storageInfo=lv=-57;cid=testClusterID;nsid=1629726421;c=1689189420190), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-12 19:17:03,199 DEBUG [Listener at localhost.localdomain/34239] hbase.HBaseTestingUtility(649): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/29ea73fb-101e-b512-aded-a1ff34bb26e9 2023-07-12 19:17:03,281 INFO [Listener at localhost.localdomain/34239] zookeeper.MiniZooKeeperCluster(258): Started connectionTimeout=30000, dir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/29ea73fb-101e-b512-aded-a1ff34bb26e9/cluster_71dbf4f1-3f31-d11c-63a5-d05d19764ad1/zookeeper_0, clientPort=52922, secureClientPort=-1, dataDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/29ea73fb-101e-b512-aded-a1ff34bb26e9/cluster_71dbf4f1-3f31-d11c-63a5-d05d19764ad1/zookeeper_0/version-2, dataDirSize=424 dataLogDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/29ea73fb-101e-b512-aded-a1ff34bb26e9/cluster_71dbf4f1-3f31-d11c-63a5-d05d19764ad1/zookeeper_0/version-2, dataLogSize=424 tickTime=2000, maxClientCnxns=300, minSessionTimeout=4000, maxSessionTimeout=40000, serverId=0 2023-07-12 19:17:03,298 INFO [Listener at localhost.localdomain/34239] zookeeper.MiniZooKeeperCluster(283): Started MiniZooKeeperCluster and ran 'stat' on client port=52922 2023-07-12 19:17:03,307 INFO [Listener at localhost.localdomain/34239] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-12 19:17:03,310 INFO [Listener at localhost.localdomain/34239] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-12 19:17:04,042 INFO [Listener at localhost.localdomain/34239] util.FSUtils(471): Created version file at hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0 with version=8 2023-07-12 19:17:04,042 INFO [Listener at localhost.localdomain/34239] hbase.HBaseTestingUtility(1406): Setting hbase.fs.tmp.dir to hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/hbase-staging 2023-07-12 19:17:04,052 DEBUG [Listener at localhost.localdomain/34239] hbase.LocalHBaseCluster(134): Setting Master Port to random. 2023-07-12 19:17:04,052 DEBUG [Listener at localhost.localdomain/34239] hbase.LocalHBaseCluster(141): Setting RegionServer Port to random. 2023-07-12 19:17:04,053 DEBUG [Listener at localhost.localdomain/34239] hbase.LocalHBaseCluster(151): Setting RS InfoServer Port to random. 2023-07-12 19:17:04,053 DEBUG [Listener at localhost.localdomain/34239] hbase.LocalHBaseCluster(159): Setting Master InfoServer Port to random. 
2023-07-12 19:17:04,559 INFO [Listener at localhost.localdomain/34239] metrics.MetricRegistriesLoader(60): Loaded MetricRegistries class org.apache.hadoop.hbase.metrics.impl.MetricRegistriesImpl 2023-07-12 19:17:05,284 INFO [Listener at localhost.localdomain/34239] client.ConnectionUtils(127): master/jenkins-hbase20:0 server-side Connection retries=45 2023-07-12 19:17:05,342 INFO [Listener at localhost.localdomain/34239] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-12 19:17:05,343 INFO [Listener at localhost.localdomain/34239] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-12 19:17:05,343 INFO [Listener at localhost.localdomain/34239] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-12 19:17:05,344 INFO [Listener at localhost.localdomain/34239] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-12 19:17:05,344 INFO [Listener at localhost.localdomain/34239] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-12 19:17:05,536 INFO [Listener at localhost.localdomain/34239] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.MasterService, hbase.pb.RegionServerStatusService, hbase.pb.LockService, hbase.pb.HbckService, hbase.pb.ClientMetaService, hbase.pb.ClientService, hbase.pb.AdminService 2023-07-12 19:17:05,661 DEBUG [Listener at localhost.localdomain/34239] util.ClassSize(228): Using Unsafe to estimate memory layout 2023-07-12 19:17:05,769 INFO [Listener at localhost.localdomain/34239] ipc.NettyRpcServer(120): Bind to /148.251.75.209:33033 2023-07-12 19:17:05,786 INFO [Listener at localhost.localdomain/34239] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-12 19:17:05,789 INFO [Listener at localhost.localdomain/34239] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-12 19:17:05,816 INFO [Listener at localhost.localdomain/34239] zookeeper.RecoverableZooKeeper(93): Process identifier=master:33033 connecting to ZooKeeper ensemble=127.0.0.1:52922 2023-07-12 19:17:05,873 DEBUG [Listener at localhost.localdomain/34239-EventThread] zookeeper.ZKWatcher(600): master:330330x0, quorum=127.0.0.1:52922, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-12 19:17:05,880 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): master:33033-0x100829d951f0000 connected 2023-07-12 19:17:05,938 DEBUG [Listener at localhost.localdomain/34239] zookeeper.ZKUtil(164): master:33033-0x100829d951f0000, quorum=127.0.0.1:52922, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-12 19:17:05,939 DEBUG [Listener at localhost.localdomain/34239] zookeeper.ZKUtil(164): master:33033-0x100829d951f0000, quorum=127.0.0.1:52922, baseZNode=/hbase Set watcher 
on znode that does not yet exist, /hbase/running 2023-07-12 19:17:05,944 DEBUG [Listener at localhost.localdomain/34239] zookeeper.ZKUtil(164): master:33033-0x100829d951f0000, quorum=127.0.0.1:52922, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-12 19:17:05,958 DEBUG [Listener at localhost.localdomain/34239] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=33033 2023-07-12 19:17:05,958 DEBUG [Listener at localhost.localdomain/34239] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=33033 2023-07-12 19:17:05,961 DEBUG [Listener at localhost.localdomain/34239] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=33033 2023-07-12 19:17:05,962 DEBUG [Listener at localhost.localdomain/34239] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=33033 2023-07-12 19:17:05,963 DEBUG [Listener at localhost.localdomain/34239] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=33033 2023-07-12 19:17:06,003 INFO [Listener at localhost.localdomain/34239] log.Log(170): Logging initialized @7976ms to org.apache.hbase.thirdparty.org.eclipse.jetty.util.log.Slf4jLog 2023-07-12 19:17:06,147 INFO [Listener at localhost.localdomain/34239] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-12 19:17:06,148 INFO [Listener at localhost.localdomain/34239] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-12 19:17:06,149 INFO [Listener at localhost.localdomain/34239] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-12 19:17:06,151 INFO [Listener at localhost.localdomain/34239] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context master 2023-07-12 19:17:06,152 INFO [Listener at localhost.localdomain/34239] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-12 19:17:06,152 INFO [Listener at localhost.localdomain/34239] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-12 19:17:06,156 INFO [Listener at localhost.localdomain/34239] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
2023-07-12 19:17:06,221 INFO [Listener at localhost.localdomain/34239] http.HttpServer(1146): Jetty bound to port 42575 2023-07-12 19:17:06,224 INFO [Listener at localhost.localdomain/34239] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-12 19:17:06,270 INFO [Listener at localhost.localdomain/34239] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-12 19:17:06,275 INFO [Listener at localhost.localdomain/34239] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@12d9dc59{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/29ea73fb-101e-b512-aded-a1ff34bb26e9/hadoop.log.dir/,AVAILABLE} 2023-07-12 19:17:06,276 INFO [Listener at localhost.localdomain/34239] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-12 19:17:06,276 INFO [Listener at localhost.localdomain/34239] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@38ff9bc9{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,AVAILABLE} 2023-07-12 19:17:06,343 INFO [Listener at localhost.localdomain/34239] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-12 19:17:06,359 INFO [Listener at localhost.localdomain/34239] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-12 19:17:06,360 INFO [Listener at localhost.localdomain/34239] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-12 19:17:06,362 INFO [Listener at localhost.localdomain/34239] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-12 19:17:06,370 INFO [Listener at localhost.localdomain/34239] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-12 19:17:06,397 INFO [Listener at localhost.localdomain/34239] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@427e7903{master,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/master/,AVAILABLE}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/master} 2023-07-12 19:17:06,411 INFO [Listener at localhost.localdomain/34239] server.AbstractConnector(333): Started ServerConnector@7766b5d1{HTTP/1.1, (http/1.1)}{0.0.0.0:42575} 2023-07-12 19:17:06,411 INFO [Listener at localhost.localdomain/34239] server.Server(415): Started @8384ms 2023-07-12 19:17:06,415 INFO [Listener at localhost.localdomain/34239] master.HMaster(444): hbase.rootdir=hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0, hbase.cluster.distributed=false 2023-07-12 19:17:06,502 INFO [Listener at localhost.localdomain/34239] client.ConnectionUtils(127): regionserver/jenkins-hbase20:0 server-side Connection retries=45 2023-07-12 19:17:06,502 INFO [Listener at localhost.localdomain/34239] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-12 19:17:06,503 INFO [Listener at localhost.localdomain/34239] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, 
maxQueueLength=30, handlerCount=3 2023-07-12 19:17:06,503 INFO [Listener at localhost.localdomain/34239] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-12 19:17:06,503 INFO [Listener at localhost.localdomain/34239] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-12 19:17:06,503 INFO [Listener at localhost.localdomain/34239] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-12 19:17:06,513 INFO [Listener at localhost.localdomain/34239] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-12 19:17:06,518 INFO [Listener at localhost.localdomain/34239] ipc.NettyRpcServer(120): Bind to /148.251.75.209:39963 2023-07-12 19:17:06,521 INFO [Listener at localhost.localdomain/34239] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-12 19:17:06,533 DEBUG [Listener at localhost.localdomain/34239] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-12 19:17:06,535 INFO [Listener at localhost.localdomain/34239] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-12 19:17:06,538 INFO [Listener at localhost.localdomain/34239] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-12 19:17:06,541 INFO [Listener at localhost.localdomain/34239] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:39963 connecting to ZooKeeper ensemble=127.0.0.1:52922 2023-07-12 19:17:06,549 DEBUG [Listener at localhost.localdomain/34239-EventThread] zookeeper.ZKWatcher(600): regionserver:399630x0, quorum=127.0.0.1:52922, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-12 19:17:06,550 DEBUG [Listener at localhost.localdomain/34239] zookeeper.ZKUtil(164): regionserver:399630x0, quorum=127.0.0.1:52922, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-12 19:17:06,555 DEBUG [Listener at localhost.localdomain/34239] zookeeper.ZKUtil(164): regionserver:399630x0, quorum=127.0.0.1:52922, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-12 19:17:06,556 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:39963-0x100829d951f0001 connected 2023-07-12 19:17:06,557 DEBUG [Listener at localhost.localdomain/34239] zookeeper.ZKUtil(164): regionserver:39963-0x100829d951f0001, quorum=127.0.0.1:52922, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-12 19:17:06,566 DEBUG [Listener at localhost.localdomain/34239] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=39963 2023-07-12 19:17:06,566 DEBUG [Listener at localhost.localdomain/34239] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=39963 2023-07-12 19:17:06,570 DEBUG [Listener at localhost.localdomain/34239] 
ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=39963 2023-07-12 19:17:06,574 DEBUG [Listener at localhost.localdomain/34239] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=39963 2023-07-12 19:17:06,575 DEBUG [Listener at localhost.localdomain/34239] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=39963 2023-07-12 19:17:06,579 INFO [Listener at localhost.localdomain/34239] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-12 19:17:06,579 INFO [Listener at localhost.localdomain/34239] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-12 19:17:06,579 INFO [Listener at localhost.localdomain/34239] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-12 19:17:06,581 INFO [Listener at localhost.localdomain/34239] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-12 19:17:06,581 INFO [Listener at localhost.localdomain/34239] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-12 19:17:06,581 INFO [Listener at localhost.localdomain/34239] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-12 19:17:06,581 INFO [Listener at localhost.localdomain/34239] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
2023-07-12 19:17:06,583 INFO [Listener at localhost.localdomain/34239] http.HttpServer(1146): Jetty bound to port 45717 2023-07-12 19:17:06,583 INFO [Listener at localhost.localdomain/34239] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-12 19:17:06,599 INFO [Listener at localhost.localdomain/34239] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-12 19:17:06,599 INFO [Listener at localhost.localdomain/34239] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@146a5e1f{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/29ea73fb-101e-b512-aded-a1ff34bb26e9/hadoop.log.dir/,AVAILABLE} 2023-07-12 19:17:06,600 INFO [Listener at localhost.localdomain/34239] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-12 19:17:06,600 INFO [Listener at localhost.localdomain/34239] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@77d4261c{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,AVAILABLE} 2023-07-12 19:17:06,617 INFO [Listener at localhost.localdomain/34239] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-12 19:17:06,619 INFO [Listener at localhost.localdomain/34239] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-12 19:17:06,619 INFO [Listener at localhost.localdomain/34239] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-12 19:17:06,619 INFO [Listener at localhost.localdomain/34239] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-12 19:17:06,622 INFO [Listener at localhost.localdomain/34239] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-12 19:17:06,626 INFO [Listener at localhost.localdomain/34239] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@50272e17{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver/,AVAILABLE}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver} 2023-07-12 19:17:06,628 INFO [Listener at localhost.localdomain/34239] server.AbstractConnector(333): Started ServerConnector@6fa562a1{HTTP/1.1, (http/1.1)}{0.0.0.0:45717} 2023-07-12 19:17:06,628 INFO [Listener at localhost.localdomain/34239] server.Server(415): Started @8600ms 2023-07-12 19:17:06,642 INFO [Listener at localhost.localdomain/34239] client.ConnectionUtils(127): regionserver/jenkins-hbase20:0 server-side Connection retries=45 2023-07-12 19:17:06,643 INFO [Listener at localhost.localdomain/34239] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-12 19:17:06,643 INFO [Listener at localhost.localdomain/34239] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-12 19:17:06,643 INFO [Listener at localhost.localdomain/34239] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 
scanHandlers=0 2023-07-12 19:17:06,644 INFO [Listener at localhost.localdomain/34239] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-12 19:17:06,644 INFO [Listener at localhost.localdomain/34239] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-12 19:17:06,644 INFO [Listener at localhost.localdomain/34239] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-12 19:17:06,647 INFO [Listener at localhost.localdomain/34239] ipc.NettyRpcServer(120): Bind to /148.251.75.209:43021 2023-07-12 19:17:06,647 INFO [Listener at localhost.localdomain/34239] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-12 19:17:06,649 DEBUG [Listener at localhost.localdomain/34239] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-12 19:17:06,650 INFO [Listener at localhost.localdomain/34239] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-12 19:17:06,651 INFO [Listener at localhost.localdomain/34239] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-12 19:17:06,653 INFO [Listener at localhost.localdomain/34239] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:43021 connecting to ZooKeeper ensemble=127.0.0.1:52922 2023-07-12 19:17:06,658 DEBUG [Listener at localhost.localdomain/34239-EventThread] zookeeper.ZKWatcher(600): regionserver:430210x0, quorum=127.0.0.1:52922, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-12 19:17:06,660 DEBUG [Listener at localhost.localdomain/34239] zookeeper.ZKUtil(164): regionserver:430210x0, quorum=127.0.0.1:52922, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-12 19:17:06,660 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:43021-0x100829d951f0002 connected 2023-07-12 19:17:06,661 DEBUG [Listener at localhost.localdomain/34239] zookeeper.ZKUtil(164): regionserver:43021-0x100829d951f0002, quorum=127.0.0.1:52922, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-12 19:17:06,662 DEBUG [Listener at localhost.localdomain/34239] zookeeper.ZKUtil(164): regionserver:43021-0x100829d951f0002, quorum=127.0.0.1:52922, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-12 19:17:06,663 DEBUG [Listener at localhost.localdomain/34239] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=43021 2023-07-12 19:17:06,667 DEBUG [Listener at localhost.localdomain/34239] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=43021 2023-07-12 19:17:06,667 DEBUG [Listener at localhost.localdomain/34239] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=43021 2023-07-12 19:17:06,668 DEBUG [Listener at localhost.localdomain/34239] ipc.RpcExecutor(311): 
Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=43021 2023-07-12 19:17:06,669 DEBUG [Listener at localhost.localdomain/34239] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=43021 2023-07-12 19:17:06,672 INFO [Listener at localhost.localdomain/34239] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-12 19:17:06,673 INFO [Listener at localhost.localdomain/34239] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-12 19:17:06,673 INFO [Listener at localhost.localdomain/34239] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-12 19:17:06,674 INFO [Listener at localhost.localdomain/34239] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-12 19:17:06,674 INFO [Listener at localhost.localdomain/34239] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-12 19:17:06,674 INFO [Listener at localhost.localdomain/34239] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-12 19:17:06,675 INFO [Listener at localhost.localdomain/34239] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-12 19:17:06,675 INFO [Listener at localhost.localdomain/34239] http.HttpServer(1146): Jetty bound to port 35319 2023-07-12 19:17:06,676 INFO [Listener at localhost.localdomain/34239] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-12 19:17:06,697 INFO [Listener at localhost.localdomain/34239] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-12 19:17:06,697 INFO [Listener at localhost.localdomain/34239] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@93eeee4{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/29ea73fb-101e-b512-aded-a1ff34bb26e9/hadoop.log.dir/,AVAILABLE} 2023-07-12 19:17:06,698 INFO [Listener at localhost.localdomain/34239] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-12 19:17:06,698 INFO [Listener at localhost.localdomain/34239] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@50be8b8f{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,AVAILABLE} 2023-07-12 19:17:06,710 INFO [Listener at localhost.localdomain/34239] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-12 19:17:06,710 INFO [Listener at localhost.localdomain/34239] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-12 19:17:06,711 INFO [Listener at localhost.localdomain/34239] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-12 19:17:06,711 INFO [Listener at 
localhost.localdomain/34239] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-12 19:17:06,712 INFO [Listener at localhost.localdomain/34239] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-12 19:17:06,714 INFO [Listener at localhost.localdomain/34239] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@76231043{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver/,AVAILABLE}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver} 2023-07-12 19:17:06,715 INFO [Listener at localhost.localdomain/34239] server.AbstractConnector(333): Started ServerConnector@442ee3d1{HTTP/1.1, (http/1.1)}{0.0.0.0:35319} 2023-07-12 19:17:06,715 INFO [Listener at localhost.localdomain/34239] server.Server(415): Started @8688ms 2023-07-12 19:17:06,727 INFO [Listener at localhost.localdomain/34239] client.ConnectionUtils(127): regionserver/jenkins-hbase20:0 server-side Connection retries=45 2023-07-12 19:17:06,728 INFO [Listener at localhost.localdomain/34239] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-12 19:17:06,728 INFO [Listener at localhost.localdomain/34239] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-12 19:17:06,728 INFO [Listener at localhost.localdomain/34239] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-12 19:17:06,729 INFO [Listener at localhost.localdomain/34239] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-12 19:17:06,729 INFO [Listener at localhost.localdomain/34239] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-12 19:17:06,729 INFO [Listener at localhost.localdomain/34239] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-12 19:17:06,732 INFO [Listener at localhost.localdomain/34239] ipc.NettyRpcServer(120): Bind to /148.251.75.209:36571 2023-07-12 19:17:06,732 INFO [Listener at localhost.localdomain/34239] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-12 19:17:06,735 DEBUG [Listener at localhost.localdomain/34239] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-12 19:17:06,737 INFO [Listener at localhost.localdomain/34239] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-12 19:17:06,739 INFO [Listener at localhost.localdomain/34239] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-12 19:17:06,742 INFO [Listener at localhost.localdomain/34239] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:36571 
connecting to ZooKeeper ensemble=127.0.0.1:52922 2023-07-12 19:17:06,748 DEBUG [Listener at localhost.localdomain/34239-EventThread] zookeeper.ZKWatcher(600): regionserver:365710x0, quorum=127.0.0.1:52922, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-12 19:17:06,750 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:36571-0x100829d951f0003 connected 2023-07-12 19:17:06,750 DEBUG [Listener at localhost.localdomain/34239] zookeeper.ZKUtil(164): regionserver:36571-0x100829d951f0003, quorum=127.0.0.1:52922, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-12 19:17:06,752 DEBUG [Listener at localhost.localdomain/34239] zookeeper.ZKUtil(164): regionserver:36571-0x100829d951f0003, quorum=127.0.0.1:52922, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-12 19:17:06,753 DEBUG [Listener at localhost.localdomain/34239] zookeeper.ZKUtil(164): regionserver:36571-0x100829d951f0003, quorum=127.0.0.1:52922, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-12 19:17:06,755 DEBUG [Listener at localhost.localdomain/34239] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=36571 2023-07-12 19:17:06,756 DEBUG [Listener at localhost.localdomain/34239] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=36571 2023-07-12 19:17:06,757 DEBUG [Listener at localhost.localdomain/34239] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=36571 2023-07-12 19:17:06,761 DEBUG [Listener at localhost.localdomain/34239] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=36571 2023-07-12 19:17:06,761 DEBUG [Listener at localhost.localdomain/34239] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=36571 2023-07-12 19:17:06,765 INFO [Listener at localhost.localdomain/34239] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-12 19:17:06,765 INFO [Listener at localhost.localdomain/34239] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-12 19:17:06,765 INFO [Listener at localhost.localdomain/34239] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-12 19:17:06,766 INFO [Listener at localhost.localdomain/34239] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-12 19:17:06,766 INFO [Listener at localhost.localdomain/34239] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-12 19:17:06,767 INFO [Listener at localhost.localdomain/34239] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-12 19:17:06,767 INFO [Listener at localhost.localdomain/34239] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
2023-07-12 19:17:06,768 INFO [Listener at localhost.localdomain/34239] http.HttpServer(1146): Jetty bound to port 37339 2023-07-12 19:17:06,768 INFO [Listener at localhost.localdomain/34239] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-12 19:17:06,783 INFO [Listener at localhost.localdomain/34239] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-12 19:17:06,784 INFO [Listener at localhost.localdomain/34239] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@70301262{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/29ea73fb-101e-b512-aded-a1ff34bb26e9/hadoop.log.dir/,AVAILABLE} 2023-07-12 19:17:06,784 INFO [Listener at localhost.localdomain/34239] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-12 19:17:06,784 INFO [Listener at localhost.localdomain/34239] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@2937f1b{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,AVAILABLE} 2023-07-12 19:17:06,793 INFO [Listener at localhost.localdomain/34239] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-12 19:17:06,794 INFO [Listener at localhost.localdomain/34239] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-12 19:17:06,794 INFO [Listener at localhost.localdomain/34239] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-12 19:17:06,795 INFO [Listener at localhost.localdomain/34239] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-12 19:17:06,796 INFO [Listener at localhost.localdomain/34239] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-12 19:17:06,797 INFO [Listener at localhost.localdomain/34239] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@2c66c00c{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver/,AVAILABLE}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver} 2023-07-12 19:17:06,798 INFO [Listener at localhost.localdomain/34239] server.AbstractConnector(333): Started ServerConnector@4ba91794{HTTP/1.1, (http/1.1)}{0.0.0.0:37339} 2023-07-12 19:17:06,798 INFO [Listener at localhost.localdomain/34239] server.Server(415): Started @8771ms 2023-07-12 19:17:06,808 INFO [master/jenkins-hbase20:0:becomeActiveMaster] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-12 19:17:06,812 INFO [master/jenkins-hbase20:0:becomeActiveMaster] server.AbstractConnector(333): Started ServerConnector@4805be52{HTTP/1.1, (http/1.1)}{0.0.0.0:46083} 2023-07-12 19:17:06,812 INFO [master/jenkins-hbase20:0:becomeActiveMaster] server.Server(415): Started @8785ms 2023-07-12 19:17:06,812 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(2168): Adding backup master ZNode /hbase/backup-masters/jenkins-hbase20.apache.org,33033,1689189424308 2023-07-12 19:17:06,825 DEBUG [Listener at localhost.localdomain/34239-EventThread] zookeeper.ZKWatcher(600): 
master:33033-0x100829d951f0000, quorum=127.0.0.1:52922, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-07-12 19:17:06,826 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:33033-0x100829d951f0000, quorum=127.0.0.1:52922, baseZNode=/hbase Set watcher on existing znode=/hbase/backup-masters/jenkins-hbase20.apache.org,33033,1689189424308 2023-07-12 19:17:06,850 DEBUG [Listener at localhost.localdomain/34239-EventThread] zookeeper.ZKWatcher(600): regionserver:36571-0x100829d951f0003, quorum=127.0.0.1:52922, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-12 19:17:06,850 DEBUG [Listener at localhost.localdomain/34239-EventThread] zookeeper.ZKWatcher(600): regionserver:39963-0x100829d951f0001, quorum=127.0.0.1:52922, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-12 19:17:06,850 DEBUG [Listener at localhost.localdomain/34239-EventThread] zookeeper.ZKWatcher(600): master:33033-0x100829d951f0000, quorum=127.0.0.1:52922, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-12 19:17:06,850 DEBUG [Listener at localhost.localdomain/34239-EventThread] zookeeper.ZKWatcher(600): regionserver:43021-0x100829d951f0002, quorum=127.0.0.1:52922, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-12 19:17:06,851 DEBUG [Listener at localhost.localdomain/34239-EventThread] zookeeper.ZKWatcher(600): master:33033-0x100829d951f0000, quorum=127.0.0.1:52922, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-12 19:17:06,852 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:33033-0x100829d951f0000, quorum=127.0.0.1:52922, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-12 19:17:06,855 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.ActiveMasterManager(227): Deleting ZNode for /hbase/backup-masters/jenkins-hbase20.apache.org,33033,1689189424308 from backup master directory 2023-07-12 19:17:06,855 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): master:33033-0x100829d951f0000, quorum=127.0.0.1:52922, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-12 19:17:06,859 DEBUG [Listener at localhost.localdomain/34239-EventThread] zookeeper.ZKWatcher(600): master:33033-0x100829d951f0000, quorum=127.0.0.1:52922, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/backup-masters/jenkins-hbase20.apache.org,33033,1689189424308 2023-07-12 19:17:06,860 DEBUG [Listener at localhost.localdomain/34239-EventThread] zookeeper.ZKWatcher(600): master:33033-0x100829d951f0000, quorum=127.0.0.1:52922, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-07-12 19:17:06,861 WARN [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
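
The sequence above is the active-master handoff: the master registers under /hbase/backup-masters, sees /hbase/master created, and then deletes its own backup-master znode once it has won the race. From a client the outcome can be checked through ClusterMetrics rather than raw ZooKeeper; a minimal sketch using the public Admin API (the quorum value is a placeholder):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.ClusterMetrics;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class ShowActiveMaster {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    conf.set("hbase.zookeeper.quorum", "127.0.0.1");   // placeholder quorum
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Admin admin = conn.getAdmin()) {
      ClusterMetrics metrics = admin.getClusterMetrics();
      System.out.println("active master : " + metrics.getMasterName());
      System.out.println("backup masters: " + metrics.getBackupMasterNames());
    }
  }
}
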
2023-07-12 19:17:06,861 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.ActiveMasterManager(237): Registered as active master=jenkins-hbase20.apache.org,33033,1689189424308 2023-07-12 19:17:06,865 INFO [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.ChunkCreator(497): Allocating data MemStoreChunkPool with chunk size 2 MB, max count 352, initial count 0 2023-07-12 19:17:06,866 INFO [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.ChunkCreator(497): Allocating index MemStoreChunkPool with chunk size 204.80 KB, max count 391, initial count 0 2023-07-12 19:17:06,998 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] util.FSUtils(620): Created cluster ID file at hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/hbase.id with ID: 78e9d107-58d5-4c9c-92d3-0848dd0d4f4d 2023-07-12 19:17:07,057 INFO [master/jenkins-hbase20:0:becomeActiveMaster] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-12 19:17:07,081 DEBUG [Listener at localhost.localdomain/34239-EventThread] zookeeper.ZKWatcher(600): master:33033-0x100829d951f0000, quorum=127.0.0.1:52922, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-12 19:17:07,143 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ReadOnlyZKClient(139): Connect 0x1c69f937 to 127.0.0.1:52922 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-12 19:17:07,170 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@6ed394, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-12 19:17:07,195 INFO [master/jenkins-hbase20:0:becomeActiveMaster] region.MasterRegion(309): Create or load local region for table 'master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-12 19:17:07,197 INFO [master/jenkins-hbase20:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(132): Injected flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000 2023-07-12 19:17:07,219 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] asyncfs.FanOutOneBlockAsyncDFSOutputHelper(264): ClientProtocol::create wrong number of arguments, should be hadoop 3.2 or below 2023-07-12 19:17:07,219 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] asyncfs.FanOutOneBlockAsyncDFSOutputHelper(270): ClientProtocol::create wrong number of arguments, should be hadoop 2.x 2023-07-12 19:17:07,221 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] asyncfs.FanOutOneBlockAsyncDFSOutputHelper(279): can not find SHOULD_REPLICATE flag, should be hadoop 2.x java.lang.IllegalArgumentException: No enum constant org.apache.hadoop.fs.CreateFlag.SHOULD_REPLICATE at java.lang.Enum.valueOf(Enum.java:238) at org.apache.hadoop.fs.CreateFlag.valueOf(CreateFlag.java:63) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.loadShouldReplicateFlag(FanOutOneBlockAsyncDFSOutputHelper.java:277) at 
org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.(FanOutOneBlockAsyncDFSOutputHelper.java:304) at java.lang.Class.forName0(Native Method) at java.lang.Class.forName(Class.java:264) at org.apache.hadoop.hbase.wal.AsyncFSWALProvider.load(AsyncFSWALProvider.java:139) at org.apache.hadoop.hbase.wal.WALFactory.getProviderClass(WALFactory.java:135) at org.apache.hadoop.hbase.wal.WALFactory.getProvider(WALFactory.java:175) at org.apache.hadoop.hbase.wal.WALFactory.(WALFactory.java:202) at org.apache.hadoop.hbase.wal.WALFactory.(WALFactory.java:182) at org.apache.hadoop.hbase.master.region.MasterRegion.create(MasterRegion.java:339) at org.apache.hadoop.hbase.master.region.MasterRegionFactory.create(MasterRegionFactory.java:104) at org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:855) at org.apache.hadoop.hbase.master.HMaster.startActiveMasterManager(HMaster.java:2193) at org.apache.hadoop.hbase.master.HMaster.lambda$run$0(HMaster.java:528) at java.lang.Thread.run(Thread.java:750) 2023-07-12 19:17:07,226 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(243): No decryptEncryptedDataEncryptionKey method in DFSClient, should be hadoop version with HDFS-12396 java.lang.NoSuchMethodException: org.apache.hadoop.hdfs.DFSClient.decryptEncryptedDataEncryptionKey(org.apache.hadoop.fs.FileEncryptionInfo) at java.lang.Class.getDeclaredMethod(Class.java:2130) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper.createTransparentCryptoHelperWithoutHDFS12396(FanOutOneBlockAsyncDFSOutputSaslHelper.java:182) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper.createTransparentCryptoHelper(FanOutOneBlockAsyncDFSOutputSaslHelper.java:241) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper.(FanOutOneBlockAsyncDFSOutputSaslHelper.java:252) at java.lang.Class.forName0(Native Method) at java.lang.Class.forName(Class.java:264) at org.apache.hadoop.hbase.wal.AsyncFSWALProvider.load(AsyncFSWALProvider.java:140) at org.apache.hadoop.hbase.wal.WALFactory.getProviderClass(WALFactory.java:135) at org.apache.hadoop.hbase.wal.WALFactory.getProvider(WALFactory.java:175) at org.apache.hadoop.hbase.wal.WALFactory.(WALFactory.java:202) at org.apache.hadoop.hbase.wal.WALFactory.(WALFactory.java:182) at org.apache.hadoop.hbase.master.region.MasterRegion.create(MasterRegion.java:339) at org.apache.hadoop.hbase.master.region.MasterRegionFactory.create(MasterRegionFactory.java:104) at org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:855) at org.apache.hadoop.hbase.master.HMaster.startActiveMasterManager(HMaster.java:2193) at org.apache.hadoop.hbase.master.HMaster.lambda$run$0(HMaster.java:528) at java.lang.Thread.run(Thread.java:750) 2023-07-12 19:17:07,227 INFO [master/jenkins-hbase20:0:becomeActiveMaster] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-12 19:17:07,269 INFO [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(7693): Creating {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => 
'65536', REPLICATION_SCOPE => '0'}, under table dir hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/MasterData/data/master/store-tmp 2023-07-12 19:17:07,314 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 19:17:07,315 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-07-12 19:17:07,315 INFO [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-12 19:17:07,315 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-12 19:17:07,315 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-07-12 19:17:07,315 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-12 19:17:07,315 INFO [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-12 19:17:07,315 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-12 19:17:07,320 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] region.MasterRegion(191): WALDir=hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/MasterData/WALs/jenkins-hbase20.apache.org,33033,1689189424308 2023-07-12 19:17:07,343 INFO [master/jenkins-hbase20:0:becomeActiveMaster] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase20.apache.org%2C33033%2C1689189424308, suffix=, logDir=hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/MasterData/WALs/jenkins-hbase20.apache.org,33033,1689189424308, archiveDir=hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/MasterData/oldWALs, maxLogs=10 2023-07-12 19:17:07,420 DEBUG [RS-EventLoopGroup-5-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:36015,DS-c010aef2-0c14-458c-b1aa-5ac124bdef5d,DISK] 2023-07-12 19:17:07,421 DEBUG [RS-EventLoopGroup-5-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:35389,DS-47e5ebb6-1f77-4af6-bdfc-1e0f975f2d77,DISK] 2023-07-12 19:17:07,420 DEBUG [RS-EventLoopGroup-5-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:42393,DS-486a4b59-ff70-4f76-965f-28d3762f2281,DISK] 2023-07-12 19:17:07,431 DEBUG [RS-EventLoopGroup-5-3] asyncfs.ProtobufDecoder(123): Hadoop 3.2 and below use unshaded protobuf. 
java.lang.ClassNotFoundException: org.apache.hadoop.thirdparty.protobuf.MessageLite at java.net.URLClassLoader.findClass(URLClassLoader.java:387) at java.lang.ClassLoader.loadClass(ClassLoader.java:418) at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:352) at java.lang.ClassLoader.loadClass(ClassLoader.java:351) at java.lang.Class.forName0(Native Method) at java.lang.Class.forName(Class.java:264) at org.apache.hadoop.hbase.io.asyncfs.ProtobufDecoder.(ProtobufDecoder.java:118) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.processWriteBlockResponse(FanOutOneBlockAsyncDFSOutputHelper.java:340) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.access$100(FanOutOneBlockAsyncDFSOutputHelper.java:112) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper$4.operationComplete(FanOutOneBlockAsyncDFSOutputHelper.java:424) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:590) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:557) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:492) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.addListener(DefaultPromise.java:185) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.initialize(FanOutOneBlockAsyncDFSOutputHelper.java:418) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.access$300(FanOutOneBlockAsyncDFSOutputHelper.java:112) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper$5.operationComplete(FanOutOneBlockAsyncDFSOutputHelper.java:476) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper$5.operationComplete(FanOutOneBlockAsyncDFSOutputHelper.java:471) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:590) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListeners0(DefaultPromise.java:583) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:559) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:492) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.setValue0(DefaultPromise.java:636) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.setSuccess0(DefaultPromise.java:625) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.trySuccess(DefaultPromise.java:105) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPromise.trySuccess(DefaultChannelPromise.java:84) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.fulfillConnectPromise(AbstractEpollChannel.java:653) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.finishConnect(AbstractEpollChannel.java:691) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.epollOutReady(AbstractEpollChannel.java:567) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:489) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:397) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at 
org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) at java.lang.Thread.run(Thread.java:750) 2023-07-12 19:17:07,511 INFO [master/jenkins-hbase20:0:becomeActiveMaster] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/MasterData/WALs/jenkins-hbase20.apache.org,33033,1689189424308/jenkins-hbase20.apache.org%2C33033%2C1689189424308.1689189427354 2023-07-12 19:17:07,512 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:36015,DS-c010aef2-0c14-458c-b1aa-5ac124bdef5d,DISK], DatanodeInfoWithStorage[127.0.0.1:35389,DS-47e5ebb6-1f77-4af6-bdfc-1e0f975f2d77,DISK], DatanodeInfoWithStorage[127.0.0.1:42393,DS-486a4b59-ff70-4f76-965f-28d3762f2281,DISK]] 2023-07-12 19:17:07,512 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(7854): Opening region: {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''} 2023-07-12 19:17:07,513 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 19:17:07,516 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(7894): checking encryption for 1595e783b53d99cd5eef43b6debb2682 2023-07-12 19:17:07,518 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(7897): checking classloading for 1595e783b53d99cd5eef43b6debb2682 2023-07-12 19:17:07,591 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family proc of region 1595e783b53d99cd5eef43b6debb2682 2023-07-12 19:17:07,600 DEBUG [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc 2023-07-12 19:17:07,631 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1595e783b53d99cd5eef43b6debb2682 columnFamilyName proc 2023-07-12 19:17:07,645 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(310): Store=1595e783b53d99cd5eef43b6debb2682/proc, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, 
encoding=NONE, compression=NONE 2023-07-12 19:17:07,650 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-07-12 19:17:07,653 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-07-12 19:17:07,670 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1055): writing seq id for 1595e783b53d99cd5eef43b6debb2682 2023-07-12 19:17:07,674 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-12 19:17:07,675 INFO [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1072): Opened 1595e783b53d99cd5eef43b6debb2682; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10888782720, jitterRate=0.01409691572189331}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-12 19:17:07,675 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(965): Region open journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-12 19:17:07,676 INFO [master/jenkins-hbase20:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(122): Constructor flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000, compactMin=4 2023-07-12 19:17:07,703 INFO [master/jenkins-hbase20:0:becomeActiveMaster] region.RegionProcedureStore(104): Starting the Region Procedure Store, number threads=5 2023-07-12 19:17:07,703 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(562): Starting 5 core workers (bigger of cpus/4 or 16) with max (burst) worker count=50 2023-07-12 19:17:07,707 INFO [master/jenkins-hbase20:0:becomeActiveMaster] region.RegionProcedureStore(255): Starting Region Procedure Store lease recovery... 2023-07-12 19:17:07,710 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(582): Recovered RegionProcedureStore lease in 1 msec 2023-07-12 19:17:07,744 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(596): Loaded RegionProcedureStore in 34 msec 2023-07-12 19:17:07,744 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.RemoteProcedureDispatcher(96): Instantiated, coreThreads=3 (allowCoreThreadTimeOut=true), queueMaxSize=32, operationDelay=150 2023-07-12 19:17:07,777 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] assignment.AssignmentManager(253): hbase:meta replica znodes: [] 2023-07-12 19:17:07,784 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.RegionServerTracker(124): Starting RegionServerTracker; 0 have existing ServerCrashProcedures, 0 possibly 'live' servers, and 0 'splitting'. 
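
The region-open journal above reports the split-policy chain (SteppingSplitPolicy over IncreasingToUpperBoundRegionSplitPolicy over ConstantSizeRegionSplitPolicy, with desiredMaxFileSize of roughly 10 GB plus jitter) chosen for the master's local store region. For ordinary user tables the equivalent knobs are usually set on the table descriptor; a minimal sketch with the public builder API (table name and sizes are illustrative, not from this run):

import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.TableDescriptor;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;

class SplitPolicyExample {
  static TableDescriptor build() {
    return TableDescriptorBuilder.newBuilder(TableName.valueOf("example_table"))
        .setColumnFamily(ColumnFamilyDescriptorBuilder.of("cf"))
        // Region splits at roughly this store size (the run above used ~10 GB plus jitter).
        .setMaxFileSize(10L * 1024 * 1024 * 1024)
        // Split policy class, as reported in the region-open journal above.
        .setRegionSplitPolicyClassName(
            "org.apache.hadoop.hbase.regionserver.SteppingSplitPolicy")
        .build();
  }
}
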
2023-07-12 19:17:07,795 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:33033-0x100829d951f0000, quorum=127.0.0.1:52922, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/balancer 2023-07-12 19:17:07,803 INFO [master/jenkins-hbase20:0:becomeActiveMaster] normalizer.RegionNormalizerWorker(118): Normalizer rate limit set to unlimited 2023-07-12 19:17:07,812 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:33033-0x100829d951f0000, quorum=127.0.0.1:52922, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/normalizer 2023-07-12 19:17:07,815 DEBUG [Listener at localhost.localdomain/34239-EventThread] zookeeper.ZKWatcher(600): master:33033-0x100829d951f0000, quorum=127.0.0.1:52922, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-12 19:17:07,816 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:33033-0x100829d951f0000, quorum=127.0.0.1:52922, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/split 2023-07-12 19:17:07,817 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:33033-0x100829d951f0000, quorum=127.0.0.1:52922, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/merge 2023-07-12 19:17:07,834 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:33033-0x100829d951f0000, quorum=127.0.0.1:52922, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/snapshot-cleanup 2023-07-12 19:17:07,839 DEBUG [Listener at localhost.localdomain/34239-EventThread] zookeeper.ZKWatcher(600): regionserver:36571-0x100829d951f0003, quorum=127.0.0.1:52922, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-12 19:17:07,839 DEBUG [Listener at localhost.localdomain/34239-EventThread] zookeeper.ZKWatcher(600): master:33033-0x100829d951f0000, quorum=127.0.0.1:52922, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-12 19:17:07,839 DEBUG [Listener at localhost.localdomain/34239-EventThread] zookeeper.ZKWatcher(600): regionserver:43021-0x100829d951f0002, quorum=127.0.0.1:52922, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-12 19:17:07,839 DEBUG [Listener at localhost.localdomain/34239-EventThread] zookeeper.ZKWatcher(600): regionserver:39963-0x100829d951f0001, quorum=127.0.0.1:52922, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-12 19:17:07,839 DEBUG [Listener at localhost.localdomain/34239-EventThread] zookeeper.ZKWatcher(600): master:33033-0x100829d951f0000, quorum=127.0.0.1:52922, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-12 19:17:07,840 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(744): Active/primary master=jenkins-hbase20.apache.org,33033,1689189424308, sessionid=0x100829d951f0000, setting cluster-up flag (Was=false) 2023-07-12 19:17:07,861 DEBUG [Listener at localhost.localdomain/34239-EventThread] zookeeper.ZKWatcher(600): master:33033-0x100829d951f0000, quorum=127.0.0.1:52922, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-12 19:17:07,866 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] 
procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/flush-table-proc/acquired, /hbase/flush-table-proc/reached, /hbase/flush-table-proc/abort 2023-07-12 19:17:07,868 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase20.apache.org,33033,1689189424308 2023-07-12 19:17:07,881 DEBUG [Listener at localhost.localdomain/34239-EventThread] zookeeper.ZKWatcher(600): master:33033-0x100829d951f0000, quorum=127.0.0.1:52922, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-12 19:17:07,884 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/online-snapshot/acquired, /hbase/online-snapshot/reached, /hbase/online-snapshot/abort 2023-07-12 19:17:07,886 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase20.apache.org,33033,1689189424308 2023-07-12 19:17:07,888 WARN [master/jenkins-hbase20:0:becomeActiveMaster] snapshot.SnapshotManager(302): Couldn't delete working snapshot directory: hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/.hbase-snapshot/.tmp 2023-07-12 19:17:07,907 INFO [RS:0;jenkins-hbase20:39963] regionserver.HRegionServer(951): ClusterId : 78e9d107-58d5-4c9c-92d3-0848dd0d4f4d 2023-07-12 19:17:07,911 INFO [RS:1;jenkins-hbase20:43021] regionserver.HRegionServer(951): ClusterId : 78e9d107-58d5-4c9c-92d3-0848dd0d4f4d 2023-07-12 19:17:07,911 INFO [RS:2;jenkins-hbase20:36571] regionserver.HRegionServer(951): ClusterId : 78e9d107-58d5-4c9c-92d3-0848dd0d4f4d 2023-07-12 19:17:07,923 DEBUG [RS:0;jenkins-hbase20:39963] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-12 19:17:07,927 DEBUG [RS:1;jenkins-hbase20:43021] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-12 19:17:07,927 DEBUG [RS:2;jenkins-hbase20:36571] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-12 19:17:07,932 DEBUG [RS:0;jenkins-hbase20:39963] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-12 19:17:07,932 DEBUG [RS:1;jenkins-hbase20:43021] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-12 19:17:07,932 DEBUG [RS:2;jenkins-hbase20:36571] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-12 19:17:07,932 DEBUG [RS:1;jenkins-hbase20:43021] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-12 19:17:07,932 DEBUG [RS:0;jenkins-hbase20:39963] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-12 19:17:07,932 DEBUG [RS:2;jenkins-hbase20:36571] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-12 19:17:07,936 DEBUG [RS:0;jenkins-hbase20:39963] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-12 19:17:07,937 DEBUG [RS:1;jenkins-hbase20:43021] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-12 19:17:07,937 DEBUG [RS:2;jenkins-hbase20:36571] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-12 19:17:07,938 DEBUG [RS:0;jenkins-hbase20:39963] 
zookeeper.ReadOnlyZKClient(139): Connect 0x5b93cbc6 to 127.0.0.1:52922 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-12 19:17:07,939 DEBUG [RS:2;jenkins-hbase20:36571] zookeeper.ReadOnlyZKClient(139): Connect 0x370a8bb1 to 127.0.0.1:52922 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-12 19:17:07,939 DEBUG [RS:1;jenkins-hbase20:43021] zookeeper.ReadOnlyZKClient(139): Connect 0x736bb31c to 127.0.0.1:52922 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-12 19:17:07,953 DEBUG [RS:1;jenkins-hbase20:43021] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@7dac5301, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-12 19:17:07,954 DEBUG [RS:1;jenkins-hbase20:43021] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@7993d86c, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase20.apache.org/148.251.75.209:0 2023-07-12 19:17:07,955 DEBUG [RS:0;jenkins-hbase20:39963] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@55543ec4, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-12 19:17:07,955 DEBUG [RS:0;jenkins-hbase20:39963] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@516ba0b6, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase20.apache.org/148.251.75.209:0 2023-07-12 19:17:07,955 DEBUG [RS:2;jenkins-hbase20:36571] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@39cd6320, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-12 19:17:07,956 DEBUG [RS:2;jenkins-hbase20:36571] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@3df2e0c1, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase20.apache.org/148.251.75.209:0 2023-07-12 19:17:07,990 DEBUG [RS:0;jenkins-hbase20:39963] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:0;jenkins-hbase20:39963 2023-07-12 19:17:07,998 DEBUG [RS:1;jenkins-hbase20:43021] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:1;jenkins-hbase20:43021 2023-07-12 19:17:07,994 DEBUG [RS:2;jenkins-hbase20:36571] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:2;jenkins-hbase20:36571 2023-07-12 19:17:07,999 INFO [RS:2;jenkins-hbase20:36571] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-12 19:17:07,999 INFO [RS:0;jenkins-hbase20:39963] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-12 19:17:08,000 INFO [RS:0;jenkins-hbase20:39963] 
regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-12 19:17:08,000 DEBUG [RS:0;jenkins-hbase20:39963] regionserver.HRegionServer(1022): About to register with Master. 2023-07-12 19:17:08,000 INFO [RS:1;jenkins-hbase20:43021] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-12 19:17:08,001 INFO [RS:1;jenkins-hbase20:43021] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-12 19:17:08,001 DEBUG [RS:1;jenkins-hbase20:43021] regionserver.HRegionServer(1022): About to register with Master. 2023-07-12 19:17:08,002 INFO [RS:2;jenkins-hbase20:36571] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-12 19:17:08,003 DEBUG [RS:2;jenkins-hbase20:36571] regionserver.HRegionServer(1022): About to register with Master. 2023-07-12 19:17:08,005 INFO [RS:0;jenkins-hbase20:39963] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase20.apache.org,33033,1689189424308 with isa=jenkins-hbase20.apache.org/148.251.75.209:39963, startcode=1689189426501 2023-07-12 19:17:08,005 INFO [RS:2;jenkins-hbase20:36571] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase20.apache.org,33033,1689189424308 with isa=jenkins-hbase20.apache.org/148.251.75.209:36571, startcode=1689189426727 2023-07-12 19:17:08,010 INFO [RS:1;jenkins-hbase20:43021] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase20.apache.org,33033,1689189424308 with isa=jenkins-hbase20.apache.org/148.251.75.209:43021, startcode=1689189426641 2023-07-12 19:17:08,011 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(2930): Registered master coprocessor service: service=RSGroupAdminService 2023-07-12 19:17:08,024 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] rsgroup.RSGroupInfoManagerImpl(537): Refreshing in Offline mode. 2023-07-12 19:17:08,031 DEBUG [RS:1;jenkins-hbase20:43021] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-12 19:17:08,031 DEBUG [RS:0;jenkins-hbase20:39963] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-12 19:17:08,050 INFO [master/jenkins-hbase20:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint loaded, priority=536870911. 2023-07-12 19:17:08,051 INFO [master/jenkins-hbase20:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver loaded, priority=536870912. 2023-07-12 19:17:08,031 DEBUG [RS:2;jenkins-hbase20:36571] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-12 19:17:08,051 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase20.apache.org,33033,1689189424308] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
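
The master lines above show the RSGroupAdminEndpoint system coprocessor being loaded and the RSGroupAdminService being registered, which is what the rest of this rsgroup test exercises. On HBase 2.x the feature is normally switched on with two configuration properties; a minimal sketch (property names and values as documented for the hbase-rsgroup module):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

class EnableRsGroups {
  static Configuration build() {
    Configuration conf = HBaseConfiguration.create();
    // Load the rsgroup admin endpoint on the master and use the group-aware balancer.
    conf.set("hbase.coprocessor.master.classes",
        "org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint");
    conf.set("hbase.master.loadbalancer.class",
        "org.apache.hadoop.hbase.rsgroup.RSGroupBasedLoadBalancer");
    return conf;
  }
}
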
2023-07-12 19:17:08,135 INFO [RS-EventLoopGroup-1-2] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:39371, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.0 (auth:SIMPLE), service=RegionServerStatusService 2023-07-12 19:17:08,135 INFO [RS-EventLoopGroup-1-3] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:36619, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.2 (auth:SIMPLE), service=RegionServerStatusService 2023-07-12 19:17:08,135 INFO [RS-EventLoopGroup-1-1] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:36527, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.1 (auth:SIMPLE), service=RegionServerStatusService 2023-07-12 19:17:08,152 DEBUG [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=33033] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.ipc.ServerNotRunningYetException: Server is not running yet at org.apache.hadoop.hbase.master.HMaster.checkServiceStarted(HMaster.java:2832) at org.apache.hadoop.hbase.master.MasterRpcServices.regionServerStartup(MasterRpcServices.java:579) at org.apache.hadoop.hbase.shaded.protobuf.generated.RegionServerStatusProtos$RegionServerStatusService$2.callBlockingMethod(RegionServerStatusProtos.java:15952) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-12 19:17:08,167 DEBUG [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=33033] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.ipc.ServerNotRunningYetException: Server is not running yet at org.apache.hadoop.hbase.master.HMaster.checkServiceStarted(HMaster.java:2832) at org.apache.hadoop.hbase.master.MasterRpcServices.regionServerStartup(MasterRpcServices.java:579) at org.apache.hadoop.hbase.shaded.protobuf.generated.RegionServerStatusProtos$RegionServerStatusService$2.callBlockingMethod(RegionServerStatusProtos.java:15952) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-12 19:17:08,167 DEBUG [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=33033] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.ipc.ServerNotRunningYetException: Server is not running yet at org.apache.hadoop.hbase.master.HMaster.checkServiceStarted(HMaster.java:2832) at org.apache.hadoop.hbase.master.MasterRpcServices.regionServerStartup(MasterRpcServices.java:579) at org.apache.hadoop.hbase.shaded.protobuf.generated.RegionServerStatusProtos$RegionServerStatusService$2.callBlockingMethod(RegionServerStatusProtos.java:15952) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-12 19:17:08,187 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT; InitMetaProcedure table=hbase:meta 2023-07-12 19:17:08,206 DEBUG [RS:0;jenkins-hbase20:39963] regionserver.HRegionServer(2830): 
Master is not running yet 2023-07-12 19:17:08,206 DEBUG [RS:2;jenkins-hbase20:36571] regionserver.HRegionServer(2830): Master is not running yet 2023-07-12 19:17:08,206 WARN [RS:0;jenkins-hbase20:39963] regionserver.HRegionServer(1030): reportForDuty failed; sleeping 100 ms and then retrying. 2023-07-12 19:17:08,206 DEBUG [RS:1;jenkins-hbase20:43021] regionserver.HRegionServer(2830): Master is not running yet 2023-07-12 19:17:08,206 WARN [RS:2;jenkins-hbase20:36571] regionserver.HRegionServer(1030): reportForDuty failed; sleeping 100 ms and then retrying. 2023-07-12 19:17:08,206 WARN [RS:1;jenkins-hbase20:43021] regionserver.HRegionServer(1030): reportForDuty failed; sleeping 100 ms and then retrying. 2023-07-12 19:17:08,243 INFO [master/jenkins-hbase20:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-12 19:17:08,254 INFO [master/jenkins-hbase20:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 2023-07-12 19:17:08,255 INFO [master/jenkins-hbase20:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-12 19:17:08,255 INFO [master/jenkins-hbase20:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 
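
The two StochasticLoadBalancer lines above print the balancer tunables that were loaded (maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000). A minimal sketch of setting the same values explicitly, assuming the stock property names (the names themselves are not printed in this log):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

class BalancerTuning {
  static Configuration build() {
    Configuration conf = HBaseConfiguration.create();
    // Assumed stock property names; the values mirror the "Loaded config" lines above.
    conf.setInt("hbase.master.balancer.stochastic.maxSteps", 1_000_000);
    conf.setBoolean("hbase.master.balancer.stochastic.runMaxSteps", false);
    conf.setInt("hbase.master.balancer.stochastic.stepsPerRegion", 800);
    conf.setLong("hbase.master.balancer.stochastic.maxRunningTime", 30_000L);
    return conf;
  }
}
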
2023-07-12 19:17:08,257 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_OPEN_REGION-master/jenkins-hbase20:0, corePoolSize=5, maxPoolSize=5 2023-07-12 19:17:08,257 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_CLOSE_REGION-master/jenkins-hbase20:0, corePoolSize=5, maxPoolSize=5 2023-07-12 19:17:08,257 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SERVER_OPERATIONS-master/jenkins-hbase20:0, corePoolSize=5, maxPoolSize=5 2023-07-12 19:17:08,257 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_META_SERVER_OPERATIONS-master/jenkins-hbase20:0, corePoolSize=5, maxPoolSize=5 2023-07-12 19:17:08,258 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=M_LOG_REPLAY_OPS-master/jenkins-hbase20:0, corePoolSize=10, maxPoolSize=10 2023-07-12 19:17:08,258 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-12 19:17:08,258 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_MERGE_OPERATIONS-master/jenkins-hbase20:0, corePoolSize=2, maxPoolSize=2 2023-07-12 19:17:08,258 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_TABLE_OPERATIONS-master/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-12 19:17:08,291 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner; timeout=30000, timestamp=1689189458291 2023-07-12 19:17:08,294 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT, locked=true; InitMetaProcedure table=hbase:meta 2023-07-12 19:17:08,295 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.DirScanPool(70): log_cleaner Cleaner pool size is 1 2023-07-12 19:17:08,295 INFO [PEWorker-1] procedure.InitMetaProcedure(71): BOOTSTRAP: creating hbase:meta region 2023-07-12 19:17:08,298 INFO [PEWorker-1] util.FSTableDescriptors(128): Creating new hbase:meta table descriptor 'hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-07-12 19:17:08,300 INFO 
[master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner 2023-07-12 19:17:08,308 INFO [RS:2;jenkins-hbase20:36571] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase20.apache.org,33033,1689189424308 with isa=jenkins-hbase20.apache.org/148.251.75.209:36571, startcode=1689189426727 2023-07-12 19:17:08,313 DEBUG [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=33033] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.ipc.ServerNotRunningYetException: Server is not running yet at org.apache.hadoop.hbase.master.HMaster.checkServiceStarted(HMaster.java:2832) at org.apache.hadoop.hbase.master.MasterRpcServices.regionServerStartup(MasterRpcServices.java:579) at org.apache.hadoop.hbase.shaded.protobuf.generated.RegionServerStatusProtos$RegionServerStatusService$2.callBlockingMethod(RegionServerStatusProtos.java:15952) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-12 19:17:08,308 INFO [RS:1;jenkins-hbase20:43021] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase20.apache.org,33033,1689189424308 with isa=jenkins-hbase20.apache.org/148.251.75.209:43021, startcode=1689189426641 2023-07-12 19:17:08,316 DEBUG [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=33033] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.ipc.ServerNotRunningYetException: Server is not running yet at org.apache.hadoop.hbase.master.HMaster.checkServiceStarted(HMaster.java:2832) at org.apache.hadoop.hbase.master.MasterRpcServices.regionServerStartup(MasterRpcServices.java:579) at org.apache.hadoop.hbase.shaded.protobuf.generated.RegionServerStatusProtos$RegionServerStatusService$2.callBlockingMethod(RegionServerStatusProtos.java:15952) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-12 19:17:08,317 INFO [RS:0;jenkins-hbase20:39963] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase20.apache.org,33033,1689189424308 with isa=jenkins-hbase20.apache.org/148.251.75.209:39963, startcode=1689189426501 2023-07-12 19:17:08,319 DEBUG [RS:2;jenkins-hbase20:36571] regionserver.HRegionServer(2830): Master is not running yet 2023-07-12 19:17:08,319 WARN [RS:2;jenkins-hbase20:36571] regionserver.HRegionServer(1030): reportForDuty failed; sleeping 200 ms and then retrying. 2023-07-12 19:17:08,322 DEBUG [RS:1;jenkins-hbase20:43021] regionserver.HRegionServer(2830): Master is not running yet 2023-07-12 19:17:08,322 WARN [RS:1;jenkins-hbase20:43021] regionserver.HRegionServer(1030): reportForDuty failed; sleeping 200 ms and then retrying. 
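
While the master is still initialising, every reportForDuty attempt fails with ServerNotRunningYetException and the region servers sleep 100 ms, then 200 ms, before retrying. The underlying pattern is bounded retry with an increasing sleep; a generic sketch of that pattern only (this is not HBase's internal code, and the sleep progression here is illustrative):

import java.util.concurrent.TimeUnit;
import java.util.function.Supplier;

final class RetryWithBackoff {
  /** Retries the call, doubling the sleep after each failure, until it succeeds or attempts run out. */
  static <T> T retry(Supplier<T> call, long initialSleepMs, int maxAttempts)
      throws InterruptedException {
    long sleepMs = initialSleepMs;
    RuntimeException last = null;
    for (int attempt = 1; attempt <= maxAttempts; attempt++) {
      try {
        return call.get();
      } catch (RuntimeException e) {            // e.g. a "not running yet" style failure
        last = e;
        TimeUnit.MILLISECONDS.sleep(sleepMs);   // 100 ms, then 200 ms, then 400 ms, ...
        sleepMs *= 2;
      }
    }
    throw last != null ? last : new IllegalArgumentException("maxAttempts must be >= 1");
  }
}
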
2023-07-12 19:17:08,324 DEBUG [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=33033] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.ipc.ServerNotRunningYetException: Server is not running yet at org.apache.hadoop.hbase.master.HMaster.checkServiceStarted(HMaster.java:2832) at org.apache.hadoop.hbase.master.MasterRpcServices.regionServerStartup(MasterRpcServices.java:579) at org.apache.hadoop.hbase.shaded.protobuf.generated.RegionServerStatusProtos$RegionServerStatusService$2.callBlockingMethod(RegionServerStatusProtos.java:15952) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-12 19:17:08,326 DEBUG [RS:0;jenkins-hbase20:39963] regionserver.HRegionServer(2830): Master is not running yet 2023-07-12 19:17:08,326 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner 2023-07-12 19:17:08,326 WARN [RS:0;jenkins-hbase20:39963] regionserver.HRegionServer(1030): reportForDuty failed; sleeping 200 ms and then retrying. 2023-07-12 19:17:08,327 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreWALCleaner 2023-07-12 19:17:08,328 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveProcedureWALCleaner 2023-07-12 19:17:08,328 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.LogCleaner(148): Creating 1 old WALs cleaner threads 2023-07-12 19:17:08,330 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=LogsCleaner, period=600000, unit=MILLISECONDS is enabled. 
2023-07-12 19:17:08,332 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.DirScanPool(70): hfile_cleaner Cleaner pool size is 2 2023-07-12 19:17:08,335 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreHFileCleaner 2023-07-12 19:17:08,335 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.HFileLinkCleaner 2023-07-12 19:17:08,339 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.snapshot.SnapshotHFileCleaner 2023-07-12 19:17:08,340 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner 2023-07-12 19:17:08,347 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.HFileCleaner(242): Starting for large file=Thread[master/jenkins-hbase20:0:becomeActiveMaster-HFileCleaner.large.0-1689189428342,5,FailOnTimeoutGroup] 2023-07-12 19:17:08,348 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.HFileCleaner(257): Starting for small files=Thread[master/jenkins-hbase20:0:becomeActiveMaster-HFileCleaner.small.0-1689189428347,5,FailOnTimeoutGroup] 2023-07-12 19:17:08,348 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HFileCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-07-12 19:17:08,348 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(1461): Reopening regions with very high storeFileRefCount is disabled. Provide threshold value > 0 for hbase.regions.recovery.store.file.ref.count to enable it. 2023-07-12 19:17:08,351 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=ReplicationBarrierCleaner, period=43200000, unit=MILLISECONDS is enabled. 2023-07-12 19:17:08,351 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=SnapshotCleaner, period=1800000, unit=MILLISECONDS is enabled. 
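
The cleaner wiring above registers TTL-based cleaners for old WALs and archived HFiles (TimeToLiveLogCleaner, TimeToLiveHFileCleaner) alongside HFileLinkCleaner, SnapshotHFileCleaner and the ReplicationBarrierCleaner/SnapshotCleaner chores. Retention for the TTL cleaners is configuration-driven; a minimal sketch, assuming the stock property names, which are not printed in this log:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

class CleanerTtlConfig {
  static Configuration build() {
    Configuration conf = HBaseConfiguration.create();
    // Assumed stock property names: keep old WALs / archived HFiles for 10 minutes
    // before the TTL cleaners are allowed to delete them.
    conf.setLong("hbase.master.logcleaner.ttl", 600_000L);
    conf.setLong("hbase.master.hfilecleaner.ttl", 600_000L);
    return conf;
  }
}
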
2023-07-12 19:17:08,456 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-07-12 19:17:08,461 INFO [PEWorker-1] util.FSTableDescriptors(135): Updated hbase:meta table descriptor to hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-07-12 19:17:08,462 INFO [PEWorker-1] regionserver.HRegion(7675): creating {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0 2023-07-12 19:17:08,520 INFO [RS:2;jenkins-hbase20:36571] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase20.apache.org,33033,1689189424308 with isa=jenkins-hbase20.apache.org/148.251.75.209:36571, startcode=1689189426727 2023-07-12 19:17:08,524 INFO [RS:1;jenkins-hbase20:43021] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase20.apache.org,33033,1689189424308 with isa=jenkins-hbase20.apache.org/148.251.75.209:43021, startcode=1689189426641 2023-07-12 19:17:08,528 DEBUG [PEWorker-1] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 19:17:08,528 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=33033] master.ServerManager(394): Registering regionserver=jenkins-hbase20.apache.org,36571,1689189426727 2023-07-12 19:17:08,530 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase20.apache.org,33033,1689189424308] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
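
PEWorker-1 writes the hbase:meta table descriptor (.tableinfo.0000000001) and starts creating region 1588230740 with the info, rep_barrier and table families listed above. The same descriptor can be read back through the public client API; a minimal sketch (the Admin instance would come from a Connection, as in the earlier sketch):

import java.io.IOException;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptor;
import org.apache.hadoop.hbase.client.TableDescriptor;

class ShowMetaDescriptor {
  static void print(Admin admin) throws IOException {
    TableDescriptor meta = admin.getDescriptor(TableName.META_TABLE_NAME);
    for (ColumnFamilyDescriptor cf : meta.getColumnFamilies()) {
      System.out.println(cf.getNameAsString()
          + " versions=" + cf.getMaxVersions()
          + " blocksize=" + cf.getBlocksize());
    }
  }
}
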
2023-07-12 19:17:08,531 INFO [RS:0;jenkins-hbase20:39963] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase20.apache.org,33033,1689189424308 with isa=jenkins-hbase20.apache.org/148.251.75.209:39963, startcode=1689189426501 2023-07-12 19:17:08,532 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase20.apache.org,33033,1689189424308] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 1 2023-07-12 19:17:08,533 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-12 19:17:08,536 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/data/hbase/meta/1588230740/info 2023-07-12 19:17:08,537 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=33033] master.ServerManager(394): Registering regionserver=jenkins-hbase20.apache.org,43021,1689189426641 2023-07-12 19:17:08,538 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase20.apache.org,33033,1689189424308] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-12 19:17:08,538 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase20.apache.org,33033,1689189424308] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 2 2023-07-12 19:17:08,538 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-12 19:17:08,539 DEBUG [RS:2;jenkins-hbase20:36571] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0 2023-07-12 19:17:08,539 DEBUG [RS:2;jenkins-hbase20:36571] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost.localdomain:43233 2023-07-12 19:17:08,539 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=33033] master.ServerManager(394): Registering regionserver=jenkins-hbase20.apache.org,39963,1689189426501 2023-07-12 19:17:08,540 DEBUG [RS:1;jenkins-hbase20:43021] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0 2023-07-12 19:17:08,540 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase20.apache.org,33033,1689189424308] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
2023-07-12 19:17:08,539 DEBUG [RS:2;jenkins-hbase20:36571] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=42575 2023-07-12 19:17:08,541 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase20.apache.org,33033,1689189424308] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 3 2023-07-12 19:17:08,540 DEBUG [RS:1;jenkins-hbase20:43021] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost.localdomain:43233 2023-07-12 19:17:08,542 DEBUG [RS:1;jenkins-hbase20:43021] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=42575 2023-07-12 19:17:08,544 DEBUG [RS:0;jenkins-hbase20:39963] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0 2023-07-12 19:17:08,545 DEBUG [RS:0;jenkins-hbase20:39963] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost.localdomain:43233 2023-07-12 19:17:08,545 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 19:17:08,545 DEBUG [RS:0;jenkins-hbase20:39963] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=42575 2023-07-12 19:17:08,545 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-12 19:17:08,549 DEBUG [Listener at localhost.localdomain/34239-EventThread] zookeeper.ZKWatcher(600): master:33033-0x100829d951f0000, quorum=127.0.0.1:52922, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-12 19:17:08,551 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/data/hbase/meta/1588230740/rep_barrier 2023-07-12 19:17:08,552 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-12 19:17:08,554 DEBUG [RS:0;jenkins-hbase20:39963] zookeeper.ZKUtil(162): regionserver:39963-0x100829d951f0001, quorum=127.0.0.1:52922, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,39963,1689189426501 2023-07-12 19:17:08,554 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, 
compression=NONE 2023-07-12 19:17:08,554 WARN [RS:0;jenkins-hbase20:39963] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-12 19:17:08,555 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-12 19:17:08,555 INFO [RS:0;jenkins-hbase20:39963] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-12 19:17:08,555 DEBUG [RS:1;jenkins-hbase20:43021] zookeeper.ZKUtil(162): regionserver:43021-0x100829d951f0002, quorum=127.0.0.1:52922, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,43021,1689189426641 2023-07-12 19:17:08,555 DEBUG [RS:0;jenkins-hbase20:39963] regionserver.HRegionServer(1948): logDir=hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/WALs/jenkins-hbase20.apache.org,39963,1689189426501 2023-07-12 19:17:08,555 DEBUG [RS:2;jenkins-hbase20:36571] zookeeper.ZKUtil(162): regionserver:36571-0x100829d951f0003, quorum=127.0.0.1:52922, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,36571,1689189426727 2023-07-12 19:17:08,556 WARN [RS:2;jenkins-hbase20:36571] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-12 19:17:08,555 WARN [RS:1;jenkins-hbase20:43021] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
2023-07-12 19:17:08,556 INFO [RS:2;jenkins-hbase20:36571] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-12 19:17:08,557 DEBUG [RS:2;jenkins-hbase20:36571] regionserver.HRegionServer(1948): logDir=hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/WALs/jenkins-hbase20.apache.org,36571,1689189426727 2023-07-12 19:17:08,558 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase20.apache.org,43021,1689189426641] 2023-07-12 19:17:08,558 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase20.apache.org,39963,1689189426501] 2023-07-12 19:17:08,557 INFO [RS:1;jenkins-hbase20:43021] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-12 19:17:08,558 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase20.apache.org,36571,1689189426727] 2023-07-12 19:17:08,558 DEBUG [RS:1;jenkins-hbase20:43021] regionserver.HRegionServer(1948): logDir=hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/WALs/jenkins-hbase20.apache.org,43021,1689189426641 2023-07-12 19:17:08,565 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/data/hbase/meta/1588230740/table 2023-07-12 19:17:08,566 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-12 19:17:08,572 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 19:17:08,574 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/data/hbase/meta/1588230740 2023-07-12 19:17:08,576 DEBUG [RS:0;jenkins-hbase20:39963] zookeeper.ZKUtil(162): regionserver:39963-0x100829d951f0001, quorum=127.0.0.1:52922, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,39963,1689189426501 2023-07-12 19:17:08,586 DEBUG [RS:1;jenkins-hbase20:43021] zookeeper.ZKUtil(162): regionserver:43021-0x100829d951f0002, quorum=127.0.0.1:52922, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,39963,1689189426501 2023-07-12 19:17:08,587 DEBUG [RS:0;jenkins-hbase20:39963] zookeeper.ZKUtil(162): regionserver:39963-0x100829d951f0001, quorum=127.0.0.1:52922, baseZNode=/hbase Set watcher on existing 
znode=/hbase/rs/jenkins-hbase20.apache.org,36571,1689189426727 2023-07-12 19:17:08,576 DEBUG [RS:2;jenkins-hbase20:36571] zookeeper.ZKUtil(162): regionserver:36571-0x100829d951f0003, quorum=127.0.0.1:52922, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,39963,1689189426501 2023-07-12 19:17:08,587 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/data/hbase/meta/1588230740 2023-07-12 19:17:08,588 DEBUG [RS:0;jenkins-hbase20:39963] zookeeper.ZKUtil(162): regionserver:39963-0x100829d951f0001, quorum=127.0.0.1:52922, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,43021,1689189426641 2023-07-12 19:17:08,588 DEBUG [RS:1;jenkins-hbase20:43021] zookeeper.ZKUtil(162): regionserver:43021-0x100829d951f0002, quorum=127.0.0.1:52922, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,36571,1689189426727 2023-07-12 19:17:08,588 DEBUG [RS:1;jenkins-hbase20:43021] zookeeper.ZKUtil(162): regionserver:43021-0x100829d951f0002, quorum=127.0.0.1:52922, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,43021,1689189426641 2023-07-12 19:17:08,588 DEBUG [RS:2;jenkins-hbase20:36571] zookeeper.ZKUtil(162): regionserver:36571-0x100829d951f0003, quorum=127.0.0.1:52922, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,36571,1689189426727 2023-07-12 19:17:08,590 DEBUG [RS:2;jenkins-hbase20:36571] zookeeper.ZKUtil(162): regionserver:36571-0x100829d951f0003, quorum=127.0.0.1:52922, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,43021,1689189426641 2023-07-12 19:17:08,593 DEBUG [PEWorker-1] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 
2023-07-12 19:17:08,596 DEBUG [PEWorker-1] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-12 19:17:08,603 DEBUG [RS:1;jenkins-hbase20:43021] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-12 19:17:08,604 DEBUG [PEWorker-1] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/data/hbase/meta/1588230740/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-12 19:17:08,604 DEBUG [RS:0;jenkins-hbase20:39963] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-12 19:17:08,603 DEBUG [RS:2;jenkins-hbase20:36571] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-12 19:17:08,607 INFO [PEWorker-1] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9722538720, jitterRate=-0.0945180207490921}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-12 19:17:08,607 DEBUG [PEWorker-1] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-12 19:17:08,607 DEBUG [PEWorker-1] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-12 19:17:08,607 INFO [PEWorker-1] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-12 19:17:08,608 DEBUG [PEWorker-1] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-12 19:17:08,608 DEBUG [PEWorker-1] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-12 19:17:08,608 DEBUG [PEWorker-1] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-12 19:17:08,615 INFO [PEWorker-1] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-12 19:17:08,616 DEBUG [PEWorker-1] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-12 19:17:08,623 INFO [RS:2;jenkins-hbase20:36571] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-12 19:17:08,623 INFO [RS:1;jenkins-hbase20:43021] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-12 19:17:08,623 INFO [RS:0;jenkins-hbase20:39963] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-12 19:17:08,627 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_ASSIGN_META, locked=true; InitMetaProcedure table=hbase:meta 2023-07-12 19:17:08,628 INFO [PEWorker-1] procedure.InitMetaProcedure(103): Going to assign meta 2023-07-12 19:17:08,677 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN}] 2023-07-12 19:17:08,691 INFO [RS:2;jenkins-hbase20:36571] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-12 19:17:08,691 INFO [RS:0;jenkins-hbase20:39963] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-12 19:17:08,695 INFO [RS:1;jenkins-hbase20:43021] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, 
globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-12 19:17:08,696 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN 2023-07-12 19:17:08,701 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN; state=OFFLINE, location=null; forceNewPlan=false, retain=false 2023-07-12 19:17:08,709 INFO [RS:0;jenkins-hbase20:39963] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-12 19:17:08,709 INFO [RS:1;jenkins-hbase20:43021] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-12 19:17:08,711 INFO [RS:0;jenkins-hbase20:39963] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-12 19:17:08,709 INFO [RS:2;jenkins-hbase20:36571] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-12 19:17:08,711 INFO [RS:1;jenkins-hbase20:43021] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-12 19:17:08,712 INFO [RS:2;jenkins-hbase20:36571] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-12 19:17:08,715 INFO [RS:0;jenkins-hbase20:39963] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-12 19:17:08,716 INFO [RS:1;jenkins-hbase20:43021] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-12 19:17:08,716 INFO [RS:2;jenkins-hbase20:36571] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-12 19:17:08,726 INFO [RS:2;jenkins-hbase20:36571] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-12 19:17:08,726 INFO [RS:0;jenkins-hbase20:39963] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-12 19:17:08,726 INFO [RS:1;jenkins-hbase20:43021] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
2023-07-12 19:17:08,726 DEBUG [RS:2;jenkins-hbase20:36571] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-12 19:17:08,726 DEBUG [RS:0;jenkins-hbase20:39963] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-12 19:17:08,727 DEBUG [RS:2;jenkins-hbase20:36571] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-12 19:17:08,727 DEBUG [RS:0;jenkins-hbase20:39963] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-12 19:17:08,727 DEBUG [RS:2;jenkins-hbase20:36571] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-12 19:17:08,727 DEBUG [RS:0;jenkins-hbase20:39963] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-12 19:17:08,728 DEBUG [RS:2;jenkins-hbase20:36571] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-12 19:17:08,728 DEBUG [RS:0;jenkins-hbase20:39963] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-12 19:17:08,728 DEBUG [RS:1;jenkins-hbase20:43021] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-12 19:17:08,728 DEBUG [RS:0;jenkins-hbase20:39963] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-12 19:17:08,728 DEBUG [RS:1;jenkins-hbase20:43021] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-12 19:17:08,728 DEBUG [RS:2;jenkins-hbase20:36571] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-12 19:17:08,728 DEBUG [RS:1;jenkins-hbase20:43021] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-12 19:17:08,728 DEBUG [RS:2;jenkins-hbase20:36571] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase20:0, corePoolSize=2, maxPoolSize=2 2023-07-12 19:17:08,728 DEBUG [RS:1;jenkins-hbase20:43021] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-12 19:17:08,728 DEBUG [RS:2;jenkins-hbase20:36571] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-12 19:17:08,728 DEBUG [RS:1;jenkins-hbase20:43021] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-12 19:17:08,728 DEBUG [RS:0;jenkins-hbase20:39963] executor.ExecutorService(93): Starting executor service 
name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase20:0, corePoolSize=2, maxPoolSize=2 2023-07-12 19:17:08,728 DEBUG [RS:2;jenkins-hbase20:36571] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-12 19:17:08,728 DEBUG [RS:0;jenkins-hbase20:39963] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-12 19:17:08,729 DEBUG [RS:2;jenkins-hbase20:36571] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-12 19:17:08,729 DEBUG [RS:0;jenkins-hbase20:39963] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-12 19:17:08,729 DEBUG [RS:2;jenkins-hbase20:36571] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-12 19:17:08,729 DEBUG [RS:0;jenkins-hbase20:39963] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-12 19:17:08,729 DEBUG [RS:0;jenkins-hbase20:39963] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-12 19:17:08,731 DEBUG [RS:1;jenkins-hbase20:43021] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase20:0, corePoolSize=2, maxPoolSize=2 2023-07-12 19:17:08,731 DEBUG [RS:1;jenkins-hbase20:43021] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-12 19:17:08,731 DEBUG [RS:1;jenkins-hbase20:43021] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-12 19:17:08,731 DEBUG [RS:1;jenkins-hbase20:43021] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-12 19:17:08,731 DEBUG [RS:1;jenkins-hbase20:43021] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-12 19:17:08,738 INFO [RS:1;jenkins-hbase20:43021] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-12 19:17:08,739 INFO [RS:0;jenkins-hbase20:39963] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-12 19:17:08,739 INFO [RS:1;jenkins-hbase20:43021] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-12 19:17:08,739 INFO [RS:2;jenkins-hbase20:36571] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-12 19:17:08,739 INFO [RS:2;jenkins-hbase20:36571] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 
2023-07-12 19:17:08,739 INFO [RS:2;jenkins-hbase20:36571] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-12 19:17:08,739 INFO [RS:0;jenkins-hbase20:39963] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-12 19:17:08,739 INFO [RS:1;jenkins-hbase20:43021] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-12 19:17:08,739 INFO [RS:0;jenkins-hbase20:39963] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-12 19:17:08,761 INFO [RS:0;jenkins-hbase20:39963] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-12 19:17:08,762 INFO [RS:1;jenkins-hbase20:43021] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-12 19:17:08,762 INFO [RS:2;jenkins-hbase20:36571] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-12 19:17:08,765 INFO [RS:2;jenkins-hbase20:36571] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,36571,1689189426727-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-12 19:17:08,765 INFO [RS:1;jenkins-hbase20:43021] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,43021,1689189426641-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-12 19:17:08,765 INFO [RS:0;jenkins-hbase20:39963] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,39963,1689189426501-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-12 19:17:08,806 INFO [RS:0;jenkins-hbase20:39963] regionserver.Replication(203): jenkins-hbase20.apache.org,39963,1689189426501 started 2023-07-12 19:17:08,807 INFO [RS:0;jenkins-hbase20:39963] regionserver.HRegionServer(1637): Serving as jenkins-hbase20.apache.org,39963,1689189426501, RpcServer on jenkins-hbase20.apache.org/148.251.75.209:39963, sessionid=0x100829d951f0001 2023-07-12 19:17:08,807 DEBUG [RS:0;jenkins-hbase20:39963] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-12 19:17:08,807 DEBUG [RS:0;jenkins-hbase20:39963] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase20.apache.org,39963,1689189426501 2023-07-12 19:17:08,807 DEBUG [RS:0;jenkins-hbase20:39963] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase20.apache.org,39963,1689189426501' 2023-07-12 19:17:08,807 DEBUG [RS:0;jenkins-hbase20:39963] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-12 19:17:08,808 DEBUG [RS:0;jenkins-hbase20:39963] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-12 19:17:08,809 DEBUG [RS:0;jenkins-hbase20:39963] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-12 19:17:08,809 DEBUG [RS:0;jenkins-hbase20:39963] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-12 19:17:08,809 DEBUG [RS:0;jenkins-hbase20:39963] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase20.apache.org,39963,1689189426501 2023-07-12 19:17:08,809 DEBUG [RS:0;jenkins-hbase20:39963] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 
'jenkins-hbase20.apache.org,39963,1689189426501' 2023-07-12 19:17:08,809 DEBUG [RS:0;jenkins-hbase20:39963] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-12 19:17:08,809 INFO [RS:2;jenkins-hbase20:36571] regionserver.Replication(203): jenkins-hbase20.apache.org,36571,1689189426727 started 2023-07-12 19:17:08,809 INFO [RS:2;jenkins-hbase20:36571] regionserver.HRegionServer(1637): Serving as jenkins-hbase20.apache.org,36571,1689189426727, RpcServer on jenkins-hbase20.apache.org/148.251.75.209:36571, sessionid=0x100829d951f0003 2023-07-12 19:17:08,809 DEBUG [RS:0;jenkins-hbase20:39963] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-12 19:17:08,809 DEBUG [RS:2;jenkins-hbase20:36571] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-12 19:17:08,810 DEBUG [RS:2;jenkins-hbase20:36571] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase20.apache.org,36571,1689189426727 2023-07-12 19:17:08,815 INFO [RS:1;jenkins-hbase20:43021] regionserver.Replication(203): jenkins-hbase20.apache.org,43021,1689189426641 started 2023-07-12 19:17:08,827 DEBUG [RS:0;jenkins-hbase20:39963] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-12 19:17:08,833 INFO [RS:0;jenkins-hbase20:39963] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-12 19:17:08,833 INFO [RS:1;jenkins-hbase20:43021] regionserver.HRegionServer(1637): Serving as jenkins-hbase20.apache.org,43021,1689189426641, RpcServer on jenkins-hbase20.apache.org/148.251.75.209:43021, sessionid=0x100829d951f0002 2023-07-12 19:17:08,815 DEBUG [RS:2;jenkins-hbase20:36571] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase20.apache.org,36571,1689189426727' 2023-07-12 19:17:08,833 INFO [RS:0;jenkins-hbase20:39963] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
2023-07-12 19:17:08,833 DEBUG [RS:2;jenkins-hbase20:36571] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-12 19:17:08,833 DEBUG [RS:1;jenkins-hbase20:43021] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-12 19:17:08,833 DEBUG [RS:1;jenkins-hbase20:43021] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase20.apache.org,43021,1689189426641 2023-07-12 19:17:08,833 DEBUG [RS:1;jenkins-hbase20:43021] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase20.apache.org,43021,1689189426641' 2023-07-12 19:17:08,833 DEBUG [RS:1;jenkins-hbase20:43021] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-12 19:17:08,834 DEBUG [RS:1;jenkins-hbase20:43021] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-12 19:17:08,835 DEBUG [RS:2;jenkins-hbase20:36571] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-12 19:17:08,835 DEBUG [RS:2;jenkins-hbase20:36571] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-12 19:17:08,835 DEBUG [RS:2;jenkins-hbase20:36571] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-12 19:17:08,835 DEBUG [RS:1;jenkins-hbase20:43021] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-12 19:17:08,835 DEBUG [RS:1;jenkins-hbase20:43021] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-12 19:17:08,835 DEBUG [RS:2;jenkins-hbase20:36571] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase20.apache.org,36571,1689189426727 2023-07-12 19:17:08,837 DEBUG [RS:2;jenkins-hbase20:36571] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase20.apache.org,36571,1689189426727' 2023-07-12 19:17:08,836 DEBUG [RS:1;jenkins-hbase20:43021] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase20.apache.org,43021,1689189426641 2023-07-12 19:17:08,838 DEBUG [RS:2;jenkins-hbase20:36571] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-12 19:17:08,842 DEBUG [RS:1;jenkins-hbase20:43021] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase20.apache.org,43021,1689189426641' 2023-07-12 19:17:08,842 DEBUG [RS:1;jenkins-hbase20:43021] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-12 19:17:08,843 DEBUG [RS:2;jenkins-hbase20:36571] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-12 19:17:08,843 DEBUG [RS:1;jenkins-hbase20:43021] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-12 19:17:08,844 DEBUG [RS:1;jenkins-hbase20:43021] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-12 19:17:08,844 INFO [RS:1;jenkins-hbase20:43021] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-12 19:17:08,845 DEBUG [RS:2;jenkins-hbase20:36571] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-12 
19:17:08,845 INFO [RS:2;jenkins-hbase20:36571] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-12 19:17:08,845 INFO [RS:1;jenkins-hbase20:43021] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 2023-07-12 19:17:08,845 INFO [RS:2;jenkins-hbase20:36571] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 2023-07-12 19:17:08,854 DEBUG [jenkins-hbase20:33033] assignment.AssignmentManager(2176): Processing assignQueue; systemServersCount=3, allServersCount=3 2023-07-12 19:17:08,872 DEBUG [jenkins-hbase20:33033] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase20.apache.org=0} racks are {/default-rack=0} 2023-07-12 19:17:08,874 DEBUG [jenkins-hbase20:33033] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-12 19:17:08,874 DEBUG [jenkins-hbase20:33033] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-12 19:17:08,874 DEBUG [jenkins-hbase20:33033] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-12 19:17:08,874 DEBUG [jenkins-hbase20:33033] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-12 19:17:08,880 INFO [PEWorker-3] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase20.apache.org,43021,1689189426641, state=OPENING 2023-07-12 19:17:08,890 DEBUG [PEWorker-3] zookeeper.MetaTableLocator(240): hbase:meta region location doesn't exist, create it 2023-07-12 19:17:08,891 DEBUG [Listener at localhost.localdomain/34239-EventThread] zookeeper.ZKWatcher(600): master:33033-0x100829d951f0000, quorum=127.0.0.1:52922, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-12 19:17:08,891 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-12 19:17:08,896 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=3, ppid=2, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase20.apache.org,43021,1689189426641}] 2023-07-12 19:17:08,948 INFO [RS:0;jenkins-hbase20:39963] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase20.apache.org%2C39963%2C1689189426501, suffix=, logDir=hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/WALs/jenkins-hbase20.apache.org,39963,1689189426501, archiveDir=hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/oldWALs, maxLogs=32 2023-07-12 19:17:08,949 INFO [RS:2;jenkins-hbase20:36571] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase20.apache.org%2C36571%2C1689189426727, suffix=, logDir=hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/WALs/jenkins-hbase20.apache.org,36571,1689189426727, archiveDir=hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/oldWALs, maxLogs=32 2023-07-12 19:17:08,952 INFO [RS:1;jenkins-hbase20:43021] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase20.apache.org%2C43021%2C1689189426641, suffix=, logDir=hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/WALs/jenkins-hbase20.apache.org,43021,1689189426641, 
archiveDir=hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/oldWALs, maxLogs=32 2023-07-12 19:17:08,996 DEBUG [RS-EventLoopGroup-5-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:36015,DS-c010aef2-0c14-458c-b1aa-5ac124bdef5d,DISK] 2023-07-12 19:17:08,999 DEBUG [RS-EventLoopGroup-5-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:35389,DS-47e5ebb6-1f77-4af6-bdfc-1e0f975f2d77,DISK] 2023-07-12 19:17:09,000 DEBUG [RS-EventLoopGroup-5-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:35389,DS-47e5ebb6-1f77-4af6-bdfc-1e0f975f2d77,DISK] 2023-07-12 19:17:09,001 DEBUG [RS-EventLoopGroup-5-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:42393,DS-486a4b59-ff70-4f76-965f-28d3762f2281,DISK] 2023-07-12 19:17:09,011 DEBUG [RS-EventLoopGroup-5-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:42393,DS-486a4b59-ff70-4f76-965f-28d3762f2281,DISK] 2023-07-12 19:17:09,012 DEBUG [RS-EventLoopGroup-5-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:36015,DS-c010aef2-0c14-458c-b1aa-5ac124bdef5d,DISK] 2023-07-12 19:17:09,013 DEBUG [RS-EventLoopGroup-5-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:42393,DS-486a4b59-ff70-4f76-965f-28d3762f2281,DISK] 2023-07-12 19:17:09,014 DEBUG [RS-EventLoopGroup-5-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:36015,DS-c010aef2-0c14-458c-b1aa-5ac124bdef5d,DISK] 2023-07-12 19:17:09,015 DEBUG [RS-EventLoopGroup-5-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:35389,DS-47e5ebb6-1f77-4af6-bdfc-1e0f975f2d77,DISK] 2023-07-12 19:17:09,042 INFO [RS:2;jenkins-hbase20:36571] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/WALs/jenkins-hbase20.apache.org,36571,1689189426727/jenkins-hbase20.apache.org%2C36571%2C1689189426727.1689189428954 2023-07-12 19:17:09,042 INFO [RS:0;jenkins-hbase20:39963] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/WALs/jenkins-hbase20.apache.org,39963,1689189426501/jenkins-hbase20.apache.org%2C39963%2C1689189426501.1689189428954 2023-07-12 19:17:09,046 DEBUG [RS:2;jenkins-hbase20:36571] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:36015,DS-c010aef2-0c14-458c-b1aa-5ac124bdef5d,DISK], 
DatanodeInfoWithStorage[127.0.0.1:42393,DS-486a4b59-ff70-4f76-965f-28d3762f2281,DISK], DatanodeInfoWithStorage[127.0.0.1:35389,DS-47e5ebb6-1f77-4af6-bdfc-1e0f975f2d77,DISK]] 2023-07-12 19:17:09,046 INFO [RS:1;jenkins-hbase20:43021] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/WALs/jenkins-hbase20.apache.org,43021,1689189426641/jenkins-hbase20.apache.org%2C43021%2C1689189426641.1689189428954 2023-07-12 19:17:09,050 DEBUG [RS:0;jenkins-hbase20:39963] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:35389,DS-47e5ebb6-1f77-4af6-bdfc-1e0f975f2d77,DISK], DatanodeInfoWithStorage[127.0.0.1:36015,DS-c010aef2-0c14-458c-b1aa-5ac124bdef5d,DISK], DatanodeInfoWithStorage[127.0.0.1:42393,DS-486a4b59-ff70-4f76-965f-28d3762f2281,DISK]] 2023-07-12 19:17:09,054 DEBUG [RS:1;jenkins-hbase20:43021] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:36015,DS-c010aef2-0c14-458c-b1aa-5ac124bdef5d,DISK], DatanodeInfoWithStorage[127.0.0.1:35389,DS-47e5ebb6-1f77-4af6-bdfc-1e0f975f2d77,DISK], DatanodeInfoWithStorage[127.0.0.1:42393,DS-486a4b59-ff70-4f76-965f-28d3762f2281,DISK]] 2023-07-12 19:17:09,094 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase20.apache.org,43021,1689189426641 2023-07-12 19:17:09,098 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-12 19:17:09,103 INFO [RS-EventLoopGroup-4-2] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:38614, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-12 19:17:09,119 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740 2023-07-12 19:17:09,123 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-12 19:17:09,127 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase20.apache.org%2C43021%2C1689189426641.meta, suffix=.meta, logDir=hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/WALs/jenkins-hbase20.apache.org,43021,1689189426641, archiveDir=hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/oldWALs, maxLogs=32 2023-07-12 19:17:09,149 DEBUG [RS-EventLoopGroup-5-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:36015,DS-c010aef2-0c14-458c-b1aa-5ac124bdef5d,DISK] 2023-07-12 19:17:09,150 DEBUG [RS-EventLoopGroup-5-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:35389,DS-47e5ebb6-1f77-4af6-bdfc-1e0f975f2d77,DISK] 2023-07-12 19:17:09,158 DEBUG [RS-EventLoopGroup-5-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:42393,DS-486a4b59-ff70-4f76-965f-28d3762f2281,DISK] 2023-07-12 19:17:09,175 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] 
wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/WALs/jenkins-hbase20.apache.org,43021,1689189426641/jenkins-hbase20.apache.org%2C43021%2C1689189426641.meta.1689189429128.meta 2023-07-12 19:17:09,178 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:36015,DS-c010aef2-0c14-458c-b1aa-5ac124bdef5d,DISK], DatanodeInfoWithStorage[127.0.0.1:35389,DS-47e5ebb6-1f77-4af6-bdfc-1e0f975f2d77,DISK], DatanodeInfoWithStorage[127.0.0.1:42393,DS-486a4b59-ff70-4f76-965f-28d3762f2281,DISK]] 2023-07-12 19:17:09,178 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-07-12 19:17:09,181 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-12 19:17:09,187 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2023-07-12 19:17:09,190 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 2023-07-12 19:17:09,197 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-07-12 19:17:09,197 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 19:17:09,197 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7894): checking encryption for 1588230740 2023-07-12 19:17:09,197 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7897): checking classloading for 1588230740 2023-07-12 19:17:09,203 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-12 19:17:09,205 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/data/hbase/meta/1588230740/info 2023-07-12 19:17:09,205 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/data/hbase/meta/1588230740/info 2023-07-12 19:17:09,206 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window 
org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-12 19:17:09,207 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 19:17:09,207 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-12 19:17:09,210 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/data/hbase/meta/1588230740/rep_barrier 2023-07-12 19:17:09,210 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/data/hbase/meta/1588230740/rep_barrier 2023-07-12 19:17:09,211 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-12 19:17:09,213 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 19:17:09,213 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-12 19:17:09,215 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/data/hbase/meta/1588230740/table 2023-07-12 19:17:09,215 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/data/hbase/meta/1588230740/table 2023-07-12 19:17:09,216 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 
9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-12 19:17:09,217 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 19:17:09,221 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/data/hbase/meta/1588230740 2023-07-12 19:17:09,225 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/data/hbase/meta/1588230740 2023-07-12 19:17:09,229 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 2023-07-12 19:17:09,232 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-12 19:17:09,239 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=12038015200, jitterRate=0.12112753093242645}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-12 19:17:09,240 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-12 19:17:09,262 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:meta,,1.1588230740, pid=3, masterSystemTime=1689189429083 2023-07-12 19:17:09,298 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:meta,,1.1588230740 2023-07-12 19:17:09,300 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 2023-07-12 19:17:09,300 INFO [PEWorker-5] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase20.apache.org,43021,1689189426641, state=OPEN 2023-07-12 19:17:09,303 DEBUG [Listener at localhost.localdomain/34239-EventThread] zookeeper.ZKWatcher(600): master:33033-0x100829d951f0000, quorum=127.0.0.1:52922, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-12 19:17:09,304 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-12 19:17:09,309 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=3, resume processing ppid=2 2023-07-12 19:17:09,309 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=3, ppid=2, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase20.apache.org,43021,1689189426641 in 408 msec 
2023-07-12 19:17:09,321 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=2, resume processing ppid=1 2023-07-12 19:17:09,321 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=2, ppid=1, state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN in 634 msec 2023-07-12 19:17:09,328 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=1, state=SUCCESS; InitMetaProcedure table=hbase:meta in 1.2680 sec 2023-07-12 19:17:09,328 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(953): Wait for region servers to report in: status=null, state=RUNNING, startTime=1689189429328, completionTime=-1 2023-07-12 19:17:09,328 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.ServerManager(821): Finished waiting on RegionServer count=3; waited=0ms, expected min=3 server(s), max=3 server(s), master is running 2023-07-12 19:17:09,328 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] assignment.AssignmentManager(1517): Joining cluster... 2023-07-12 19:17:09,374 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase20.apache.org,33033,1689189424308] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-12 19:17:09,379 INFO [RS-EventLoopGroup-4-3] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:38618, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-12 19:17:09,409 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase20.apache.org,33033,1689189424308] master.HMaster(2148): Client=null/null create 'hbase:rsgroup', {TABLE_ATTRIBUTES => {coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|', METADATA => {'SPLIT_POLICY' => 'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy'}}}, {NAME => 'm', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-12 19:17:09,425 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase20.apache.org,33033,1689189424308] procedure2.ProcedureExecutor(1029): Stored pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:rsgroup 2023-07-12 19:17:09,427 DEBUG [PEWorker-3] procedure2.ProcedureExecutor(1400): LOCK_EVENT_WAIT pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:rsgroup 2023-07-12 19:17:09,429 INFO [master/jenkins-hbase20:0:becomeActiveMaster] assignment.AssignmentManager(1529): Number of RegionServers=3 2023-07-12 19:17:09,429 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$RegionInTransitionChore; timeout=60000, timestamp=1689189489429 2023-07-12 19:17:09,429 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$DeadServerMetricRegionChore; timeout=120000, timestamp=1689189549429 2023-07-12 19:17:09,429 INFO [master/jenkins-hbase20:0:becomeActiveMaster] assignment.AssignmentManager(1536): Joined the cluster in 100 msec 2023-07-12 19:17:09,464 INFO 
[master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,33033,1689189424308-ClusterStatusChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-12 19:17:09,465 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,33033,1689189424308-BalancerChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-12 19:17:09,466 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,33033,1689189424308-RegionNormalizerChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-12 19:17:09,467 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_PRE_OPERATION 2023-07-12 19:17:09,468 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=CatalogJanitor-jenkins-hbase20:33033, period=300000, unit=MILLISECONDS is enabled. 2023-07-12 19:17:09,469 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HbckChore-, period=3600000, unit=MILLISECONDS is enabled. 2023-07-12 19:17:09,475 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-12 19:17:09,497 DEBUG [master/jenkins-hbase20:0.Chore.1] janitor.CatalogJanitor(175): 2023-07-12 19:17:09,502 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/.tmp/data/hbase/rsgroup/396ab33375d72981083bc36f18ff15d4 2023-07-12 19:17:09,507 DEBUG [HFileArchiver-1] backup.HFileArchiver(153): Directory hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/.tmp/data/hbase/rsgroup/396ab33375d72981083bc36f18ff15d4 empty. 2023-07-12 19:17:09,508 DEBUG [HFileArchiver-1] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/.tmp/data/hbase/rsgroup/396ab33375d72981083bc36f18ff15d4 2023-07-12 19:17:09,509 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived hbase:rsgroup regions 2023-07-12 19:17:09,517 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.TableNamespaceManager(92): Namespace table not found. Creating... 
2023-07-12 19:17:09,517 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(2148): Client=null/null create 'hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-07-12 19:17:09,521 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=5, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:namespace 2023-07-12 19:17:09,524 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_PRE_OPERATION 2023-07-12 19:17:09,529 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-12 19:17:09,544 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/.tmp/data/hbase/namespace/80f898828c5a9814a93d19dfb7ad9318 2023-07-12 19:17:09,545 DEBUG [HFileArchiver-2] backup.HFileArchiver(153): Directory hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/.tmp/data/hbase/namespace/80f898828c5a9814a93d19dfb7ad9318 empty. 2023-07-12 19:17:09,552 DEBUG [HFileArchiver-2] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/.tmp/data/hbase/namespace/80f898828c5a9814a93d19dfb7ad9318 2023-07-12 19:17:09,552 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(328): Archived hbase:namespace regions 2023-07-12 19:17:09,602 DEBUG [PEWorker-4] util.FSTableDescriptors(570): Wrote into hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/.tmp/data/hbase/rsgroup/.tabledesc/.tableinfo.0000000001 2023-07-12 19:17:09,612 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(7675): creating {ENCODED => 396ab33375d72981083bc36f18ff15d4, NAME => 'hbase:rsgroup,,1689189429409.396ab33375d72981083bc36f18ff15d4.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:rsgroup', {TABLE_ATTRIBUTES => {coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|', METADATA => {'SPLIT_POLICY' => 'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy'}}}, {NAME => 'm', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/.tmp 2023-07-12 19:17:09,661 DEBUG [PEWorker-5] util.FSTableDescriptors(570): Wrote into hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/.tmp/data/hbase/namespace/.tabledesc/.tableinfo.0000000001 2023-07-12 19:17:09,669 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(7675): creating {ENCODED => 80f898828c5a9814a93d19dfb7ad9318, NAME => 'hbase:namespace,,1689189429517.80f898828c5a9814a93d19dfb7ad9318.', STARTKEY => '', ENDKEY => ''}, 
tableDescriptor='hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/.tmp 2023-07-12 19:17:09,681 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1689189429409.396ab33375d72981083bc36f18ff15d4.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 19:17:09,681 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1604): Closing 396ab33375d72981083bc36f18ff15d4, disabling compactions & flushes 2023-07-12 19:17:09,682 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1689189429409.396ab33375d72981083bc36f18ff15d4. 2023-07-12 19:17:09,682 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1689189429409.396ab33375d72981083bc36f18ff15d4. 2023-07-12 19:17:09,682 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1689189429409.396ab33375d72981083bc36f18ff15d4. after waiting 0 ms 2023-07-12 19:17:09,682 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1689189429409.396ab33375d72981083bc36f18ff15d4. 2023-07-12 19:17:09,682 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1689189429409.396ab33375d72981083bc36f18ff15d4. 2023-07-12 19:17:09,682 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1558): Region close journal for 396ab33375d72981083bc36f18ff15d4: 2023-07-12 19:17:09,691 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_ADD_TO_META 2023-07-12 19:17:09,713 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1689189429517.80f898828c5a9814a93d19dfb7ad9318.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 19:17:09,713 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1604): Closing 80f898828c5a9814a93d19dfb7ad9318, disabling compactions & flushes 2023-07-12 19:17:09,713 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1689189429517.80f898828c5a9814a93d19dfb7ad9318. 2023-07-12 19:17:09,713 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1689189429517.80f898828c5a9814a93d19dfb7ad9318. 2023-07-12 19:17:09,713 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1689189429517.80f898828c5a9814a93d19dfb7ad9318. after waiting 0 ms 2023-07-12 19:17:09,713 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1689189429517.80f898828c5a9814a93d19dfb7ad9318. 
2023-07-12 19:17:09,713 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1838): Closed hbase:namespace,,1689189429517.80f898828c5a9814a93d19dfb7ad9318. 2023-07-12 19:17:09,713 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1558): Region close journal for 80f898828c5a9814a93d19dfb7ad9318: 2023-07-12 19:17:09,723 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ADD_TO_META 2023-07-12 19:17:09,733 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:rsgroup,,1689189429409.396ab33375d72981083bc36f18ff15d4.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689189429703"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689189429703"}]},"ts":"1689189429703"} 2023-07-12 19:17:09,733 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:namespace,,1689189429517.80f898828c5a9814a93d19dfb7ad9318.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689189429725"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689189429725"}]},"ts":"1689189429725"} 2023-07-12 19:17:09,775 INFO [PEWorker-5] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-12 19:17:09,780 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-12 19:17:09,784 INFO [PEWorker-4] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-12 19:17:09,787 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-12 19:17:09,789 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:rsgroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689189429787"}]},"ts":"1689189429787"} 2023-07-12 19:17:09,789 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689189429780"}]},"ts":"1689189429780"} 2023-07-12 19:17:09,793 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=hbase:rsgroup, state=ENABLING in hbase:meta 2023-07-12 19:17:09,799 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase20.apache.org=0} racks are {/default-rack=0} 2023-07-12 19:17:09,800 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-12 19:17:09,800 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLING in hbase:meta 2023-07-12 19:17:09,800 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-12 19:17:09,800 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-12 19:17:09,800 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-12 19:17:09,802 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=6, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:rsgroup, region=396ab33375d72981083bc36f18ff15d4, ASSIGN}] 2023-07-12 
19:17:09,804 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase20.apache.org=0} racks are {/default-rack=0} 2023-07-12 19:17:09,805 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-12 19:17:09,805 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-12 19:17:09,805 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-12 19:17:09,805 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-12 19:17:09,805 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=7, ppid=5, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=80f898828c5a9814a93d19dfb7ad9318, ASSIGN}] 2023-07-12 19:17:09,807 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=6, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:rsgroup, region=396ab33375d72981083bc36f18ff15d4, ASSIGN 2023-07-12 19:17:09,810 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=7, ppid=5, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=80f898828c5a9814a93d19dfb7ad9318, ASSIGN 2023-07-12 19:17:09,810 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=6, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:rsgroup, region=396ab33375d72981083bc36f18ff15d4, ASSIGN; state=OFFLINE, location=jenkins-hbase20.apache.org,43021,1689189426641; forceNewPlan=false, retain=false 2023-07-12 19:17:09,812 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=7, ppid=5, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:namespace, region=80f898828c5a9814a93d19dfb7ad9318, ASSIGN; state=OFFLINE, location=jenkins-hbase20.apache.org,36571,1689189426727; forceNewPlan=false, retain=false 2023-07-12 19:17:09,813 INFO [jenkins-hbase20:33033] balancer.BaseLoadBalancer(1545): Reassigned 2 regions. 2 retained the pre-restart assignment. 
2023-07-12 19:17:09,816 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=6 updating hbase:meta row=396ab33375d72981083bc36f18ff15d4, regionState=OPENING, regionLocation=jenkins-hbase20.apache.org,43021,1689189426641 2023-07-12 19:17:09,816 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=7 updating hbase:meta row=80f898828c5a9814a93d19dfb7ad9318, regionState=OPENING, regionLocation=jenkins-hbase20.apache.org,36571,1689189426727 2023-07-12 19:17:09,816 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:rsgroup,,1689189429409.396ab33375d72981083bc36f18ff15d4.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689189429815"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689189429815"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689189429815"}]},"ts":"1689189429815"} 2023-07-12 19:17:09,816 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:namespace,,1689189429517.80f898828c5a9814a93d19dfb7ad9318.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689189429816"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689189429816"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689189429816"}]},"ts":"1689189429816"} 2023-07-12 19:17:09,819 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=8, ppid=6, state=RUNNABLE; OpenRegionProcedure 396ab33375d72981083bc36f18ff15d4, server=jenkins-hbase20.apache.org,43021,1689189426641}] 2023-07-12 19:17:09,821 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=9, ppid=7, state=RUNNABLE; OpenRegionProcedure 80f898828c5a9814a93d19dfb7ad9318, server=jenkins-hbase20.apache.org,36571,1689189426727}] 2023-07-12 19:17:09,987 DEBUG [RSProcedureDispatcher-pool-2] master.ServerManager(712): New admin connection to jenkins-hbase20.apache.org,36571,1689189426727 2023-07-12 19:17:09,987 DEBUG [RSProcedureDispatcher-pool-2] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-12 19:17:10,001 INFO [RS-EventLoopGroup-5-3] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:53490, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-12 19:17:10,011 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(130): Open hbase:rsgroup,,1689189429409.396ab33375d72981083bc36f18ff15d4. 2023-07-12 19:17:10,012 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 396ab33375d72981083bc36f18ff15d4, NAME => 'hbase:rsgroup,,1689189429409.396ab33375d72981083bc36f18ff15d4.', STARTKEY => '', ENDKEY => ''} 2023-07-12 19:17:10,012 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-12 19:17:10,012 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:rsgroup,,1689189429409.396ab33375d72981083bc36f18ff15d4. service=MultiRowMutationService 2023-07-12 19:17:10,023 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1689189429517.80f898828c5a9814a93d19dfb7ad9318. 
2023-07-12 19:17:10,023 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:rsgroup successfully. 2023-07-12 19:17:10,040 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table rsgroup 396ab33375d72981083bc36f18ff15d4 2023-07-12 19:17:10,040 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1689189429409.396ab33375d72981083bc36f18ff15d4.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 19:17:10,040 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 80f898828c5a9814a93d19dfb7ad9318, NAME => 'hbase:namespace,,1689189429517.80f898828c5a9814a93d19dfb7ad9318.', STARTKEY => '', ENDKEY => ''} 2023-07-12 19:17:10,040 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7894): checking encryption for 396ab33375d72981083bc36f18ff15d4 2023-07-12 19:17:10,040 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7897): checking classloading for 396ab33375d72981083bc36f18ff15d4 2023-07-12 19:17:10,041 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace 80f898828c5a9814a93d19dfb7ad9318 2023-07-12 19:17:10,041 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1689189429517.80f898828c5a9814a93d19dfb7ad9318.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 19:17:10,041 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7894): checking encryption for 80f898828c5a9814a93d19dfb7ad9318 2023-07-12 19:17:10,041 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7897): checking classloading for 80f898828c5a9814a93d19dfb7ad9318 2023-07-12 19:17:10,049 INFO [StoreOpener-80f898828c5a9814a93d19dfb7ad9318-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 80f898828c5a9814a93d19dfb7ad9318 2023-07-12 19:17:10,054 DEBUG [StoreOpener-80f898828c5a9814a93d19dfb7ad9318-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/data/hbase/namespace/80f898828c5a9814a93d19dfb7ad9318/info 2023-07-12 19:17:10,054 INFO [StoreOpener-396ab33375d72981083bc36f18ff15d4-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family m of region 396ab33375d72981083bc36f18ff15d4 2023-07-12 19:17:10,054 DEBUG [StoreOpener-80f898828c5a9814a93d19dfb7ad9318-1] util.CommonFSUtils(522): Set storagePolicy=HOT for 
path=hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/data/hbase/namespace/80f898828c5a9814a93d19dfb7ad9318/info 2023-07-12 19:17:10,055 INFO [StoreOpener-80f898828c5a9814a93d19dfb7ad9318-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 80f898828c5a9814a93d19dfb7ad9318 columnFamilyName info 2023-07-12 19:17:10,056 INFO [StoreOpener-80f898828c5a9814a93d19dfb7ad9318-1] regionserver.HStore(310): Store=80f898828c5a9814a93d19dfb7ad9318/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 19:17:10,059 DEBUG [StoreOpener-396ab33375d72981083bc36f18ff15d4-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/data/hbase/rsgroup/396ab33375d72981083bc36f18ff15d4/m 2023-07-12 19:17:10,060 DEBUG [StoreOpener-396ab33375d72981083bc36f18ff15d4-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/data/hbase/rsgroup/396ab33375d72981083bc36f18ff15d4/m 2023-07-12 19:17:10,061 INFO [StoreOpener-396ab33375d72981083bc36f18ff15d4-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 396ab33375d72981083bc36f18ff15d4 columnFamilyName m 2023-07-12 19:17:10,062 INFO [StoreOpener-396ab33375d72981083bc36f18ff15d4-1] regionserver.HStore(310): Store=396ab33375d72981083bc36f18ff15d4/m, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 19:17:10,062 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/data/hbase/namespace/80f898828c5a9814a93d19dfb7ad9318 2023-07-12 19:17:10,063 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/data/hbase/rsgroup/396ab33375d72981083bc36f18ff15d4 2023-07-12 19:17:10,066 DEBUG 
[RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/data/hbase/rsgroup/396ab33375d72981083bc36f18ff15d4 2023-07-12 19:17:10,066 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/data/hbase/namespace/80f898828c5a9814a93d19dfb7ad9318 2023-07-12 19:17:10,072 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1055): writing seq id for 80f898828c5a9814a93d19dfb7ad9318 2023-07-12 19:17:10,073 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1055): writing seq id for 396ab33375d72981083bc36f18ff15d4 2023-07-12 19:17:10,084 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/data/hbase/namespace/80f898828c5a9814a93d19dfb7ad9318/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-12 19:17:10,085 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1072): Opened 80f898828c5a9814a93d19dfb7ad9318; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=12057335040, jitterRate=0.12292683124542236}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-12 19:17:10,086 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(965): Region open journal for 80f898828c5a9814a93d19dfb7ad9318: 2023-07-12 19:17:10,088 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:namespace,,1689189429517.80f898828c5a9814a93d19dfb7ad9318., pid=9, masterSystemTime=1689189429987 2023-07-12 19:17:10,092 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/data/hbase/rsgroup/396ab33375d72981083bc36f18ff15d4/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-12 19:17:10,094 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1072): Opened 396ab33375d72981083bc36f18ff15d4; next sequenceid=2; org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy@1dafdbd1, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-12 19:17:10,094 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(965): Region open journal for 396ab33375d72981083bc36f18ff15d4: 2023-07-12 19:17:10,102 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:rsgroup,,1689189429409.396ab33375d72981083bc36f18ff15d4., pid=8, masterSystemTime=1689189429978 2023-07-12 19:17:10,105 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:namespace,,1689189429517.80f898828c5a9814a93d19dfb7ad9318. 2023-07-12 19:17:10,106 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1689189429517.80f898828c5a9814a93d19dfb7ad9318. 
2023-07-12 19:17:10,108 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:rsgroup,,1689189429409.396ab33375d72981083bc36f18ff15d4. 2023-07-12 19:17:10,108 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(158): Opened hbase:rsgroup,,1689189429409.396ab33375d72981083bc36f18ff15d4. 2023-07-12 19:17:10,108 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=7 updating hbase:meta row=80f898828c5a9814a93d19dfb7ad9318, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase20.apache.org,36571,1689189426727 2023-07-12 19:17:10,109 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1689189429517.80f898828c5a9814a93d19dfb7ad9318.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689189430107"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689189430107"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689189430107"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689189430107"}]},"ts":"1689189430107"} 2023-07-12 19:17:10,110 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=6 updating hbase:meta row=396ab33375d72981083bc36f18ff15d4, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase20.apache.org,43021,1689189426641 2023-07-12 19:17:10,110 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:rsgroup,,1689189429409.396ab33375d72981083bc36f18ff15d4.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689189430109"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689189430109"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689189430109"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689189430109"}]},"ts":"1689189430109"} 2023-07-12 19:17:10,127 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=9, resume processing ppid=7 2023-07-12 19:17:10,128 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=9, ppid=7, state=SUCCESS; OpenRegionProcedure 80f898828c5a9814a93d19dfb7ad9318, server=jenkins-hbase20.apache.org,36571,1689189426727 in 294 msec 2023-07-12 19:17:10,134 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=8, resume processing ppid=6 2023-07-12 19:17:10,135 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=8, ppid=6, state=SUCCESS; OpenRegionProcedure 396ab33375d72981083bc36f18ff15d4, server=jenkins-hbase20.apache.org,43021,1689189426641 in 301 msec 2023-07-12 19:17:10,138 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=7, resume processing ppid=5 2023-07-12 19:17:10,138 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=7, ppid=5, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=80f898828c5a9814a93d19dfb7ad9318, ASSIGN in 323 msec 2023-07-12 19:17:10,141 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-12 19:17:10,142 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689189430141"}]},"ts":"1689189430141"} 2023-07-12 19:17:10,143 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): 
Finished subprocedure pid=6, resume processing ppid=4 2023-07-12 19:17:10,143 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=6, ppid=4, state=SUCCESS; TransitRegionStateProcedure table=hbase:rsgroup, region=396ab33375d72981083bc36f18ff15d4, ASSIGN in 333 msec 2023-07-12 19:17:10,145 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-12 19:17:10,146 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:rsgroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689189430145"}]},"ts":"1689189430145"} 2023-07-12 19:17:10,146 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLED in hbase:meta 2023-07-12 19:17:10,149 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=hbase:rsgroup, state=ENABLED in hbase:meta 2023-07-12 19:17:10,149 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_POST_OPERATION 2023-07-12 19:17:10,154 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_POST_OPERATION 2023-07-12 19:17:10,155 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=5, state=SUCCESS; CreateTableProcedure table=hbase:namespace in 633 msec 2023-07-12 19:17:10,159 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=4, state=SUCCESS; CreateTableProcedure table=hbase:rsgroup in 743 msec 2023-07-12 19:17:10,225 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:33033-0x100829d951f0000, quorum=127.0.0.1:52922, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/namespace 2023-07-12 19:17:10,226 DEBUG [Listener at localhost.localdomain/34239-EventThread] zookeeper.ZKWatcher(600): master:33033-0x100829d951f0000, quorum=127.0.0.1:52922, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/namespace 2023-07-12 19:17:10,226 DEBUG [Listener at localhost.localdomain/34239-EventThread] zookeeper.ZKWatcher(600): master:33033-0x100829d951f0000, quorum=127.0.0.1:52922, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-12 19:17:10,257 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-12 19:17:10,272 INFO [RS-EventLoopGroup-5-1] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:53494, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-12 19:17:10,295 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase20.apache.org,33033,1689189424308] rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(840): RSGroup table=hbase:rsgroup is online, refreshing cached information 2023-07-12 19:17:10,295 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase20.apache.org,33033,1689189424308] rsgroup.RSGroupInfoManagerImpl(534): Refreshing in Online mode. 
2023-07-12 19:17:10,298 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=10, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=default 2023-07-12 19:17:10,361 DEBUG [Listener at localhost.localdomain/34239-EventThread] zookeeper.ZKWatcher(600): master:33033-0x100829d951f0000, quorum=127.0.0.1:52922, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-12 19:17:10,369 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=10, state=SUCCESS; CreateNamespaceProcedure, namespace=default in 88 msec 2023-07-12 19:17:10,383 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=11, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=hbase 2023-07-12 19:17:10,400 DEBUG [Listener at localhost.localdomain/34239-EventThread] zookeeper.ZKWatcher(600): master:33033-0x100829d951f0000, quorum=127.0.0.1:52922, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-12 19:17:10,409 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=11, state=SUCCESS; CreateNamespaceProcedure, namespace=hbase in 24 msec 2023-07-12 19:17:10,420 DEBUG [Listener at localhost.localdomain/34239-EventThread] zookeeper.ZKWatcher(600): master:33033-0x100829d951f0000, quorum=127.0.0.1:52922, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/default 2023-07-12 19:17:10,424 DEBUG [Listener at localhost.localdomain/34239-EventThread] zookeeper.ZKWatcher(600): master:33033-0x100829d951f0000, quorum=127.0.0.1:52922, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/hbase 2023-07-12 19:17:10,424 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(1083): Master has completed initialization 3.563sec 2023-07-12 19:17:10,429 INFO [master/jenkins-hbase20:0:becomeActiveMaster] quotas.MasterQuotaManager(97): Quota support disabled 2023-07-12 19:17:10,430 INFO [master/jenkins-hbase20:0:becomeActiveMaster] slowlog.SlowLogMasterService(57): Slow/Large requests logging to system table hbase:slowlog is disabled. Quitting. 2023-07-12 19:17:10,431 INFO [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKWatcher(269): not a secure deployment, proceeding 2023-07-12 19:17:10,433 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,33033,1689189424308-ExpiredMobFileCleanerChore, period=86400, unit=SECONDS is enabled. 2023-07-12 19:17:10,434 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,33033,1689189424308-MobCompactionChore, period=604800, unit=SECONDS is enabled. 
2023-07-12 19:17:10,444 DEBUG [Listener at localhost.localdomain/34239-EventThread] zookeeper.ZKWatcher(600): master:33033-0x100829d951f0000, quorum=127.0.0.1:52922, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-12 19:17:10,445 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase20.apache.org,33033,1689189424308] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 19:17:10,448 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase20.apache.org,33033,1689189424308] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-12 19:17:10,449 DEBUG [Listener at localhost.localdomain/34239] zookeeper.ReadOnlyZKClient(139): Connect 0x4683d6ee to 127.0.0.1:52922 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-12 19:17:10,455 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(1175): Balancer post startup initialization complete, took 0 seconds 2023-07-12 19:17:10,456 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase20.apache.org,33033,1689189424308] rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(823): GroupBasedLoadBalancer is now online 2023-07-12 19:17:10,470 DEBUG [Listener at localhost.localdomain/34239] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@25ac25f2, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-12 19:17:10,498 DEBUG [hconnection-0x7e8a142d-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-12 19:17:10,514 INFO [RS-EventLoopGroup-4-1] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:38634, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-12 19:17:10,525 INFO [Listener at localhost.localdomain/34239] hbase.HBaseTestingUtility(1145): Minicluster is up; activeMaster=jenkins-hbase20.apache.org,33033,1689189424308 2023-07-12 19:17:10,527 INFO [Listener at localhost.localdomain/34239] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 19:17:10,539 DEBUG [Listener at localhost.localdomain/34239] ipc.RpcConnection(124): Using SIMPLE authentication for service=MasterService, sasl=false 2023-07-12 19:17:10,557 INFO [RS-EventLoopGroup-1-2] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:37696, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=MasterService 2023-07-12 19:17:10,575 DEBUG [Listener at localhost.localdomain/34239-EventThread] zookeeper.ZKWatcher(600): master:33033-0x100829d951f0000, quorum=127.0.0.1:52922, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/balancer 2023-07-12 19:17:10,576 DEBUG [Listener at localhost.localdomain/34239-EventThread] zookeeper.ZKWatcher(600): master:33033-0x100829d951f0000, quorum=127.0.0.1:52922, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-12 19:17:10,577 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.MasterRpcServices(492): Client=jenkins//148.251.75.209 set balanceSwitch=false 2023-07-12 19:17:10,584 DEBUG [Listener at localhost.localdomain/34239] 
zookeeper.ReadOnlyZKClient(139): Connect 0x634bedbe to 127.0.0.1:52922 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-12 19:17:10,598 DEBUG [Listener at localhost.localdomain/34239] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@4df1942c, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-12 19:17:10,599 INFO [Listener at localhost.localdomain/34239] zookeeper.RecoverableZooKeeper(93): Process identifier=VerifyingRSGroupAdminClient connecting to ZooKeeper ensemble=127.0.0.1:52922 2023-07-12 19:17:10,621 DEBUG [Listener at localhost.localdomain/34239-EventThread] zookeeper.ZKWatcher(600): VerifyingRSGroupAdminClient0x0, quorum=127.0.0.1:52922, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-12 19:17:10,630 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): VerifyingRSGroupAdminClient-0x100829d951f000a connected 2023-07-12 19:17:10,667 INFO [Listener at localhost.localdomain/34239] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testTableMoveTruncateAndDrop Thread=424, OpenFileDescriptor=681, MaxFileDescriptor=60000, SystemLoadAverage=463, ProcessCount=171, AvailableMemoryMB=4251 2023-07-12 19:17:10,670 INFO [Listener at localhost.localdomain/34239] rsgroup.TestRSGroupsBase(132): testTableMoveTruncateAndDrop 2023-07-12 19:17:10,704 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-12 19:17:10,706 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 19:17:10,758 INFO [Listener at localhost.localdomain/34239] rsgroup.TestRSGroupsBase(152): Restoring servers: 1 2023-07-12 19:17:10,769 INFO [Listener at localhost.localdomain/34239] client.ConnectionUtils(127): regionserver/jenkins-hbase20:0 server-side Connection retries=45 2023-07-12 19:17:10,769 INFO [Listener at localhost.localdomain/34239] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-12 19:17:10,769 INFO [Listener at localhost.localdomain/34239] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-12 19:17:10,769 INFO [Listener at localhost.localdomain/34239] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-12 19:17:10,770 INFO [Listener at localhost.localdomain/34239] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-12 19:17:10,770 INFO [Listener at localhost.localdomain/34239] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-12 19:17:10,770 INFO [Listener at localhost.localdomain/34239] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer 
hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-12 19:17:10,774 INFO [Listener at localhost.localdomain/34239] ipc.NettyRpcServer(120): Bind to /148.251.75.209:36311 2023-07-12 19:17:10,775 INFO [Listener at localhost.localdomain/34239] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-12 19:17:10,784 DEBUG [Listener at localhost.localdomain/34239] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-12 19:17:10,786 INFO [Listener at localhost.localdomain/34239] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-12 19:17:10,800 INFO [Listener at localhost.localdomain/34239] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-12 19:17:10,803 INFO [Listener at localhost.localdomain/34239] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:36311 connecting to ZooKeeper ensemble=127.0.0.1:52922 2023-07-12 19:17:10,816 DEBUG [Listener at localhost.localdomain/34239] zookeeper.ZKUtil(162): regionserver:363110x0, quorum=127.0.0.1:52922, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-12 19:17:10,817 DEBUG [Listener at localhost.localdomain/34239] zookeeper.ZKUtil(162): regionserver:363110x0, quorum=127.0.0.1:52922, baseZNode=/hbase Set watcher on existing znode=/hbase/running 2023-07-12 19:17:10,818 DEBUG [Listener at localhost.localdomain/34239-EventThread] zookeeper.ZKWatcher(600): regionserver:363110x0, quorum=127.0.0.1:52922, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-12 19:17:10,824 DEBUG [Listener at localhost.localdomain/34239] zookeeper.ZKUtil(164): regionserver:363110x0, quorum=127.0.0.1:52922, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-12 19:17:10,827 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:36311-0x100829d951f000b connected 2023-07-12 19:17:10,828 DEBUG [Listener at localhost.localdomain/34239] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=36311 2023-07-12 19:17:10,828 DEBUG [Listener at localhost.localdomain/34239] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=36311 2023-07-12 19:17:10,829 DEBUG [Listener at localhost.localdomain/34239] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=36311 2023-07-12 19:17:10,834 DEBUG [Listener at localhost.localdomain/34239] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=36311 2023-07-12 19:17:10,838 DEBUG [Listener at localhost.localdomain/34239] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=36311 2023-07-12 19:17:10,841 INFO [Listener at localhost.localdomain/34239] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-12 19:17:10,841 INFO [Listener at localhost.localdomain/34239] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-12 19:17:10,841 INFO [Listener at 
localhost.localdomain/34239] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-12 19:17:10,842 INFO [Listener at localhost.localdomain/34239] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-12 19:17:10,842 INFO [Listener at localhost.localdomain/34239] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-12 19:17:10,842 INFO [Listener at localhost.localdomain/34239] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-12 19:17:10,842 INFO [Listener at localhost.localdomain/34239] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-12 19:17:10,843 INFO [Listener at localhost.localdomain/34239] http.HttpServer(1146): Jetty bound to port 34593 2023-07-12 19:17:10,843 INFO [Listener at localhost.localdomain/34239] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-12 19:17:10,865 INFO [Listener at localhost.localdomain/34239] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-12 19:17:10,865 INFO [Listener at localhost.localdomain/34239] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@b624c2b{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/29ea73fb-101e-b512-aded-a1ff34bb26e9/hadoop.log.dir/,AVAILABLE} 2023-07-12 19:17:10,866 INFO [Listener at localhost.localdomain/34239] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-12 19:17:10,866 INFO [Listener at localhost.localdomain/34239] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@41054beb{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,AVAILABLE} 2023-07-12 19:17:10,891 INFO [Listener at localhost.localdomain/34239] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-12 19:17:10,892 INFO [Listener at localhost.localdomain/34239] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-12 19:17:10,892 INFO [Listener at localhost.localdomain/34239] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-12 19:17:10,893 INFO [Listener at localhost.localdomain/34239] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-12 19:17:10,895 INFO [Listener at localhost.localdomain/34239] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-12 19:17:10,896 INFO [Listener at localhost.localdomain/34239] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@4d30d5bf{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver/,AVAILABLE}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver} 2023-07-12 19:17:10,898 INFO [Listener at localhost.localdomain/34239] 
server.AbstractConnector(333): Started ServerConnector@29e86b8d{HTTP/1.1, (http/1.1)}{0.0.0.0:34593} 2023-07-12 19:17:10,898 INFO [Listener at localhost.localdomain/34239] server.Server(415): Started @12871ms 2023-07-12 19:17:10,923 INFO [RS:3;jenkins-hbase20:36311] regionserver.HRegionServer(951): ClusterId : 78e9d107-58d5-4c9c-92d3-0848dd0d4f4d 2023-07-12 19:17:10,923 DEBUG [RS:3;jenkins-hbase20:36311] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-12 19:17:10,926 DEBUG [RS:3;jenkins-hbase20:36311] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-12 19:17:10,926 DEBUG [RS:3;jenkins-hbase20:36311] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-12 19:17:10,928 DEBUG [RS:3;jenkins-hbase20:36311] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-12 19:17:10,930 DEBUG [RS:3;jenkins-hbase20:36311] zookeeper.ReadOnlyZKClient(139): Connect 0x3a975367 to 127.0.0.1:52922 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-12 19:17:10,954 DEBUG [RS:3;jenkins-hbase20:36311] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@3450aa45, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-12 19:17:10,954 DEBUG [RS:3;jenkins-hbase20:36311] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@306f3817, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase20.apache.org/148.251.75.209:0 2023-07-12 19:17:10,966 DEBUG [RS:3;jenkins-hbase20:36311] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:3;jenkins-hbase20:36311 2023-07-12 19:17:10,966 INFO [RS:3;jenkins-hbase20:36311] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-12 19:17:10,966 INFO [RS:3;jenkins-hbase20:36311] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-12 19:17:10,966 DEBUG [RS:3;jenkins-hbase20:36311] regionserver.HRegionServer(1022): About to register with Master. 2023-07-12 19:17:10,967 INFO [RS:3;jenkins-hbase20:36311] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase20.apache.org,33033,1689189424308 with isa=jenkins-hbase20.apache.org/148.251.75.209:36311, startcode=1689189430768 2023-07-12 19:17:10,968 DEBUG [RS:3;jenkins-hbase20:36311] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-12 19:17:10,983 INFO [RS-EventLoopGroup-1-3] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:46515, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.3 (auth:SIMPLE), service=RegionServerStatusService 2023-07-12 19:17:10,984 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=33033] master.ServerManager(394): Registering regionserver=jenkins-hbase20.apache.org,36311,1689189430768 2023-07-12 19:17:10,984 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase20.apache.org,33033,1689189424308] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
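Annotation: the RS:3 lines above show a fourth region server being brought up inside the already running mini cluster, connecting to ZooKeeper and reporting for duty to the master, after which the rsgroup listener counts it as a member of the "default" group. A minimal sketch of how an extra server like this is typically started from test code follows; it is a sketch only, the class and variable names are illustrative, and it assumes a running HBaseTestingUtility instance that this log does not show directly.

    import org.apache.hadoop.hbase.HBaseTestingUtility;
    import org.apache.hadoop.hbase.MiniHBaseCluster;
    import org.apache.hadoop.hbase.util.JVMClusterUtil;

    public final class ExtraRegionServer {
      // Starts one more HRegionServer thread in an already-running mini cluster;
      // the new server registers with the master ("reportForDuty") and is then
      // tracked by the rsgroup ServerEventsListenerThread as a default-group member.
      public static void startOne(HBaseTestingUtility testUtil) throws Exception {
        MiniHBaseCluster cluster = testUtil.getMiniHBaseCluster();
        JVMClusterUtil.RegionServerThread rst = cluster.startRegionServer();
        rst.waitForServerOnline(); // block until the region server is online
      }
    }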
2023-07-12 19:17:10,985 DEBUG [RS:3;jenkins-hbase20:36311] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0 2023-07-12 19:17:10,985 DEBUG [RS:3;jenkins-hbase20:36311] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost.localdomain:43233 2023-07-12 19:17:10,985 DEBUG [RS:3;jenkins-hbase20:36311] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=42575 2023-07-12 19:17:10,992 DEBUG [Listener at localhost.localdomain/34239-EventThread] zookeeper.ZKWatcher(600): regionserver:39963-0x100829d951f0001, quorum=127.0.0.1:52922, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-12 19:17:10,992 DEBUG [Listener at localhost.localdomain/34239-EventThread] zookeeper.ZKWatcher(600): regionserver:36571-0x100829d951f0003, quorum=127.0.0.1:52922, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-12 19:17:10,992 DEBUG [Listener at localhost.localdomain/34239-EventThread] zookeeper.ZKWatcher(600): regionserver:43021-0x100829d951f0002, quorum=127.0.0.1:52922, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-12 19:17:10,992 DEBUG [Listener at localhost.localdomain/34239-EventThread] zookeeper.ZKWatcher(600): master:33033-0x100829d951f0000, quorum=127.0.0.1:52922, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-12 19:17:10,993 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase20.apache.org,33033,1689189424308] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 19:17:10,995 DEBUG [RS:3;jenkins-hbase20:36311] zookeeper.ZKUtil(162): regionserver:36311-0x100829d951f000b, quorum=127.0.0.1:52922, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,36311,1689189430768 2023-07-12 19:17:10,995 WARN [RS:3;jenkins-hbase20:36311] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
2023-07-12 19:17:10,995 INFO [RS:3;jenkins-hbase20:36311] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-12 19:17:10,995 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase20.apache.org,36311,1689189430768] 2023-07-12 19:17:10,995 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase20.apache.org,33033,1689189424308] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-12 19:17:10,995 DEBUG [RS:3;jenkins-hbase20:36311] regionserver.HRegionServer(1948): logDir=hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/WALs/jenkins-hbase20.apache.org,36311,1689189430768 2023-07-12 19:17:10,996 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:36571-0x100829d951f0003, quorum=127.0.0.1:52922, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,39963,1689189426501 2023-07-12 19:17:10,996 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:39963-0x100829d951f0001, quorum=127.0.0.1:52922, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,39963,1689189426501 2023-07-12 19:17:10,996 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:43021-0x100829d951f0002, quorum=127.0.0.1:52922, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,39963,1689189426501 2023-07-12 19:17:11,015 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:36571-0x100829d951f0003, quorum=127.0.0.1:52922, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,36571,1689189426727 2023-07-12 19:17:11,015 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase20.apache.org,33033,1689189424308] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 4 2023-07-12 19:17:11,015 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:39963-0x100829d951f0001, quorum=127.0.0.1:52922, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,36571,1689189426727 2023-07-12 19:17:11,016 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:43021-0x100829d951f0002, quorum=127.0.0.1:52922, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,36571,1689189426727 2023-07-12 19:17:11,016 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:43021-0x100829d951f0002, quorum=127.0.0.1:52922, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,36311,1689189430768 2023-07-12 19:17:11,016 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:36571-0x100829d951f0003, quorum=127.0.0.1:52922, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,36311,1689189430768 2023-07-12 19:17:11,016 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:39963-0x100829d951f0001, quorum=127.0.0.1:52922, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,36311,1689189430768 2023-07-12 19:17:11,017 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:36571-0x100829d951f0003, quorum=127.0.0.1:52922, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,43021,1689189426641 2023-07-12 
19:17:11,017 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:43021-0x100829d951f0002, quorum=127.0.0.1:52922, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,43021,1689189426641 2023-07-12 19:17:11,017 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:39963-0x100829d951f0001, quorum=127.0.0.1:52922, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,43021,1689189426641 2023-07-12 19:17:11,032 DEBUG [RS:3;jenkins-hbase20:36311] zookeeper.ZKUtil(162): regionserver:36311-0x100829d951f000b, quorum=127.0.0.1:52922, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,39963,1689189426501 2023-07-12 19:17:11,034 DEBUG [RS:3;jenkins-hbase20:36311] zookeeper.ZKUtil(162): regionserver:36311-0x100829d951f000b, quorum=127.0.0.1:52922, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,36571,1689189426727 2023-07-12 19:17:11,035 DEBUG [RS:3;jenkins-hbase20:36311] zookeeper.ZKUtil(162): regionserver:36311-0x100829d951f000b, quorum=127.0.0.1:52922, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,36311,1689189430768 2023-07-12 19:17:11,035 DEBUG [RS:3;jenkins-hbase20:36311] zookeeper.ZKUtil(162): regionserver:36311-0x100829d951f000b, quorum=127.0.0.1:52922, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,43021,1689189426641 2023-07-12 19:17:11,037 DEBUG [RS:3;jenkins-hbase20:36311] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-12 19:17:11,038 INFO [RS:3;jenkins-hbase20:36311] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-12 19:17:11,051 INFO [RS:3;jenkins-hbase20:36311] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-12 19:17:11,059 INFO [RS:3;jenkins-hbase20:36311] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-12 19:17:11,059 INFO [RS:3;jenkins-hbase20:36311] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-12 19:17:11,059 INFO [RS:3;jenkins-hbase20:36311] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-12 19:17:11,062 INFO [RS:3;jenkins-hbase20:36311] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
2023-07-12 19:17:11,063 DEBUG [RS:3;jenkins-hbase20:36311] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-12 19:17:11,063 DEBUG [RS:3;jenkins-hbase20:36311] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-12 19:17:11,063 DEBUG [RS:3;jenkins-hbase20:36311] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-12 19:17:11,063 DEBUG [RS:3;jenkins-hbase20:36311] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-12 19:17:11,063 DEBUG [RS:3;jenkins-hbase20:36311] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-12 19:17:11,063 DEBUG [RS:3;jenkins-hbase20:36311] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase20:0, corePoolSize=2, maxPoolSize=2 2023-07-12 19:17:11,064 DEBUG [RS:3;jenkins-hbase20:36311] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-12 19:17:11,064 DEBUG [RS:3;jenkins-hbase20:36311] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-12 19:17:11,064 DEBUG [RS:3;jenkins-hbase20:36311] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-12 19:17:11,064 DEBUG [RS:3;jenkins-hbase20:36311] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-12 19:17:11,066 INFO [RS:3;jenkins-hbase20:36311] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-12 19:17:11,066 INFO [RS:3;jenkins-hbase20:36311] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-12 19:17:11,066 INFO [RS:3;jenkins-hbase20:36311] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-12 19:17:11,094 INFO [RS:3;jenkins-hbase20:36311] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-12 19:17:11,095 INFO [RS:3;jenkins-hbase20:36311] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,36311,1689189430768-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
2023-07-12 19:17:11,120 INFO [RS:3;jenkins-hbase20:36311] regionserver.Replication(203): jenkins-hbase20.apache.org,36311,1689189430768 started 2023-07-12 19:17:11,120 INFO [RS:3;jenkins-hbase20:36311] regionserver.HRegionServer(1637): Serving as jenkins-hbase20.apache.org,36311,1689189430768, RpcServer on jenkins-hbase20.apache.org/148.251.75.209:36311, sessionid=0x100829d951f000b 2023-07-12 19:17:11,120 DEBUG [RS:3;jenkins-hbase20:36311] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-12 19:17:11,120 DEBUG [RS:3;jenkins-hbase20:36311] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase20.apache.org,36311,1689189430768 2023-07-12 19:17:11,120 DEBUG [RS:3;jenkins-hbase20:36311] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase20.apache.org,36311,1689189430768' 2023-07-12 19:17:11,120 DEBUG [RS:3;jenkins-hbase20:36311] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-12 19:17:11,127 DEBUG [RS:3;jenkins-hbase20:36311] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-12 19:17:11,129 DEBUG [RS:3;jenkins-hbase20:36311] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-12 19:17:11,129 DEBUG [RS:3;jenkins-hbase20:36311] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-12 19:17:11,129 DEBUG [RS:3;jenkins-hbase20:36311] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase20.apache.org,36311,1689189430768 2023-07-12 19:17:11,129 DEBUG [RS:3;jenkins-hbase20:36311] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase20.apache.org,36311,1689189430768' 2023-07-12 19:17:11,129 DEBUG [RS:3;jenkins-hbase20:36311] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-12 19:17:11,129 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//148.251.75.209 add rsgroup master 2023-07-12 19:17:11,130 DEBUG [RS:3;jenkins-hbase20:36311] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-12 19:17:11,132 DEBUG [RS:3;jenkins-hbase20:36311] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-12 19:17:11,132 INFO [RS:3;jenkins-hbase20:36311] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-12 19:17:11,132 INFO [RS:3;jenkins-hbase20:36311] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
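Annotation: the "add rsgroup master" request above reaches the master through the RSGroupAdminEndpoint coprocessor and ends up as an RSGroupAdminService.AddRSGroup call plus a ZooKeeper update under /hbase/rsgroup. A minimal client-side sketch of such a call using RSGroupAdminClient from the hbase-rsgroup module is below; the connection handling and the group name are illustrative assumptions, not the test's exact code.

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

    public final class AddGroup {
      public static void main(String[] args) throws Exception {
        // Connect with whatever hbase-site.xml is on the classpath.
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create())) {
          RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
          // Sends RSGroupAdminService.AddRSGroup to the master; the new group then
          // shows up as a /hbase/rsgroup/<name> znode, as logged above.
          rsGroupAdmin.addRSGroup("my_group"); // placeholder group name
        }
      }
    }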
2023-07-12 19:17:11,143 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default
2023-07-12 19:17:11,145 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master
2023-07-12 19:17:11,147 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4
2023-07-12 19:17:11,151 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.AddRSGroup
2023-07-12 19:17:11,156 DEBUG [hconnection-0x25a58e9d-metaLookup-shared--pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false
2023-07-12 19:17:11,168 INFO [RS-EventLoopGroup-4-2] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:38638, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService
2023-07-12 19:17:11,184 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup
2023-07-12 19:17:11,184 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos
2023-07-12 19:17:11,201 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//148.251.75.209 move servers [jenkins-hbase20.apache.org:33033] to rsgroup master
2023-07-12 19:17:11,202 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] ipc.MetricsHBaseServer(134): Unknown exception type
org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:33033 is either offline or it does not exist.
    at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408)
    at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213)
    at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193)
    at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900)
    at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349)
2023-07-12 19:17:11,202 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] ipc.CallRunner(144): callId: 20 service: MasterService methodName: ExecMasterService size: 119 connection: 148.251.75.209:37696 deadline: 1689190631200, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:33033 is either offline or it does not exist.
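Annotation: the ConstraintException above is provoked by asking the rsgroup endpoint to move the master's own address, which is not a known region server, into a group; the test setup does this deliberately and only logs the result. A hedged sketch of the corresponding client call is below; the host, port and group name are placeholders, not values taken from this run.

    import java.util.Collections;

    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.net.Address;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

    public final class MoveServersExample {
      public static void move(Connection conn) throws Exception {
        RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
        // Only addresses of region servers known to the group manager can be
        // moved; passing anything else (for example the master's address) is
        // rejected on the master with a ConstraintException like the one above.
        Address server = Address.fromParts("example-host.example.org", 16020); // placeholder
        rsGroupAdmin.moveServers(Collections.singleton(server), "my_group"); // placeholder group
      }
    }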
2023-07-12 19:17:11,203 WARN [Listener at localhost.localdomain/34239] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI
org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:33033 is either offline or it does not exist.
    at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408)
    at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213)
    at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193)
    at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900)
    at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
    at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97)
    at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87)
    at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376)
    at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364)
    at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101)
    at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985)
    at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62)
    at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658)
    at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108)
    at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77)
    at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161)
    at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136)
    at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
    at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
    at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
    at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33)
    at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24)
    at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
    at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61)
    at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
    at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
    at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
    at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
    at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
    at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
    at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
    at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
    at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
    at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
    at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
    at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
    at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299)
    at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.lang.Thread.run(Thread.java:750)
Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:33033 is either offline or it does not exist.
    at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408)
    at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213)
    at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193)
    at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900)
    at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412)
    at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115)
    at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130)
    at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162)
    at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412)
    at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346)
    at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412)
    at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412)
    at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
    at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919)
    at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166)
    at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788)
    at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724)
    at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650)
    at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997)
    at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
    ... 1 more
2023-07-12 19:17:11,206 INFO [Listener at localhost.localdomain/34239] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1])
2023-07-12 19:17:11,208 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup
2023-07-12 19:17:11,208 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos
2023-07-12 19:17:11,209 INFO [Listener at localhost.localdomain/34239] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase20.apache.org:36311, jenkins-hbase20.apache.org:36571, jenkins-hbase20.apache.org:39963, jenkins-hbase20.apache.org:43021], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}]
2023-07-12 19:17:11,215 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//148.251.75.209 initiates rsgroup info retrieval, group=default
2023-07-12 19:17:11,216 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.GetRSGroupInfo
2023-07-12 19:17:11,218 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//148.251.75.209 initiates rsgroup info retrieval, group=default
2023-07-12 19:17:11,218 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.GetRSGroupInfo
2023-07-12 19:17:11,219 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//148.251.75.209 add rsgroup Group_testTableMoveTruncateAndDrop_806716229
2023-07-12 19:17:11,224 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testTableMoveTruncateAndDrop_806716229
2023-07-12 19:17:11,225 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default
2023-07-12 19:17:11,226 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master
2023-07-12 19:17:11,227 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6
2023-07-12 19:17:11,232 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.AddRSGroup
2023-07-12 19:17:11,236 INFO [RS:3;jenkins-hbase20:36311] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase20.apache.org%2C36311%2C1689189430768, suffix=, logDir=hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/WALs/jenkins-hbase20.apache.org,36311,1689189430768,
archiveDir=hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/oldWALs, maxLogs=32 2023-07-12 19:17:11,250 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-12 19:17:11,250 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 19:17:11,258 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//148.251.75.209 move servers [jenkins-hbase20.apache.org:36311, jenkins-hbase20.apache.org:36571] to rsgroup Group_testTableMoveTruncateAndDrop_806716229 2023-07-12 19:17:11,276 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 19:17:11,276 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testTableMoveTruncateAndDrop_806716229 2023-07-12 19:17:11,277 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 19:17:11,278 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-12 19:17:11,285 DEBUG [RS-EventLoopGroup-7-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:42393,DS-486a4b59-ff70-4f76-965f-28d3762f2281,DISK] 2023-07-12 19:17:11,285 DEBUG [RS-EventLoopGroup-7-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:36015,DS-c010aef2-0c14-458c-b1aa-5ac124bdef5d,DISK] 2023-07-12 19:17:11,288 DEBUG [RS-EventLoopGroup-7-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:35389,DS-47e5ebb6-1f77-4af6-bdfc-1e0f975f2d77,DISK] 2023-07-12 19:17:11,289 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminServer(238): Moving server region 80f898828c5a9814a93d19dfb7ad9318, which do not belong to RSGroup Group_testTableMoveTruncateAndDrop_806716229 2023-07-12 19:17:11,292 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] procedure2.ProcedureExecutor(1029): Stored pid=12, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:namespace, region=80f898828c5a9814a93d19dfb7ad9318, REOPEN/MOVE 2023-07-12 19:17:11,293 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group default, current retry=0 2023-07-12 19:17:11,299 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=12, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:namespace, region=80f898828c5a9814a93d19dfb7ad9318, REOPEN/MOVE 2023-07-12 19:17:11,300 INFO [RS:3;jenkins-hbase20:36311] wal.AbstractFSWAL(806): New WAL 
/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/WALs/jenkins-hbase20.apache.org,36311,1689189430768/jenkins-hbase20.apache.org%2C36311%2C1689189430768.1689189431238 2023-07-12 19:17:11,301 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=12 updating hbase:meta row=80f898828c5a9814a93d19dfb7ad9318, regionState=CLOSING, regionLocation=jenkins-hbase20.apache.org,36571,1689189426727 2023-07-12 19:17:11,301 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:namespace,,1689189429517.80f898828c5a9814a93d19dfb7ad9318.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689189431301"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689189431301"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689189431301"}]},"ts":"1689189431301"} 2023-07-12 19:17:11,301 DEBUG [RS:3;jenkins-hbase20:36311] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:36015,DS-c010aef2-0c14-458c-b1aa-5ac124bdef5d,DISK], DatanodeInfoWithStorage[127.0.0.1:35389,DS-47e5ebb6-1f77-4af6-bdfc-1e0f975f2d77,DISK], DatanodeInfoWithStorage[127.0.0.1:42393,DS-486a4b59-ff70-4f76-965f-28d3762f2281,DISK]] 2023-07-12 19:17:11,304 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=13, ppid=12, state=RUNNABLE; CloseRegionProcedure 80f898828c5a9814a93d19dfb7ad9318, server=jenkins-hbase20.apache.org,36571,1689189426727}] 2023-07-12 19:17:11,474 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.UnassignRegionHandler(111): Close 80f898828c5a9814a93d19dfb7ad9318 2023-07-12 19:17:11,476 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing 80f898828c5a9814a93d19dfb7ad9318, disabling compactions & flushes 2023-07-12 19:17:11,476 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1689189429517.80f898828c5a9814a93d19dfb7ad9318. 2023-07-12 19:17:11,476 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1689189429517.80f898828c5a9814a93d19dfb7ad9318. 2023-07-12 19:17:11,476 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1689189429517.80f898828c5a9814a93d19dfb7ad9318. after waiting 0 ms 2023-07-12 19:17:11,476 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1689189429517.80f898828c5a9814a93d19dfb7ad9318. 
2023-07-12 19:17:11,477 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(2745): Flushing 80f898828c5a9814a93d19dfb7ad9318 1/1 column families, dataSize=78 B heapSize=488 B 2023-07-12 19:17:11,620 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=78 B at sequenceid=6 (bloomFilter=true), to=hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/data/hbase/namespace/80f898828c5a9814a93d19dfb7ad9318/.tmp/info/d7aac827e8d447f9b1eef9d5182ff487 2023-07-12 19:17:11,706 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/data/hbase/namespace/80f898828c5a9814a93d19dfb7ad9318/.tmp/info/d7aac827e8d447f9b1eef9d5182ff487 as hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/data/hbase/namespace/80f898828c5a9814a93d19dfb7ad9318/info/d7aac827e8d447f9b1eef9d5182ff487 2023-07-12 19:17:11,745 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/data/hbase/namespace/80f898828c5a9814a93d19dfb7ad9318/info/d7aac827e8d447f9b1eef9d5182ff487, entries=2, sequenceid=6, filesize=4.8 K 2023-07-12 19:17:11,750 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~78 B/78, heapSize ~472 B/472, currentSize=0 B/0 for 80f898828c5a9814a93d19dfb7ad9318 in 273ms, sequenceid=6, compaction requested=false 2023-07-12 19:17:11,752 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:namespace' 2023-07-12 19:17:11,784 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/data/hbase/namespace/80f898828c5a9814a93d19dfb7ad9318/recovered.edits/9.seqid, newMaxSeqId=9, maxSeqId=1 2023-07-12 19:17:11,787 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1838): Closed hbase:namespace,,1689189429517.80f898828c5a9814a93d19dfb7ad9318. 
2023-07-12 19:17:11,787 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for 80f898828c5a9814a93d19dfb7ad9318: 2023-07-12 19:17:11,787 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(3513): Adding 80f898828c5a9814a93d19dfb7ad9318 move to jenkins-hbase20.apache.org,39963,1689189426501 record at close sequenceid=6 2023-07-12 19:17:11,790 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.UnassignRegionHandler(149): Closed 80f898828c5a9814a93d19dfb7ad9318 2023-07-12 19:17:11,791 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=12 updating hbase:meta row=80f898828c5a9814a93d19dfb7ad9318, regionState=CLOSED 2023-07-12 19:17:11,792 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"hbase:namespace,,1689189429517.80f898828c5a9814a93d19dfb7ad9318.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689189431791"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689189431791"}]},"ts":"1689189431791"} 2023-07-12 19:17:11,798 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=13, resume processing ppid=12 2023-07-12 19:17:11,798 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=13, ppid=12, state=SUCCESS; CloseRegionProcedure 80f898828c5a9814a93d19dfb7ad9318, server=jenkins-hbase20.apache.org,36571,1689189426727 in 490 msec 2023-07-12 19:17:11,799 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:namespace, region=80f898828c5a9814a93d19dfb7ad9318, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase20.apache.org,39963,1689189426501; forceNewPlan=false, retain=false 2023-07-12 19:17:11,950 INFO [jenkins-hbase20:33033] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
2023-07-12 19:17:11,950 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=12 updating hbase:meta row=80f898828c5a9814a93d19dfb7ad9318, regionState=OPENING, regionLocation=jenkins-hbase20.apache.org,39963,1689189426501 2023-07-12 19:17:11,950 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:namespace,,1689189429517.80f898828c5a9814a93d19dfb7ad9318.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689189431950"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689189431950"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689189431950"}]},"ts":"1689189431950"} 2023-07-12 19:17:11,954 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=14, ppid=12, state=RUNNABLE; OpenRegionProcedure 80f898828c5a9814a93d19dfb7ad9318, server=jenkins-hbase20.apache.org,39963,1689189426501}] 2023-07-12 19:17:12,107 DEBUG [RSProcedureDispatcher-pool-1] master.ServerManager(712): New admin connection to jenkins-hbase20.apache.org,39963,1689189426501 2023-07-12 19:17:12,108 DEBUG [RSProcedureDispatcher-pool-1] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-12 19:17:12,113 INFO [RS-EventLoopGroup-3-2] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:32926, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-12 19:17:12,121 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1689189429517.80f898828c5a9814a93d19dfb7ad9318. 2023-07-12 19:17:12,121 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 80f898828c5a9814a93d19dfb7ad9318, NAME => 'hbase:namespace,,1689189429517.80f898828c5a9814a93d19dfb7ad9318.', STARTKEY => '', ENDKEY => ''} 2023-07-12 19:17:12,122 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace 80f898828c5a9814a93d19dfb7ad9318 2023-07-12 19:17:12,122 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1689189429517.80f898828c5a9814a93d19dfb7ad9318.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 19:17:12,122 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7894): checking encryption for 80f898828c5a9814a93d19dfb7ad9318 2023-07-12 19:17:12,122 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7897): checking classloading for 80f898828c5a9814a93d19dfb7ad9318 2023-07-12 19:17:12,147 INFO [StoreOpener-80f898828c5a9814a93d19dfb7ad9318-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 80f898828c5a9814a93d19dfb7ad9318 2023-07-12 19:17:12,149 DEBUG [StoreOpener-80f898828c5a9814a93d19dfb7ad9318-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/data/hbase/namespace/80f898828c5a9814a93d19dfb7ad9318/info 2023-07-12 19:17:12,149 DEBUG [StoreOpener-80f898828c5a9814a93d19dfb7ad9318-1] util.CommonFSUtils(522): Set 
storagePolicy=HOT for path=hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/data/hbase/namespace/80f898828c5a9814a93d19dfb7ad9318/info 2023-07-12 19:17:12,150 INFO [StoreOpener-80f898828c5a9814a93d19dfb7ad9318-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 80f898828c5a9814a93d19dfb7ad9318 columnFamilyName info 2023-07-12 19:17:12,170 DEBUG [StoreOpener-80f898828c5a9814a93d19dfb7ad9318-1] regionserver.HStore(539): loaded hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/data/hbase/namespace/80f898828c5a9814a93d19dfb7ad9318/info/d7aac827e8d447f9b1eef9d5182ff487 2023-07-12 19:17:12,171 INFO [StoreOpener-80f898828c5a9814a93d19dfb7ad9318-1] regionserver.HStore(310): Store=80f898828c5a9814a93d19dfb7ad9318/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 19:17:12,173 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/data/hbase/namespace/80f898828c5a9814a93d19dfb7ad9318 2023-07-12 19:17:12,178 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/data/hbase/namespace/80f898828c5a9814a93d19dfb7ad9318 2023-07-12 19:17:12,182 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1055): writing seq id for 80f898828c5a9814a93d19dfb7ad9318 2023-07-12 19:17:12,183 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1072): Opened 80f898828c5a9814a93d19dfb7ad9318; next sequenceid=10; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11577249120, jitterRate=0.0782153457403183}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-12 19:17:12,183 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(965): Region open journal for 80f898828c5a9814a93d19dfb7ad9318: 2023-07-12 19:17:12,184 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:namespace,,1689189429517.80f898828c5a9814a93d19dfb7ad9318., pid=14, masterSystemTime=1689189432107 2023-07-12 19:17:12,188 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:namespace,,1689189429517.80f898828c5a9814a93d19dfb7ad9318. 
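Annotation: at this point the hbase:namespace region 80f898828c5a9814a93d19dfb7ad9318 has been reopened on jenkins-hbase20.apache.org,39963,1689189426501 as part of the rsgroup-driven REOPEN/MOVE. A small sketch of how a client could confirm the region's new location is below; it is a sketch only and the class and variable names are assumptions.

    import org.apache.hadoop.hbase.HConstants;
    import org.apache.hadoop.hbase.HRegionLocation;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.RegionLocator;

    public final class NamespaceLocation {
      public static void print(Connection conn) throws Exception {
        try (RegionLocator locator = conn.getRegionLocator(TableName.NAMESPACE_TABLE_NAME)) {
          // reload=true bypasses the client-side cache so the post-move server is returned
          HRegionLocation loc = locator.getRegionLocation(HConstants.EMPTY_START_ROW, true);
          System.out.println("hbase:namespace is on " + loc.getServerName());
        }
      }
    }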
2023-07-12 19:17:12,189 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1689189429517.80f898828c5a9814a93d19dfb7ad9318. 2023-07-12 19:17:12,191 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=12 updating hbase:meta row=80f898828c5a9814a93d19dfb7ad9318, regionState=OPEN, openSeqNum=10, regionLocation=jenkins-hbase20.apache.org,39963,1689189426501 2023-07-12 19:17:12,192 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1689189429517.80f898828c5a9814a93d19dfb7ad9318.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689189432190"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689189432190"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689189432190"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689189432190"}]},"ts":"1689189432190"} 2023-07-12 19:17:12,200 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=14, resume processing ppid=12 2023-07-12 19:17:12,200 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=14, ppid=12, state=SUCCESS; OpenRegionProcedure 80f898828c5a9814a93d19dfb7ad9318, server=jenkins-hbase20.apache.org,39963,1689189426501 in 241 msec 2023-07-12 19:17:12,202 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=12, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=80f898828c5a9814a93d19dfb7ad9318, REOPEN/MOVE in 910 msec 2023-07-12 19:17:12,294 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] procedure.ProcedureSyncWait(216): waitFor pid=12 2023-07-12 19:17:12,294 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase20.apache.org,36311,1689189430768, jenkins-hbase20.apache.org,36571,1689189426727] are moved back to default 2023-07-12 19:17:12,294 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminServer(438): Move servers done: default => Group_testTableMoveTruncateAndDrop_806716229 2023-07-12 19:17:12,294 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveServers 2023-07-12 19:17:12,299 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-12 19:17:12,299 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 19:17:12,303 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//148.251.75.209 initiates rsgroup info retrieval, group=Group_testTableMoveTruncateAndDrop_806716229 2023-07-12 19:17:12,303 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 19:17:12,313 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.HMaster$4(2112): Client=jenkins//148.251.75.209 create 'Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => 
'1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-12 19:17:12,315 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] procedure2.ProcedureExecutor(1029): Stored pid=15, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-12 19:17:12,319 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=15, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop execute state=CREATE_TABLE_PRE_OPERATION 2023-07-12 19:17:12,324 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.MasterRpcServices(700): Client=jenkins//148.251.75.209 procedure request for creating table: namespace: "default" qualifier: "Group_testTableMoveTruncateAndDrop" procId is: 15 2023-07-12 19:17:12,325 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 19:17:12,325 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testTableMoveTruncateAndDrop_806716229 2023-07-12 19:17:12,326 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 19:17:12,327 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-12 19:17:12,335 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=15, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-12 19:17:12,337 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.MasterRpcServices(1230): Checking to see if procedure is done pid=15 2023-07-12 19:17:12,343 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/.tmp/data/default/Group_testTableMoveTruncateAndDrop/338e8802045d7b2a5da83a95c9f1aff3 2023-07-12 19:17:12,343 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/.tmp/data/default/Group_testTableMoveTruncateAndDrop/d8b4be039bb66e05e5b7e87e85c454ed 2023-07-12 19:17:12,344 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/.tmp/data/default/Group_testTableMoveTruncateAndDrop/f3e8991941bf8cc6182c695ccc396f36 2023-07-12 19:17:12,344 DEBUG [HFileArchiver-6] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/.tmp/data/default/Group_testTableMoveTruncateAndDrop/08074a1beba6aeec461717c2440138cb 2023-07-12 19:17:12,344 DEBUG [HFileArchiver-7] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/.tmp/data/default/Group_testTableMoveTruncateAndDrop/b0a19d397667f15760caca207e8c44a2 2023-07-12 19:17:12,344 DEBUG [HFileArchiver-3] backup.HFileArchiver(153): Directory hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/.tmp/data/default/Group_testTableMoveTruncateAndDrop/338e8802045d7b2a5da83a95c9f1aff3 
empty. 2023-07-12 19:17:12,344 DEBUG [HFileArchiver-4] backup.HFileArchiver(153): Directory hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/.tmp/data/default/Group_testTableMoveTruncateAndDrop/d8b4be039bb66e05e5b7e87e85c454ed empty. 2023-07-12 19:17:12,345 DEBUG [HFileArchiver-5] backup.HFileArchiver(153): Directory hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/.tmp/data/default/Group_testTableMoveTruncateAndDrop/f3e8991941bf8cc6182c695ccc396f36 empty. 2023-07-12 19:17:12,345 DEBUG [HFileArchiver-6] backup.HFileArchiver(153): Directory hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/.tmp/data/default/Group_testTableMoveTruncateAndDrop/08074a1beba6aeec461717c2440138cb empty. 2023-07-12 19:17:12,345 DEBUG [HFileArchiver-7] backup.HFileArchiver(153): Directory hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/.tmp/data/default/Group_testTableMoveTruncateAndDrop/b0a19d397667f15760caca207e8c44a2 empty. 2023-07-12 19:17:12,346 DEBUG [HFileArchiver-3] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/.tmp/data/default/Group_testTableMoveTruncateAndDrop/338e8802045d7b2a5da83a95c9f1aff3 2023-07-12 19:17:12,346 DEBUG [HFileArchiver-5] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/.tmp/data/default/Group_testTableMoveTruncateAndDrop/f3e8991941bf8cc6182c695ccc396f36 2023-07-12 19:17:12,346 DEBUG [HFileArchiver-6] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/.tmp/data/default/Group_testTableMoveTruncateAndDrop/08074a1beba6aeec461717c2440138cb 2023-07-12 19:17:12,346 DEBUG [HFileArchiver-4] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/.tmp/data/default/Group_testTableMoveTruncateAndDrop/d8b4be039bb66e05e5b7e87e85c454ed 2023-07-12 19:17:12,348 DEBUG [HFileArchiver-7] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/.tmp/data/default/Group_testTableMoveTruncateAndDrop/b0a19d397667f15760caca207e8c44a2 2023-07-12 19:17:12,348 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived Group_testTableMoveTruncateAndDrop regions 2023-07-12 19:17:12,372 DEBUG [PEWorker-4] util.FSTableDescriptors(570): Wrote into hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/.tmp/data/default/Group_testTableMoveTruncateAndDrop/.tabledesc/.tableinfo.0000000001 2023-07-12 19:17:12,374 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(7675): creating {ENCODED => f3e8991941bf8cc6182c695ccc396f36, NAME => 'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689189432311.f3e8991941bf8cc6182c695ccc396f36.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => 
'0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/.tmp 2023-07-12 19:17:12,374 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(7675): creating {ENCODED => 338e8802045d7b2a5da83a95c9f1aff3, NAME => 'Group_testTableMoveTruncateAndDrop,,1689189432311.338e8802045d7b2a5da83a95c9f1aff3.', STARTKEY => '', ENDKEY => 'aaaaa'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/.tmp 2023-07-12 19:17:12,374 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(7675): creating {ENCODED => d8b4be039bb66e05e5b7e87e85c454ed, NAME => 'Group_testTableMoveTruncateAndDrop,aaaaa,1689189432311.d8b4be039bb66e05e5b7e87e85c454ed.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/.tmp 2023-07-12 19:17:12,421 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,,1689189432311.338e8802045d7b2a5da83a95c9f1aff3.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 19:17:12,422 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1604): Closing 338e8802045d7b2a5da83a95c9f1aff3, disabling compactions & flushes 2023-07-12 19:17:12,422 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,,1689189432311.338e8802045d7b2a5da83a95c9f1aff3. 2023-07-12 19:17:12,422 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,,1689189432311.338e8802045d7b2a5da83a95c9f1aff3. 2023-07-12 19:17:12,422 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,,1689189432311.338e8802045d7b2a5da83a95c9f1aff3. after waiting 0 ms 2023-07-12 19:17:12,422 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,,1689189432311.338e8802045d7b2a5da83a95c9f1aff3. 2023-07-12 19:17:12,422 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,,1689189432311.338e8802045d7b2a5da83a95c9f1aff3. 
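The entries above record the test client calling RSGroupAdminService.MoveServers, ListRSGroupInfos and GetRSGroupInfo against the master, moving servers into the group Group_testTableMoveTruncateAndDrop_806716229. As a rough illustration only, the following is a minimal sketch of issuing the same kind of calls through the hbase-rsgroup client API; the connection setup, the chosen server port and the assumption that RSGroupAdminClient is reachable from the classpath are mine, not taken from this log.

    import java.util.Collections;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.net.Address;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
    import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

    public class RsGroupMoveSketch {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        try (Connection conn = ConnectionFactory.createConnection(conf)) {
          RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
          String group = "Group_testTableMoveTruncateAndDrop_806716229";
          // Move one region server into the group (host:port chosen for illustration),
          // then read the group definition back, mirroring MoveServers/GetRSGroupInfo above.
          rsGroupAdmin.moveServers(
              Collections.singleton(Address.fromParts("jenkins-hbase20.apache.org", 36311)),
              group);
          RSGroupInfo info = rsGroupAdmin.getRSGroupInfo(group);
          System.out.println(group + " servers: " + info.getServers());
        }
      }
    }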
2023-07-12 19:17:12,422 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1558): Region close journal for 338e8802045d7b2a5da83a95c9f1aff3: 2023-07-12 19:17:12,424 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(7675): creating {ENCODED => 08074a1beba6aeec461717c2440138cb, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689189432311.08074a1beba6aeec461717c2440138cb.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/.tmp 2023-07-12 19:17:12,424 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689189432311.f3e8991941bf8cc6182c695ccc396f36.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 19:17:12,425 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1604): Closing f3e8991941bf8cc6182c695ccc396f36, disabling compactions & flushes 2023-07-12 19:17:12,425 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689189432311.f3e8991941bf8cc6182c695ccc396f36. 2023-07-12 19:17:12,425 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689189432311.f3e8991941bf8cc6182c695ccc396f36. 2023-07-12 19:17:12,425 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689189432311.f3e8991941bf8cc6182c695ccc396f36. after waiting 0 ms 2023-07-12 19:17:12,425 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689189432311.f3e8991941bf8cc6182c695ccc396f36. 2023-07-12 19:17:12,425 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689189432311.f3e8991941bf8cc6182c695ccc396f36. 
2023-07-12 19:17:12,425 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1558): Region close journal for f3e8991941bf8cc6182c695ccc396f36: 2023-07-12 19:17:12,425 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(7675): creating {ENCODED => b0a19d397667f15760caca207e8c44a2, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1689189432311.b0a19d397667f15760caca207e8c44a2.', STARTKEY => 'zzzzz', ENDKEY => ''}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/.tmp 2023-07-12 19:17:12,452 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.MasterRpcServices(1230): Checking to see if procedure is done pid=15 2023-07-12 19:17:12,468 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689189432311.08074a1beba6aeec461717c2440138cb.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 19:17:12,470 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1604): Closing 08074a1beba6aeec461717c2440138cb, disabling compactions & flushes 2023-07-12 19:17:12,470 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689189432311.08074a1beba6aeec461717c2440138cb. 2023-07-12 19:17:12,470 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689189432311.08074a1beba6aeec461717c2440138cb. 2023-07-12 19:17:12,470 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689189432311.08074a1beba6aeec461717c2440138cb. after waiting 0 ms 2023-07-12 19:17:12,470 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689189432311.08074a1beba6aeec461717c2440138cb. 2023-07-12 19:17:12,470 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689189432311.08074a1beba6aeec461717c2440138cb. 
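The CreateTableProcedure entries above show the full table descriptor (REGION_REPLICATION '1', a single family 'f' with VERSIONS '1', BLOCKSIZE '65536', BLOCKCACHE 'true') and five regions bounded by 'aaaaa', 'i\xBF\x14i\xBE', 'r\x1C\xC7r\x1B' and 'zzzzz'. A client-side sketch of an equivalent create request using the standard HBase 2.x Admin API is shown below; this is not the test's own code, and the connection boilerplate is assumed.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.TableDescriptor;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
    import org.apache.hadoop.hbase.util.Bytes;

    public class CreateTableSketch {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        try (Connection conn = ConnectionFactory.createConnection(conf);
             Admin admin = conn.getAdmin()) {
          TableName table = TableName.valueOf("Group_testTableMoveTruncateAndDrop");
          TableDescriptor desc = TableDescriptorBuilder.newBuilder(table)
              .setRegionReplication(1)                              // REGION_REPLICATION => '1'
              .setColumnFamily(ColumnFamilyDescriptorBuilder.newBuilder(Bytes.toBytes("f"))
                  .setMaxVersions(1)                                // VERSIONS => '1'
                  .setBlocksize(65536)                              // BLOCKSIZE => '65536'
                  .setBlockCacheEnabled(true)                       // BLOCKCACHE => 'true'
                  .build())
              .build();
          // Four split points produce the five regions listed in the log.
          byte[][] splits = new byte[][] {
              Bytes.toBytes("aaaaa"),
              new byte[] {'i', (byte) 0xBF, 0x14, 'i', (byte) 0xBE},  // i\xBF\x14i\xBE
              new byte[] {'r', 0x1C, (byte) 0xC7, 'r', 0x1B},         // r\x1C\xC7r\x1B
              Bytes.toBytes("zzzzz")};
          admin.createTable(desc, splits);
        }
      }
    }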
2023-07-12 19:17:12,470 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1558): Region close journal for 08074a1beba6aeec461717c2440138cb: 2023-07-12 19:17:12,473 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,zzzzz,1689189432311.b0a19d397667f15760caca207e8c44a2.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 19:17:12,473 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1604): Closing b0a19d397667f15760caca207e8c44a2, disabling compactions & flushes 2023-07-12 19:17:12,473 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,zzzzz,1689189432311.b0a19d397667f15760caca207e8c44a2. 2023-07-12 19:17:12,473 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689189432311.b0a19d397667f15760caca207e8c44a2. 2023-07-12 19:17:12,473 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689189432311.b0a19d397667f15760caca207e8c44a2. after waiting 0 ms 2023-07-12 19:17:12,473 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,zzzzz,1689189432311.b0a19d397667f15760caca207e8c44a2. 2023-07-12 19:17:12,473 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,zzzzz,1689189432311.b0a19d397667f15760caca207e8c44a2. 2023-07-12 19:17:12,473 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1558): Region close journal for b0a19d397667f15760caca207e8c44a2: 2023-07-12 19:17:12,664 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.MasterRpcServices(1230): Checking to see if procedure is done pid=15 2023-07-12 19:17:12,821 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,aaaaa,1689189432311.d8b4be039bb66e05e5b7e87e85c454ed.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 19:17:12,821 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1604): Closing d8b4be039bb66e05e5b7e87e85c454ed, disabling compactions & flushes 2023-07-12 19:17:12,821 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,aaaaa,1689189432311.d8b4be039bb66e05e5b7e87e85c454ed. 2023-07-12 19:17:12,821 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689189432311.d8b4be039bb66e05e5b7e87e85c454ed. 2023-07-12 19:17:12,821 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689189432311.d8b4be039bb66e05e5b7e87e85c454ed. 
after waiting 0 ms 2023-07-12 19:17:12,821 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,aaaaa,1689189432311.d8b4be039bb66e05e5b7e87e85c454ed. 2023-07-12 19:17:12,821 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,aaaaa,1689189432311.d8b4be039bb66e05e5b7e87e85c454ed. 2023-07-12 19:17:12,821 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1558): Region close journal for d8b4be039bb66e05e5b7e87e85c454ed: 2023-07-12 19:17:12,826 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=15, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop execute state=CREATE_TABLE_ADD_TO_META 2023-07-12 19:17:12,827 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,,1689189432311.338e8802045d7b2a5da83a95c9f1aff3.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689189432827"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689189432827"}]},"ts":"1689189432827"} 2023-07-12 19:17:12,827 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689189432311.f3e8991941bf8cc6182c695ccc396f36.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689189432827"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689189432827"}]},"ts":"1689189432827"} 2023-07-12 19:17:12,827 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689189432311.08074a1beba6aeec461717c2440138cb.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689189432827"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689189432827"}]},"ts":"1689189432827"} 2023-07-12 19:17:12,827 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689189432311.b0a19d397667f15760caca207e8c44a2.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689189432827"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689189432827"}]},"ts":"1689189432827"} 2023-07-12 19:17:12,827 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689189432311.d8b4be039bb66e05e5b7e87e85c454ed.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689189432827"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689189432827"}]},"ts":"1689189432827"} 2023-07-12 19:17:12,877 INFO [PEWorker-4] hbase.MetaTableAccessor(1496): Added 5 regions to meta. 
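At this point the procedure has logged "Added 5 regions to meta" and marked the table ENABLING in hbase:meta. As a small sketch only (assuming an already-open Admin handle such as the one in the create-table example above), the region boundaries written to meta can be read back from the client side like this:

    import java.util.List;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.RegionInfo;
    import org.apache.hadoop.hbase.util.Bytes;

    public class ListTableRegionsSketch {
      // Prints each region's encoded name and key range, e.g. the five regions added above.
      public static void print(Admin admin) throws Exception {
        List<RegionInfo> regions =
            admin.getRegions(TableName.valueOf("Group_testTableMoveTruncateAndDrop"));
        for (RegionInfo ri : regions) {
          System.out.println(ri.getEncodedName() + " ["
              + Bytes.toStringBinary(ri.getStartKey()) + ", "
              + Bytes.toStringBinary(ri.getEndKey()) + ")");
        }
      }
    }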
2023-07-12 19:17:12,878 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=15, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-12 19:17:12,879 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689189432878"}]},"ts":"1689189432878"} 2023-07-12 19:17:12,881 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=ENABLING in hbase:meta 2023-07-12 19:17:12,887 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase20.apache.org=0} racks are {/default-rack=0} 2023-07-12 19:17:12,887 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-12 19:17:12,887 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-12 19:17:12,887 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-12 19:17:12,888 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=16, ppid=15, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=338e8802045d7b2a5da83a95c9f1aff3, ASSIGN}, {pid=17, ppid=15, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=d8b4be039bb66e05e5b7e87e85c454ed, ASSIGN}, {pid=18, ppid=15, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=f3e8991941bf8cc6182c695ccc396f36, ASSIGN}, {pid=19, ppid=15, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=08074a1beba6aeec461717c2440138cb, ASSIGN}, {pid=20, ppid=15, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=b0a19d397667f15760caca207e8c44a2, ASSIGN}] 2023-07-12 19:17:12,892 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=16, ppid=15, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=338e8802045d7b2a5da83a95c9f1aff3, ASSIGN 2023-07-12 19:17:12,892 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=17, ppid=15, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=d8b4be039bb66e05e5b7e87e85c454ed, ASSIGN 2023-07-12 19:17:12,893 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=18, ppid=15, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=f3e8991941bf8cc6182c695ccc396f36, ASSIGN 2023-07-12 19:17:12,894 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=19, ppid=15, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=08074a1beba6aeec461717c2440138cb, ASSIGN 2023-07-12 19:17:12,897 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=17, ppid=15, 
state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=d8b4be039bb66e05e5b7e87e85c454ed, ASSIGN; state=OFFLINE, location=jenkins-hbase20.apache.org,39963,1689189426501; forceNewPlan=false, retain=false 2023-07-12 19:17:12,898 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=18, ppid=15, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=f3e8991941bf8cc6182c695ccc396f36, ASSIGN; state=OFFLINE, location=jenkins-hbase20.apache.org,43021,1689189426641; forceNewPlan=false, retain=false 2023-07-12 19:17:12,898 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=19, ppid=15, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=08074a1beba6aeec461717c2440138cb, ASSIGN; state=OFFLINE, location=jenkins-hbase20.apache.org,39963,1689189426501; forceNewPlan=false, retain=false 2023-07-12 19:17:12,899 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=20, ppid=15, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=b0a19d397667f15760caca207e8c44a2, ASSIGN 2023-07-12 19:17:12,899 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=16, ppid=15, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=338e8802045d7b2a5da83a95c9f1aff3, ASSIGN; state=OFFLINE, location=jenkins-hbase20.apache.org,39963,1689189426501; forceNewPlan=false, retain=false 2023-07-12 19:17:12,901 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=20, ppid=15, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=b0a19d397667f15760caca207e8c44a2, ASSIGN; state=OFFLINE, location=jenkins-hbase20.apache.org,43021,1689189426641; forceNewPlan=false, retain=false 2023-07-12 19:17:12,967 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.MasterRpcServices(1230): Checking to see if procedure is done pid=15 2023-07-12 19:17:13,048 INFO [jenkins-hbase20:33033] balancer.BaseLoadBalancer(1545): Reassigned 5 regions. 5 retained the pre-restart assignment. 
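The balancer has now planned the five assignments ("Reassigned 5 regions. 5 retained the pre-restart assignment.") and the TransitRegionStateProcedures record the target servers (39963 and 43021). A minimal client-side sketch for inspecting where the regions actually land, assuming an open Connection as in the earlier examples:

    import org.apache.hadoop.hbase.HRegionLocation;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.RegionLocator;

    public class ShowRegionLocationsSketch {
      // Lists which region server hosts each region once assignment completes.
      public static void print(Connection conn) throws Exception {
        try (RegionLocator locator =
            conn.getRegionLocator(TableName.valueOf("Group_testTableMoveTruncateAndDrop"))) {
          for (HRegionLocation loc : locator.getAllRegionLocations()) {
            System.out.println(loc.getRegion().getEncodedName() + " -> " + loc.getServerName());
          }
        }
      }
    }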
2023-07-12 19:17:13,054 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=20 updating hbase:meta row=b0a19d397667f15760caca207e8c44a2, regionState=OPENING, regionLocation=jenkins-hbase20.apache.org,43021,1689189426641 2023-07-12 19:17:13,054 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=17 updating hbase:meta row=d8b4be039bb66e05e5b7e87e85c454ed, regionState=OPENING, regionLocation=jenkins-hbase20.apache.org,39963,1689189426501 2023-07-12 19:17:13,055 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689189432311.b0a19d397667f15760caca207e8c44a2.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689189433054"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689189433054"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689189433054"}]},"ts":"1689189433054"} 2023-07-12 19:17:13,055 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689189432311.d8b4be039bb66e05e5b7e87e85c454ed.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689189433054"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689189433054"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689189433054"}]},"ts":"1689189433054"} 2023-07-12 19:17:13,055 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=16 updating hbase:meta row=338e8802045d7b2a5da83a95c9f1aff3, regionState=OPENING, regionLocation=jenkins-hbase20.apache.org,39963,1689189426501 2023-07-12 19:17:13,055 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,,1689189432311.338e8802045d7b2a5da83a95c9f1aff3.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689189433055"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689189433055"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689189433055"}]},"ts":"1689189433055"} 2023-07-12 19:17:13,054 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=18 updating hbase:meta row=f3e8991941bf8cc6182c695ccc396f36, regionState=OPENING, regionLocation=jenkins-hbase20.apache.org,43021,1689189426641 2023-07-12 19:17:13,056 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689189432311.f3e8991941bf8cc6182c695ccc396f36.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689189433054"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689189433054"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689189433054"}]},"ts":"1689189433054"} 2023-07-12 19:17:13,054 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=19 updating hbase:meta row=08074a1beba6aeec461717c2440138cb, regionState=OPENING, regionLocation=jenkins-hbase20.apache.org,39963,1689189426501 2023-07-12 19:17:13,057 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689189432311.08074a1beba6aeec461717c2440138cb.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689189433054"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689189433054"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689189433054"}]},"ts":"1689189433054"} 2023-07-12 19:17:13,058 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=21, ppid=20, state=RUNNABLE; OpenRegionProcedure 
b0a19d397667f15760caca207e8c44a2, server=jenkins-hbase20.apache.org,43021,1689189426641}] 2023-07-12 19:17:13,060 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=22, ppid=17, state=RUNNABLE; OpenRegionProcedure d8b4be039bb66e05e5b7e87e85c454ed, server=jenkins-hbase20.apache.org,39963,1689189426501}] 2023-07-12 19:17:13,062 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=23, ppid=16, state=RUNNABLE; OpenRegionProcedure 338e8802045d7b2a5da83a95c9f1aff3, server=jenkins-hbase20.apache.org,39963,1689189426501}] 2023-07-12 19:17:13,065 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=24, ppid=18, state=RUNNABLE; OpenRegionProcedure f3e8991941bf8cc6182c695ccc396f36, server=jenkins-hbase20.apache.org,43021,1689189426641}] 2023-07-12 19:17:13,067 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=25, ppid=19, state=RUNNABLE; OpenRegionProcedure 08074a1beba6aeec461717c2440138cb, server=jenkins-hbase20.apache.org,39963,1689189426501}] 2023-07-12 19:17:13,223 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689189432311.f3e8991941bf8cc6182c695ccc396f36. 2023-07-12 19:17:13,223 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => f3e8991941bf8cc6182c695ccc396f36, NAME => 'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689189432311.f3e8991941bf8cc6182c695ccc396f36.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'} 2023-07-12 19:17:13,225 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop f3e8991941bf8cc6182c695ccc396f36 2023-07-12 19:17:13,226 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689189432311.f3e8991941bf8cc6182c695ccc396f36.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 19:17:13,227 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7894): checking encryption for f3e8991941bf8cc6182c695ccc396f36 2023-07-12 19:17:13,227 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7897): checking classloading for f3e8991941bf8cc6182c695ccc396f36 2023-07-12 19:17:13,227 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,aaaaa,1689189432311.d8b4be039bb66e05e5b7e87e85c454ed. 
2023-07-12 19:17:13,228 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => d8b4be039bb66e05e5b7e87e85c454ed, NAME => 'Group_testTableMoveTruncateAndDrop,aaaaa,1689189432311.d8b4be039bb66e05e5b7e87e85c454ed.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'} 2023-07-12 19:17:13,229 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop d8b4be039bb66e05e5b7e87e85c454ed 2023-07-12 19:17:13,229 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,aaaaa,1689189432311.d8b4be039bb66e05e5b7e87e85c454ed.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 19:17:13,229 INFO [StoreOpener-f3e8991941bf8cc6182c695ccc396f36-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region f3e8991941bf8cc6182c695ccc396f36 2023-07-12 19:17:13,229 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7894): checking encryption for d8b4be039bb66e05e5b7e87e85c454ed 2023-07-12 19:17:13,229 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7897): checking classloading for d8b4be039bb66e05e5b7e87e85c454ed 2023-07-12 19:17:13,231 INFO [StoreOpener-d8b4be039bb66e05e5b7e87e85c454ed-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region d8b4be039bb66e05e5b7e87e85c454ed 2023-07-12 19:17:13,231 DEBUG [StoreOpener-f3e8991941bf8cc6182c695ccc396f36-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/data/default/Group_testTableMoveTruncateAndDrop/f3e8991941bf8cc6182c695ccc396f36/f 2023-07-12 19:17:13,231 DEBUG [StoreOpener-f3e8991941bf8cc6182c695ccc396f36-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/data/default/Group_testTableMoveTruncateAndDrop/f3e8991941bf8cc6182c695ccc396f36/f 2023-07-12 19:17:13,232 INFO [StoreOpener-f3e8991941bf8cc6182c695ccc396f36-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region f3e8991941bf8cc6182c695ccc396f36 columnFamilyName f 2023-07-12 19:17:13,233 DEBUG [StoreOpener-d8b4be039bb66e05e5b7e87e85c454ed-1] util.CommonFSUtils(522): Set storagePolicy=HOT for 
path=hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/data/default/Group_testTableMoveTruncateAndDrop/d8b4be039bb66e05e5b7e87e85c454ed/f 2023-07-12 19:17:13,233 DEBUG [StoreOpener-d8b4be039bb66e05e5b7e87e85c454ed-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/data/default/Group_testTableMoveTruncateAndDrop/d8b4be039bb66e05e5b7e87e85c454ed/f 2023-07-12 19:17:13,233 INFO [StoreOpener-f3e8991941bf8cc6182c695ccc396f36-1] regionserver.HStore(310): Store=f3e8991941bf8cc6182c695ccc396f36/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 19:17:13,234 INFO [StoreOpener-d8b4be039bb66e05e5b7e87e85c454ed-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region d8b4be039bb66e05e5b7e87e85c454ed columnFamilyName f 2023-07-12 19:17:13,235 INFO [StoreOpener-d8b4be039bb66e05e5b7e87e85c454ed-1] regionserver.HStore(310): Store=d8b4be039bb66e05e5b7e87e85c454ed/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 19:17:13,235 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/data/default/Group_testTableMoveTruncateAndDrop/f3e8991941bf8cc6182c695ccc396f36 2023-07-12 19:17:13,237 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/data/default/Group_testTableMoveTruncateAndDrop/f3e8991941bf8cc6182c695ccc396f36 2023-07-12 19:17:13,238 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/data/default/Group_testTableMoveTruncateAndDrop/d8b4be039bb66e05e5b7e87e85c454ed 2023-07-12 19:17:13,242 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/data/default/Group_testTableMoveTruncateAndDrop/d8b4be039bb66e05e5b7e87e85c454ed 2023-07-12 19:17:13,246 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1055): writing seq id for f3e8991941bf8cc6182c695ccc396f36 2023-07-12 19:17:13,247 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1055): writing seq id for d8b4be039bb66e05e5b7e87e85c454ed 2023-07-12 19:17:13,249 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] 
wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/data/default/Group_testTableMoveTruncateAndDrop/f3e8991941bf8cc6182c695ccc396f36/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-12 19:17:13,251 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/data/default/Group_testTableMoveTruncateAndDrop/d8b4be039bb66e05e5b7e87e85c454ed/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-12 19:17:13,251 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1072): Opened f3e8991941bf8cc6182c695ccc396f36; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10595582080, jitterRate=-0.013209521770477295}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-12 19:17:13,251 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(965): Region open journal for f3e8991941bf8cc6182c695ccc396f36: 2023-07-12 19:17:13,252 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1072): Opened d8b4be039bb66e05e5b7e87e85c454ed; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10105018400, jitterRate=-0.05889682471752167}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-12 19:17:13,252 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(965): Region open journal for d8b4be039bb66e05e5b7e87e85c454ed: 2023-07-12 19:17:13,252 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689189432311.f3e8991941bf8cc6182c695ccc396f36., pid=24, masterSystemTime=1689189433214 2023-07-12 19:17:13,253 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,aaaaa,1689189432311.d8b4be039bb66e05e5b7e87e85c454ed., pid=22, masterSystemTime=1689189433216 2023-07-12 19:17:13,254 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689189432311.f3e8991941bf8cc6182c695ccc396f36. 2023-07-12 19:17:13,255 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689189432311.f3e8991941bf8cc6182c695ccc396f36. 2023-07-12 19:17:13,255 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,zzzzz,1689189432311.b0a19d397667f15760caca207e8c44a2. 
2023-07-12 19:17:13,255 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => b0a19d397667f15760caca207e8c44a2, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1689189432311.b0a19d397667f15760caca207e8c44a2.', STARTKEY => 'zzzzz', ENDKEY => ''} 2023-07-12 19:17:13,255 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop b0a19d397667f15760caca207e8c44a2 2023-07-12 19:17:13,256 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,zzzzz,1689189432311.b0a19d397667f15760caca207e8c44a2.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 19:17:13,256 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7894): checking encryption for b0a19d397667f15760caca207e8c44a2 2023-07-12 19:17:13,256 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7897): checking classloading for b0a19d397667f15760caca207e8c44a2 2023-07-12 19:17:13,258 INFO [StoreOpener-b0a19d397667f15760caca207e8c44a2-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region b0a19d397667f15760caca207e8c44a2 2023-07-12 19:17:13,259 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=18 updating hbase:meta row=f3e8991941bf8cc6182c695ccc396f36, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase20.apache.org,43021,1689189426641 2023-07-12 19:17:13,259 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,aaaaa,1689189432311.d8b4be039bb66e05e5b7e87e85c454ed. 2023-07-12 19:17:13,259 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,aaaaa,1689189432311.d8b4be039bb66e05e5b7e87e85c454ed. 2023-07-12 19:17:13,259 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689189432311.f3e8991941bf8cc6182c695ccc396f36.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689189433259"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689189433259"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689189433259"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689189433259"}]},"ts":"1689189433259"} 2023-07-12 19:17:13,259 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689189432311.08074a1beba6aeec461717c2440138cb. 
2023-07-12 19:17:13,260 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 08074a1beba6aeec461717c2440138cb, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689189432311.08074a1beba6aeec461717c2440138cb.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'} 2023-07-12 19:17:13,260 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 08074a1beba6aeec461717c2440138cb 2023-07-12 19:17:13,260 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689189432311.08074a1beba6aeec461717c2440138cb.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 19:17:13,260 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7894): checking encryption for 08074a1beba6aeec461717c2440138cb 2023-07-12 19:17:13,260 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7897): checking classloading for 08074a1beba6aeec461717c2440138cb 2023-07-12 19:17:13,261 DEBUG [StoreOpener-b0a19d397667f15760caca207e8c44a2-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/data/default/Group_testTableMoveTruncateAndDrop/b0a19d397667f15760caca207e8c44a2/f 2023-07-12 19:17:13,262 DEBUG [StoreOpener-b0a19d397667f15760caca207e8c44a2-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/data/default/Group_testTableMoveTruncateAndDrop/b0a19d397667f15760caca207e8c44a2/f 2023-07-12 19:17:13,263 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=17 updating hbase:meta row=d8b4be039bb66e05e5b7e87e85c454ed, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase20.apache.org,39963,1689189426501 2023-07-12 19:17:13,263 INFO [StoreOpener-b0a19d397667f15760caca207e8c44a2-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region b0a19d397667f15760caca207e8c44a2 columnFamilyName f 2023-07-12 19:17:13,263 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689189432311.d8b4be039bb66e05e5b7e87e85c454ed.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689189433263"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689189433263"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689189433263"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689189433263"}]},"ts":"1689189433263"} 2023-07-12 19:17:13,264 INFO [StoreOpener-b0a19d397667f15760caca207e8c44a2-1] regionserver.HStore(310): 
Store=b0a19d397667f15760caca207e8c44a2/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 19:17:13,266 INFO [StoreOpener-08074a1beba6aeec461717c2440138cb-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 08074a1beba6aeec461717c2440138cb 2023-07-12 19:17:13,268 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/data/default/Group_testTableMoveTruncateAndDrop/b0a19d397667f15760caca207e8c44a2 2023-07-12 19:17:13,271 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/data/default/Group_testTableMoveTruncateAndDrop/b0a19d397667f15760caca207e8c44a2 2023-07-12 19:17:13,272 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=24, resume processing ppid=18 2023-07-12 19:17:13,272 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=24, ppid=18, state=SUCCESS; OpenRegionProcedure f3e8991941bf8cc6182c695ccc396f36, server=jenkins-hbase20.apache.org,43021,1689189426641 in 201 msec 2023-07-12 19:17:13,273 DEBUG [StoreOpener-08074a1beba6aeec461717c2440138cb-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/data/default/Group_testTableMoveTruncateAndDrop/08074a1beba6aeec461717c2440138cb/f 2023-07-12 19:17:13,273 DEBUG [StoreOpener-08074a1beba6aeec461717c2440138cb-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/data/default/Group_testTableMoveTruncateAndDrop/08074a1beba6aeec461717c2440138cb/f 2023-07-12 19:17:13,273 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=22, resume processing ppid=17 2023-07-12 19:17:13,274 INFO [StoreOpener-08074a1beba6aeec461717c2440138cb-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 08074a1beba6aeec461717c2440138cb columnFamilyName f 2023-07-12 19:17:13,274 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=22, ppid=17, state=SUCCESS; OpenRegionProcedure d8b4be039bb66e05e5b7e87e85c454ed, server=jenkins-hbase20.apache.org,39963,1689189426501 in 208 msec 2023-07-12 19:17:13,275 INFO [StoreOpener-08074a1beba6aeec461717c2440138cb-1] regionserver.HStore(310): Store=08074a1beba6aeec461717c2440138cb/f, memstore type=DefaultMemStore, storagePolicy=HOT, 
verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 19:17:13,279 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1055): writing seq id for b0a19d397667f15760caca207e8c44a2 2023-07-12 19:17:13,276 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=17, ppid=15, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=d8b4be039bb66e05e5b7e87e85c454ed, ASSIGN in 385 msec 2023-07-12 19:17:13,275 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=18, ppid=15, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=f3e8991941bf8cc6182c695ccc396f36, ASSIGN in 384 msec 2023-07-12 19:17:13,294 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/data/default/Group_testTableMoveTruncateAndDrop/08074a1beba6aeec461717c2440138cb 2023-07-12 19:17:13,296 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/data/default/Group_testTableMoveTruncateAndDrop/08074a1beba6aeec461717c2440138cb 2023-07-12 19:17:13,296 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/data/default/Group_testTableMoveTruncateAndDrop/b0a19d397667f15760caca207e8c44a2/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-12 19:17:13,298 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1072): Opened b0a19d397667f15760caca207e8c44a2; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10127490400, jitterRate=-0.056803956627845764}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-12 19:17:13,298 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(965): Region open journal for b0a19d397667f15760caca207e8c44a2: 2023-07-12 19:17:13,299 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,zzzzz,1689189432311.b0a19d397667f15760caca207e8c44a2., pid=21, masterSystemTime=1689189433214 2023-07-12 19:17:13,303 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,zzzzz,1689189432311.b0a19d397667f15760caca207e8c44a2. 2023-07-12 19:17:13,303 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,zzzzz,1689189432311.b0a19d397667f15760caca207e8c44a2. 
2023-07-12 19:17:13,303 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1055): writing seq id for 08074a1beba6aeec461717c2440138cb 2023-07-12 19:17:13,304 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=20 updating hbase:meta row=b0a19d397667f15760caca207e8c44a2, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase20.apache.org,43021,1689189426641 2023-07-12 19:17:13,304 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689189432311.b0a19d397667f15760caca207e8c44a2.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689189433304"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689189433304"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689189433304"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689189433304"}]},"ts":"1689189433304"} 2023-07-12 19:17:13,307 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/data/default/Group_testTableMoveTruncateAndDrop/08074a1beba6aeec461717c2440138cb/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-12 19:17:13,308 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1072): Opened 08074a1beba6aeec461717c2440138cb; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9419421920, jitterRate=-0.12274797260761261}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-12 19:17:13,308 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(965): Region open journal for 08074a1beba6aeec461717c2440138cb: 2023-07-12 19:17:13,310 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689189432311.08074a1beba6aeec461717c2440138cb., pid=25, masterSystemTime=1689189433216 2023-07-12 19:17:13,310 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=21, resume processing ppid=20 2023-07-12 19:17:13,311 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=21, ppid=20, state=SUCCESS; OpenRegionProcedure b0a19d397667f15760caca207e8c44a2, server=jenkins-hbase20.apache.org,43021,1689189426641 in 248 msec 2023-07-12 19:17:13,313 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689189432311.08074a1beba6aeec461717c2440138cb. 2023-07-12 19:17:13,313 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689189432311.08074a1beba6aeec461717c2440138cb. 2023-07-12 19:17:13,313 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,,1689189432311.338e8802045d7b2a5da83a95c9f1aff3. 
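The repeated "Checking to see if procedure is done pid=15" entries are the client polling the master while the create-table procedure and its region assignments finish. A rough equivalent of that wait from application code, assuming the standard Admin API and an arbitrary 100 ms poll interval, might look like this:

    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;

    public class WaitForTableSketch {
      // Polls until every region of the new table is online.
      public static void await(Admin admin) throws Exception {
        TableName table = TableName.valueOf("Group_testTableMoveTruncateAndDrop");
        while (!admin.isTableAvailable(table)) {
          Thread.sleep(100);
        }
      }
    }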
2023-07-12 19:17:13,313 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 338e8802045d7b2a5da83a95c9f1aff3, NAME => 'Group_testTableMoveTruncateAndDrop,,1689189432311.338e8802045d7b2a5da83a95c9f1aff3.', STARTKEY => '', ENDKEY => 'aaaaa'} 2023-07-12 19:17:13,314 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 338e8802045d7b2a5da83a95c9f1aff3 2023-07-12 19:17:13,314 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,,1689189432311.338e8802045d7b2a5da83a95c9f1aff3.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 19:17:13,314 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7894): checking encryption for 338e8802045d7b2a5da83a95c9f1aff3 2023-07-12 19:17:13,314 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7897): checking classloading for 338e8802045d7b2a5da83a95c9f1aff3 2023-07-12 19:17:13,316 INFO [StoreOpener-338e8802045d7b2a5da83a95c9f1aff3-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 338e8802045d7b2a5da83a95c9f1aff3 2023-07-12 19:17:13,318 DEBUG [StoreOpener-338e8802045d7b2a5da83a95c9f1aff3-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/data/default/Group_testTableMoveTruncateAndDrop/338e8802045d7b2a5da83a95c9f1aff3/f 2023-07-12 19:17:13,318 DEBUG [StoreOpener-338e8802045d7b2a5da83a95c9f1aff3-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/data/default/Group_testTableMoveTruncateAndDrop/338e8802045d7b2a5da83a95c9f1aff3/f 2023-07-12 19:17:13,318 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=20, ppid=15, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=b0a19d397667f15760caca207e8c44a2, ASSIGN in 423 msec 2023-07-12 19:17:13,319 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=19 updating hbase:meta row=08074a1beba6aeec461717c2440138cb, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase20.apache.org,39963,1689189426501 2023-07-12 19:17:13,319 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689189432311.08074a1beba6aeec461717c2440138cb.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689189433318"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689189433318"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689189433318"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689189433318"}]},"ts":"1689189433318"} 2023-07-12 19:17:13,319 INFO [StoreOpener-338e8802045d7b2a5da83a95c9f1aff3-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major 
period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 338e8802045d7b2a5da83a95c9f1aff3 columnFamilyName f 2023-07-12 19:17:13,320 INFO [StoreOpener-338e8802045d7b2a5da83a95c9f1aff3-1] regionserver.HStore(310): Store=338e8802045d7b2a5da83a95c9f1aff3/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 19:17:13,321 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/data/default/Group_testTableMoveTruncateAndDrop/338e8802045d7b2a5da83a95c9f1aff3 2023-07-12 19:17:13,323 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/data/default/Group_testTableMoveTruncateAndDrop/338e8802045d7b2a5da83a95c9f1aff3 2023-07-12 19:17:13,326 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=25, resume processing ppid=19 2023-07-12 19:17:13,327 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=25, ppid=19, state=SUCCESS; OpenRegionProcedure 08074a1beba6aeec461717c2440138cb, server=jenkins-hbase20.apache.org,39963,1689189426501 in 255 msec 2023-07-12 19:17:13,328 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1055): writing seq id for 338e8802045d7b2a5da83a95c9f1aff3 2023-07-12 19:17:13,329 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=19, ppid=15, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=08074a1beba6aeec461717c2440138cb, ASSIGN in 439 msec 2023-07-12 19:17:13,332 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/data/default/Group_testTableMoveTruncateAndDrop/338e8802045d7b2a5da83a95c9f1aff3/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-12 19:17:13,333 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1072): Opened 338e8802045d7b2a5da83a95c9f1aff3; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11867969280, jitterRate=0.10529077053070068}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-12 19:17:13,333 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(965): Region open journal for 338e8802045d7b2a5da83a95c9f1aff3: 2023-07-12 19:17:13,334 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,,1689189432311.338e8802045d7b2a5da83a95c9f1aff3., pid=23, masterSystemTime=1689189433216 2023-07-12 19:17:13,336 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for 
Group_testTableMoveTruncateAndDrop,,1689189432311.338e8802045d7b2a5da83a95c9f1aff3. 2023-07-12 19:17:13,336 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,,1689189432311.338e8802045d7b2a5da83a95c9f1aff3. 2023-07-12 19:17:13,339 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=16 updating hbase:meta row=338e8802045d7b2a5da83a95c9f1aff3, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase20.apache.org,39963,1689189426501 2023-07-12 19:17:13,339 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,,1689189432311.338e8802045d7b2a5da83a95c9f1aff3.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689189433339"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689189433339"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689189433339"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689189433339"}]},"ts":"1689189433339"} 2023-07-12 19:17:13,345 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=23, resume processing ppid=16 2023-07-12 19:17:13,345 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=23, ppid=16, state=SUCCESS; OpenRegionProcedure 338e8802045d7b2a5da83a95c9f1aff3, server=jenkins-hbase20.apache.org,39963,1689189426501 in 280 msec 2023-07-12 19:17:13,349 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=16, resume processing ppid=15 2023-07-12 19:17:13,350 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=16, ppid=15, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=338e8802045d7b2a5da83a95c9f1aff3, ASSIGN in 457 msec 2023-07-12 19:17:13,352 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=15, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-12 19:17:13,352 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689189433352"}]},"ts":"1689189433352"} 2023-07-12 19:17:13,354 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=ENABLED in hbase:meta 2023-07-12 19:17:13,358 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=15, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop execute state=CREATE_TABLE_POST_OPERATION 2023-07-12 19:17:13,362 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=15, state=SUCCESS; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop in 1.0440 sec 2023-07-12 19:17:13,470 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.MasterRpcServices(1230): Checking to see if procedure is done pid=15 2023-07-12 19:17:13,471 INFO [Listener at localhost.localdomain/34239] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:Group_testTableMoveTruncateAndDrop, procId: 15 completed 2023-07-12 19:17:13,471 DEBUG [Listener at localhost.localdomain/34239] hbase.HBaseTestingUtility(3430): Waiting until all regions of table Group_testTableMoveTruncateAndDrop get assigned. 
Timeout = 60000ms 2023-07-12 19:17:13,473 INFO [Listener at localhost.localdomain/34239] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 19:17:13,480 INFO [Listener at localhost.localdomain/34239] hbase.HBaseTestingUtility(3484): All regions for table Group_testTableMoveTruncateAndDrop assigned to meta. Checking AM states. 2023-07-12 19:17:13,481 INFO [Listener at localhost.localdomain/34239] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 19:17:13,482 INFO [Listener at localhost.localdomain/34239] hbase.HBaseTestingUtility(3504): All regions for table Group_testTableMoveTruncateAndDrop assigned. 2023-07-12 19:17:13,482 INFO [Listener at localhost.localdomain/34239] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 19:17:13,489 DEBUG [Listener at localhost.localdomain/34239] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-12 19:17:13,500 INFO [RS-EventLoopGroup-7-3] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:55182, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-12 19:17:13,504 DEBUG [Listener at localhost.localdomain/34239] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-12 19:17:13,522 INFO [RS-EventLoopGroup-5-2] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:53496, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-12 19:17:13,522 DEBUG [Listener at localhost.localdomain/34239] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-12 19:17:13,531 INFO [RS-EventLoopGroup-3-3] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:32938, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-12 19:17:13,533 DEBUG [Listener at localhost.localdomain/34239] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-12 19:17:13,541 INFO [RS-EventLoopGroup-4-3] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:38640, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-12 19:17:13,559 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//148.251.75.209 initiates rsgroup info retrieval, table=Group_testTableMoveTruncateAndDrop 2023-07-12 19:17:13,560 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-12 19:17:13,561 INFO [Listener at localhost.localdomain/34239] rsgroup.TestRSGroupsAdmin1(307): Moving table Group_testTableMoveTruncateAndDrop to Group_testTableMoveTruncateAndDrop_806716229 2023-07-12 19:17:13,576 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//148.251.75.209 move tables [Group_testTableMoveTruncateAndDrop] to rsgroup Group_testTableMoveTruncateAndDrop_806716229 2023-07-12 19:17:13,581 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 19:17:13,582 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] 
rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testTableMoveTruncateAndDrop_806716229 2023-07-12 19:17:13,583 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 19:17:13,583 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-12 19:17:13,588 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminServer(339): Moving region(s) for table Group_testTableMoveTruncateAndDrop to RSGroup Group_testTableMoveTruncateAndDrop_806716229 2023-07-12 19:17:13,589 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminServer(345): Moving region 338e8802045d7b2a5da83a95c9f1aff3 to RSGroup Group_testTableMoveTruncateAndDrop_806716229 2023-07-12 19:17:13,589 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase20.apache.org=0} racks are {/default-rack=0} 2023-07-12 19:17:13,589 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-12 19:17:13,589 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-12 19:17:13,589 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-12 19:17:13,589 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-12 19:17:13,591 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] procedure2.ProcedureExecutor(1029): Stored pid=26, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=338e8802045d7b2a5da83a95c9f1aff3, REOPEN/MOVE 2023-07-12 19:17:13,592 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminServer(345): Moving region d8b4be039bb66e05e5b7e87e85c454ed to RSGroup Group_testTableMoveTruncateAndDrop_806716229 2023-07-12 19:17:13,593 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=26, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=338e8802045d7b2a5da83a95c9f1aff3, REOPEN/MOVE 2023-07-12 19:17:13,593 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase20.apache.org=0} racks are {/default-rack=0} 2023-07-12 19:17:13,593 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-12 19:17:13,594 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-12 19:17:13,594 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-12 19:17:13,594 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-12 19:17:13,595 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=26 updating hbase:meta row=338e8802045d7b2a5da83a95c9f1aff3, regionState=CLOSING, 
regionLocation=jenkins-hbase20.apache.org,39963,1689189426501 2023-07-12 19:17:13,596 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] procedure2.ProcedureExecutor(1029): Stored pid=27, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=d8b4be039bb66e05e5b7e87e85c454ed, REOPEN/MOVE 2023-07-12 19:17:13,596 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,,1689189432311.338e8802045d7b2a5da83a95c9f1aff3.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689189433595"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689189433595"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689189433595"}]},"ts":"1689189433595"} 2023-07-12 19:17:13,596 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminServer(345): Moving region f3e8991941bf8cc6182c695ccc396f36 to RSGroup Group_testTableMoveTruncateAndDrop_806716229 2023-07-12 19:17:13,597 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=27, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=d8b4be039bb66e05e5b7e87e85c454ed, REOPEN/MOVE 2023-07-12 19:17:13,598 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase20.apache.org=0} racks are {/default-rack=0} 2023-07-12 19:17:13,598 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-12 19:17:13,599 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-12 19:17:13,599 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-12 19:17:13,599 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-12 19:17:13,600 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] procedure2.ProcedureExecutor(1029): Stored pid=28, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=f3e8991941bf8cc6182c695ccc396f36, REOPEN/MOVE 2023-07-12 19:17:13,600 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=27 updating hbase:meta row=d8b4be039bb66e05e5b7e87e85c454ed, regionState=CLOSING, regionLocation=jenkins-hbase20.apache.org,39963,1689189426501 2023-07-12 19:17:13,601 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminServer(345): Moving region 08074a1beba6aeec461717c2440138cb to RSGroup Group_testTableMoveTruncateAndDrop_806716229 2023-07-12 19:17:13,601 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689189432311.d8b4be039bb66e05e5b7e87e85c454ed.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689189433600"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689189433600"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689189433600"}]},"ts":"1689189433600"} 2023-07-12 19:17:13,601 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=29, ppid=26, state=RUNNABLE; CloseRegionProcedure 338e8802045d7b2a5da83a95c9f1aff3, 
server=jenkins-hbase20.apache.org,39963,1689189426501}] 2023-07-12 19:17:13,601 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase20.apache.org=0} racks are {/default-rack=0} 2023-07-12 19:17:13,603 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-12 19:17:13,603 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-12 19:17:13,603 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-12 19:17:13,603 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-12 19:17:13,605 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=28, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=f3e8991941bf8cc6182c695ccc396f36, REOPEN/MOVE 2023-07-12 19:17:13,607 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=31, ppid=27, state=RUNNABLE; CloseRegionProcedure d8b4be039bb66e05e5b7e87e85c454ed, server=jenkins-hbase20.apache.org,39963,1689189426501}] 2023-07-12 19:17:13,609 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] procedure2.ProcedureExecutor(1029): Stored pid=30, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=08074a1beba6aeec461717c2440138cb, REOPEN/MOVE 2023-07-12 19:17:13,609 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminServer(345): Moving region b0a19d397667f15760caca207e8c44a2 to RSGroup Group_testTableMoveTruncateAndDrop_806716229 2023-07-12 19:17:13,609 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase20.apache.org=0} racks are {/default-rack=0} 2023-07-12 19:17:13,610 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-12 19:17:13,610 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-12 19:17:13,610 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-12 19:17:13,610 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-12 19:17:13,613 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=30, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=08074a1beba6aeec461717c2440138cb, REOPEN/MOVE 2023-07-12 19:17:13,614 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] procedure2.ProcedureExecutor(1029): Stored pid=32, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=b0a19d397667f15760caca207e8c44a2, REOPEN/MOVE 2023-07-12 19:17:13,614 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=28 updating hbase:meta row=f3e8991941bf8cc6182c695ccc396f36, regionState=CLOSING, 
regionLocation=jenkins-hbase20.apache.org,43021,1689189426641 2023-07-12 19:17:13,614 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689189432311.f3e8991941bf8cc6182c695ccc396f36.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689189433614"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689189433614"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689189433614"}]},"ts":"1689189433614"} 2023-07-12 19:17:13,614 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminServer(286): Moving 5 region(s) to group Group_testTableMoveTruncateAndDrop_806716229, current retry=0 2023-07-12 19:17:13,616 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=32, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=b0a19d397667f15760caca207e8c44a2, REOPEN/MOVE 2023-07-12 19:17:13,617 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=30 updating hbase:meta row=08074a1beba6aeec461717c2440138cb, regionState=CLOSING, regionLocation=jenkins-hbase20.apache.org,39963,1689189426501 2023-07-12 19:17:13,617 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689189432311.08074a1beba6aeec461717c2440138cb.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689189433617"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689189433617"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689189433617"}]},"ts":"1689189433617"} 2023-07-12 19:17:13,621 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=32 updating hbase:meta row=b0a19d397667f15760caca207e8c44a2, regionState=CLOSING, regionLocation=jenkins-hbase20.apache.org,43021,1689189426641 2023-07-12 19:17:13,621 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689189432311.b0a19d397667f15760caca207e8c44a2.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689189433621"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689189433621"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689189433621"}]},"ts":"1689189433621"} 2023-07-12 19:17:13,623 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=33, ppid=28, state=RUNNABLE; CloseRegionProcedure f3e8991941bf8cc6182c695ccc396f36, server=jenkins-hbase20.apache.org,43021,1689189426641}] 2023-07-12 19:17:13,625 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=34, ppid=30, state=RUNNABLE; CloseRegionProcedure 08074a1beba6aeec461717c2440138cb, server=jenkins-hbase20.apache.org,39963,1689189426501}] 2023-07-12 19:17:13,627 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=35, ppid=32, state=RUNNABLE; CloseRegionProcedure b0a19d397667f15760caca207e8c44a2, server=jenkins-hbase20.apache.org,43021,1689189426641}] 2023-07-12 19:17:13,766 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.UnassignRegionHandler(111): Close d8b4be039bb66e05e5b7e87e85c454ed 2023-07-12 19:17:13,767 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing d8b4be039bb66e05e5b7e87e85c454ed, disabling compactions & flushes 2023-07-12 19:17:13,767 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] 
regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,aaaaa,1689189432311.d8b4be039bb66e05e5b7e87e85c454ed. 2023-07-12 19:17:13,767 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689189432311.d8b4be039bb66e05e5b7e87e85c454ed. 2023-07-12 19:17:13,767 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689189432311.d8b4be039bb66e05e5b7e87e85c454ed. after waiting 0 ms 2023-07-12 19:17:13,767 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,aaaaa,1689189432311.d8b4be039bb66e05e5b7e87e85c454ed. 2023-07-12 19:17:13,774 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/data/default/Group_testTableMoveTruncateAndDrop/d8b4be039bb66e05e5b7e87e85c454ed/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-12 19:17:13,775 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,aaaaa,1689189432311.d8b4be039bb66e05e5b7e87e85c454ed. 2023-07-12 19:17:13,775 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for d8b4be039bb66e05e5b7e87e85c454ed: 2023-07-12 19:17:13,775 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(3513): Adding d8b4be039bb66e05e5b7e87e85c454ed move to jenkins-hbase20.apache.org,36311,1689189430768 record at close sequenceid=2 2023-07-12 19:17:13,778 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.UnassignRegionHandler(149): Closed d8b4be039bb66e05e5b7e87e85c454ed 2023-07-12 19:17:13,778 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.UnassignRegionHandler(111): Close 338e8802045d7b2a5da83a95c9f1aff3 2023-07-12 19:17:13,779 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing 338e8802045d7b2a5da83a95c9f1aff3, disabling compactions & flushes 2023-07-12 19:17:13,780 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,,1689189432311.338e8802045d7b2a5da83a95c9f1aff3. 2023-07-12 19:17:13,780 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,,1689189432311.338e8802045d7b2a5da83a95c9f1aff3. 2023-07-12 19:17:13,780 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,,1689189432311.338e8802045d7b2a5da83a95c9f1aff3. after waiting 0 ms 2023-07-12 19:17:13,780 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,,1689189432311.338e8802045d7b2a5da83a95c9f1aff3. 
2023-07-12 19:17:13,780 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.UnassignRegionHandler(111): Close f3e8991941bf8cc6182c695ccc396f36 2023-07-12 19:17:13,781 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=27 updating hbase:meta row=d8b4be039bb66e05e5b7e87e85c454ed, regionState=CLOSED 2023-07-12 19:17:13,781 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689189432311.d8b4be039bb66e05e5b7e87e85c454ed.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689189433781"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689189433781"}]},"ts":"1689189433781"} 2023-07-12 19:17:13,783 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing f3e8991941bf8cc6182c695ccc396f36, disabling compactions & flushes 2023-07-12 19:17:13,783 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689189432311.f3e8991941bf8cc6182c695ccc396f36. 2023-07-12 19:17:13,783 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689189432311.f3e8991941bf8cc6182c695ccc396f36. 2023-07-12 19:17:13,784 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689189432311.f3e8991941bf8cc6182c695ccc396f36. after waiting 0 ms 2023-07-12 19:17:13,784 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689189432311.f3e8991941bf8cc6182c695ccc396f36. 2023-07-12 19:17:13,792 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/data/default/Group_testTableMoveTruncateAndDrop/338e8802045d7b2a5da83a95c9f1aff3/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-12 19:17:13,794 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,,1689189432311.338e8802045d7b2a5da83a95c9f1aff3. 
2023-07-12 19:17:13,794 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for 338e8802045d7b2a5da83a95c9f1aff3: 2023-07-12 19:17:13,794 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(3513): Adding 338e8802045d7b2a5da83a95c9f1aff3 move to jenkins-hbase20.apache.org,36571,1689189426727 record at close sequenceid=2 2023-07-12 19:17:13,796 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/data/default/Group_testTableMoveTruncateAndDrop/f3e8991941bf8cc6182c695ccc396f36/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-12 19:17:13,796 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=31, resume processing ppid=27 2023-07-12 19:17:13,796 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=31, ppid=27, state=SUCCESS; CloseRegionProcedure d8b4be039bb66e05e5b7e87e85c454ed, server=jenkins-hbase20.apache.org,39963,1689189426501 in 181 msec 2023-07-12 19:17:13,799 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689189432311.f3e8991941bf8cc6182c695ccc396f36. 2023-07-12 19:17:13,799 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for f3e8991941bf8cc6182c695ccc396f36: 2023-07-12 19:17:13,799 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(3513): Adding f3e8991941bf8cc6182c695ccc396f36 move to jenkins-hbase20.apache.org,36571,1689189426727 record at close sequenceid=2 2023-07-12 19:17:13,799 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=27, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=d8b4be039bb66e05e5b7e87e85c454ed, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase20.apache.org,36311,1689189430768; forceNewPlan=false, retain=false 2023-07-12 19:17:13,800 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.UnassignRegionHandler(149): Closed 338e8802045d7b2a5da83a95c9f1aff3 2023-07-12 19:17:13,800 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.UnassignRegionHandler(111): Close 08074a1beba6aeec461717c2440138cb 2023-07-12 19:17:13,801 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing 08074a1beba6aeec461717c2440138cb, disabling compactions & flushes 2023-07-12 19:17:13,801 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689189432311.08074a1beba6aeec461717c2440138cb. 2023-07-12 19:17:13,801 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689189432311.08074a1beba6aeec461717c2440138cb. 2023-07-12 19:17:13,801 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689189432311.08074a1beba6aeec461717c2440138cb. 
after waiting 0 ms 2023-07-12 19:17:13,801 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689189432311.08074a1beba6aeec461717c2440138cb. 2023-07-12 19:17:13,802 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=26 updating hbase:meta row=338e8802045d7b2a5da83a95c9f1aff3, regionState=CLOSED 2023-07-12 19:17:13,807 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,,1689189432311.338e8802045d7b2a5da83a95c9f1aff3.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689189433802"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689189433802"}]},"ts":"1689189433802"} 2023-07-12 19:17:13,807 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.UnassignRegionHandler(149): Closed f3e8991941bf8cc6182c695ccc396f36 2023-07-12 19:17:13,808 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.UnassignRegionHandler(111): Close b0a19d397667f15760caca207e8c44a2 2023-07-12 19:17:13,809 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing b0a19d397667f15760caca207e8c44a2, disabling compactions & flushes 2023-07-12 19:17:13,809 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,zzzzz,1689189432311.b0a19d397667f15760caca207e8c44a2. 2023-07-12 19:17:13,809 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689189432311.b0a19d397667f15760caca207e8c44a2. 2023-07-12 19:17:13,809 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689189432311.b0a19d397667f15760caca207e8c44a2. after waiting 0 ms 2023-07-12 19:17:13,809 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,zzzzz,1689189432311.b0a19d397667f15760caca207e8c44a2. 
2023-07-12 19:17:13,811 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=28 updating hbase:meta row=f3e8991941bf8cc6182c695ccc396f36, regionState=CLOSED 2023-07-12 19:17:13,812 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689189432311.f3e8991941bf8cc6182c695ccc396f36.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689189433811"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689189433811"}]},"ts":"1689189433811"} 2023-07-12 19:17:13,821 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=29, resume processing ppid=26 2023-07-12 19:17:13,821 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=29, ppid=26, state=SUCCESS; CloseRegionProcedure 338e8802045d7b2a5da83a95c9f1aff3, server=jenkins-hbase20.apache.org,39963,1689189426501 in 211 msec 2023-07-12 19:17:13,821 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=33, resume processing ppid=28 2023-07-12 19:17:13,821 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=33, ppid=28, state=SUCCESS; CloseRegionProcedure f3e8991941bf8cc6182c695ccc396f36, server=jenkins-hbase20.apache.org,43021,1689189426641 in 191 msec 2023-07-12 19:17:13,826 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/data/default/Group_testTableMoveTruncateAndDrop/08074a1beba6aeec461717c2440138cb/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-12 19:17:13,827 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689189432311.08074a1beba6aeec461717c2440138cb. 2023-07-12 19:17:13,827 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/data/default/Group_testTableMoveTruncateAndDrop/b0a19d397667f15760caca207e8c44a2/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-12 19:17:13,827 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for 08074a1beba6aeec461717c2440138cb: 2023-07-12 19:17:13,827 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(3513): Adding 08074a1beba6aeec461717c2440138cb move to jenkins-hbase20.apache.org,36571,1689189426727 record at close sequenceid=2 2023-07-12 19:17:13,828 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,zzzzz,1689189432311.b0a19d397667f15760caca207e8c44a2. 
2023-07-12 19:17:13,828 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for b0a19d397667f15760caca207e8c44a2: 2023-07-12 19:17:13,828 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(3513): Adding b0a19d397667f15760caca207e8c44a2 move to jenkins-hbase20.apache.org,36311,1689189430768 record at close sequenceid=2 2023-07-12 19:17:13,829 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=26, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=338e8802045d7b2a5da83a95c9f1aff3, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase20.apache.org,36571,1689189426727; forceNewPlan=false, retain=false 2023-07-12 19:17:13,830 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=28, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=f3e8991941bf8cc6182c695ccc396f36, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase20.apache.org,36571,1689189426727; forceNewPlan=false, retain=false 2023-07-12 19:17:13,832 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.UnassignRegionHandler(149): Closed 08074a1beba6aeec461717c2440138cb 2023-07-12 19:17:13,833 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=30 updating hbase:meta row=08074a1beba6aeec461717c2440138cb, regionState=CLOSED 2023-07-12 19:17:13,833 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689189432311.08074a1beba6aeec461717c2440138cb.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689189433833"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689189433833"}]},"ts":"1689189433833"} 2023-07-12 19:17:13,833 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.UnassignRegionHandler(149): Closed b0a19d397667f15760caca207e8c44a2 2023-07-12 19:17:13,835 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=32 updating hbase:meta row=b0a19d397667f15760caca207e8c44a2, regionState=CLOSED 2023-07-12 19:17:13,835 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689189432311.b0a19d397667f15760caca207e8c44a2.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689189433835"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689189433835"}]},"ts":"1689189433835"} 2023-07-12 19:17:13,846 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=34, resume processing ppid=30 2023-07-12 19:17:13,846 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=34, ppid=30, state=SUCCESS; CloseRegionProcedure 08074a1beba6aeec461717c2440138cb, server=jenkins-hbase20.apache.org,39963,1689189426501 in 210 msec 2023-07-12 19:17:13,847 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=35, resume processing ppid=32 2023-07-12 19:17:13,848 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=30, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=08074a1beba6aeec461717c2440138cb, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase20.apache.org,36571,1689189426727; forceNewPlan=false, retain=false 2023-07-12 
19:17:13,848 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=35, ppid=32, state=SUCCESS; CloseRegionProcedure b0a19d397667f15760caca207e8c44a2, server=jenkins-hbase20.apache.org,43021,1689189426641 in 212 msec 2023-07-12 19:17:13,853 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=32, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=b0a19d397667f15760caca207e8c44a2, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase20.apache.org,36311,1689189430768; forceNewPlan=false, retain=false 2023-07-12 19:17:13,951 INFO [jenkins-hbase20:33033] balancer.BaseLoadBalancer(1545): Reassigned 5 regions. 5 retained the pre-restart assignment. 2023-07-12 19:17:13,952 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=28 updating hbase:meta row=f3e8991941bf8cc6182c695ccc396f36, regionState=OPENING, regionLocation=jenkins-hbase20.apache.org,36571,1689189426727 2023-07-12 19:17:13,952 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=32 updating hbase:meta row=b0a19d397667f15760caca207e8c44a2, regionState=OPENING, regionLocation=jenkins-hbase20.apache.org,36311,1689189430768 2023-07-12 19:17:13,952 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=26 updating hbase:meta row=338e8802045d7b2a5da83a95c9f1aff3, regionState=OPENING, regionLocation=jenkins-hbase20.apache.org,36571,1689189426727 2023-07-12 19:17:13,952 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=30 updating hbase:meta row=08074a1beba6aeec461717c2440138cb, regionState=OPENING, regionLocation=jenkins-hbase20.apache.org,36571,1689189426727 2023-07-12 19:17:13,953 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,,1689189432311.338e8802045d7b2a5da83a95c9f1aff3.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689189433952"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689189433952"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689189433952"}]},"ts":"1689189433952"} 2023-07-12 19:17:13,953 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689189432311.08074a1beba6aeec461717c2440138cb.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689189433952"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689189433952"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689189433952"}]},"ts":"1689189433952"} 2023-07-12 19:17:13,953 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689189432311.b0a19d397667f15760caca207e8c44a2.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689189433952"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689189433952"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689189433952"}]},"ts":"1689189433952"} 2023-07-12 19:17:13,953 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689189432311.f3e8991941bf8cc6182c695ccc396f36.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689189433952"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689189433952"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689189433952"}]},"ts":"1689189433952"} 2023-07-12 19:17:13,955 INFO [PEWorker-5] 
assignment.RegionStateStore(219): pid=27 updating hbase:meta row=d8b4be039bb66e05e5b7e87e85c454ed, regionState=OPENING, regionLocation=jenkins-hbase20.apache.org,36311,1689189430768 2023-07-12 19:17:13,955 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689189432311.d8b4be039bb66e05e5b7e87e85c454ed.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689189433952"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689189433952"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689189433952"}]},"ts":"1689189433952"} 2023-07-12 19:17:13,957 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=36, ppid=26, state=RUNNABLE; OpenRegionProcedure 338e8802045d7b2a5da83a95c9f1aff3, server=jenkins-hbase20.apache.org,36571,1689189426727}] 2023-07-12 19:17:13,959 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=37, ppid=30, state=RUNNABLE; OpenRegionProcedure 08074a1beba6aeec461717c2440138cb, server=jenkins-hbase20.apache.org,36571,1689189426727}] 2023-07-12 19:17:13,961 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=38, ppid=32, state=RUNNABLE; OpenRegionProcedure b0a19d397667f15760caca207e8c44a2, server=jenkins-hbase20.apache.org,36311,1689189430768}] 2023-07-12 19:17:13,964 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=39, ppid=28, state=RUNNABLE; OpenRegionProcedure f3e8991941bf8cc6182c695ccc396f36, server=jenkins-hbase20.apache.org,36571,1689189426727}] 2023-07-12 19:17:13,968 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=40, ppid=27, state=RUNNABLE; OpenRegionProcedure d8b4be039bb66e05e5b7e87e85c454ed, server=jenkins-hbase20.apache.org,36311,1689189430768}] 2023-07-12 19:17:14,115 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689189432311.f3e8991941bf8cc6182c695ccc396f36. 
2023-07-12 19:17:14,115 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => f3e8991941bf8cc6182c695ccc396f36, NAME => 'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689189432311.f3e8991941bf8cc6182c695ccc396f36.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'} 2023-07-12 19:17:14,115 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop f3e8991941bf8cc6182c695ccc396f36 2023-07-12 19:17:14,115 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689189432311.f3e8991941bf8cc6182c695ccc396f36.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 19:17:14,115 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7894): checking encryption for f3e8991941bf8cc6182c695ccc396f36 2023-07-12 19:17:14,115 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7897): checking classloading for f3e8991941bf8cc6182c695ccc396f36 2023-07-12 19:17:14,117 DEBUG [RSProcedureDispatcher-pool-1] master.ServerManager(712): New admin connection to jenkins-hbase20.apache.org,36311,1689189430768 2023-07-12 19:17:14,117 DEBUG [RSProcedureDispatcher-pool-1] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-12 19:17:14,117 INFO [StoreOpener-f3e8991941bf8cc6182c695ccc396f36-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region f3e8991941bf8cc6182c695ccc396f36 2023-07-12 19:17:14,118 INFO [RS-EventLoopGroup-7-1] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:55184, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-12 19:17:14,119 DEBUG [StoreOpener-f3e8991941bf8cc6182c695ccc396f36-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/data/default/Group_testTableMoveTruncateAndDrop/f3e8991941bf8cc6182c695ccc396f36/f 2023-07-12 19:17:14,119 DEBUG [StoreOpener-f3e8991941bf8cc6182c695ccc396f36-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/data/default/Group_testTableMoveTruncateAndDrop/f3e8991941bf8cc6182c695ccc396f36/f 2023-07-12 19:17:14,121 INFO [StoreOpener-f3e8991941bf8cc6182c695ccc396f36-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region f3e8991941bf8cc6182c695ccc396f36 columnFamilyName f 2023-07-12 
19:17:14,121 INFO [StoreOpener-f3e8991941bf8cc6182c695ccc396f36-1] regionserver.HStore(310): Store=f3e8991941bf8cc6182c695ccc396f36/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 19:17:14,123 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,zzzzz,1689189432311.b0a19d397667f15760caca207e8c44a2. 2023-07-12 19:17:14,124 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/data/default/Group_testTableMoveTruncateAndDrop/f3e8991941bf8cc6182c695ccc396f36 2023-07-12 19:17:14,124 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => b0a19d397667f15760caca207e8c44a2, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1689189432311.b0a19d397667f15760caca207e8c44a2.', STARTKEY => 'zzzzz', ENDKEY => ''} 2023-07-12 19:17:14,124 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop b0a19d397667f15760caca207e8c44a2 2023-07-12 19:17:14,124 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,zzzzz,1689189432311.b0a19d397667f15760caca207e8c44a2.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 19:17:14,124 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7894): checking encryption for b0a19d397667f15760caca207e8c44a2 2023-07-12 19:17:14,124 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7897): checking classloading for b0a19d397667f15760caca207e8c44a2 2023-07-12 19:17:14,125 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/data/default/Group_testTableMoveTruncateAndDrop/f3e8991941bf8cc6182c695ccc396f36 2023-07-12 19:17:14,126 INFO [StoreOpener-b0a19d397667f15760caca207e8c44a2-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region b0a19d397667f15760caca207e8c44a2 2023-07-12 19:17:14,127 DEBUG [StoreOpener-b0a19d397667f15760caca207e8c44a2-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/data/default/Group_testTableMoveTruncateAndDrop/b0a19d397667f15760caca207e8c44a2/f 2023-07-12 19:17:14,127 DEBUG [StoreOpener-b0a19d397667f15760caca207e8c44a2-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/data/default/Group_testTableMoveTruncateAndDrop/b0a19d397667f15760caca207e8c44a2/f 2023-07-12 19:17:14,128 INFO [StoreOpener-b0a19d397667f15760caca207e8c44a2-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files 
[minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region b0a19d397667f15760caca207e8c44a2 columnFamilyName f 2023-07-12 19:17:14,129 INFO [StoreOpener-b0a19d397667f15760caca207e8c44a2-1] regionserver.HStore(310): Store=b0a19d397667f15760caca207e8c44a2/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 19:17:14,130 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1055): writing seq id for f3e8991941bf8cc6182c695ccc396f36 2023-07-12 19:17:14,131 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/data/default/Group_testTableMoveTruncateAndDrop/b0a19d397667f15760caca207e8c44a2 2023-07-12 19:17:14,132 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1072): Opened f3e8991941bf8cc6182c695ccc396f36; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11683022080, jitterRate=0.0880662202835083}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-12 19:17:14,132 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(965): Region open journal for f3e8991941bf8cc6182c695ccc396f36: 2023-07-12 19:17:14,133 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/data/default/Group_testTableMoveTruncateAndDrop/b0a19d397667f15760caca207e8c44a2 2023-07-12 19:17:14,134 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689189432311.f3e8991941bf8cc6182c695ccc396f36., pid=39, masterSystemTime=1689189434110 2023-07-12 19:17:14,137 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689189432311.f3e8991941bf8cc6182c695ccc396f36. 2023-07-12 19:17:14,137 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689189432311.f3e8991941bf8cc6182c695ccc396f36. 2023-07-12 19:17:14,137 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,,1689189432311.338e8802045d7b2a5da83a95c9f1aff3. 
2023-07-12 19:17:14,137 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 338e8802045d7b2a5da83a95c9f1aff3, NAME => 'Group_testTableMoveTruncateAndDrop,,1689189432311.338e8802045d7b2a5da83a95c9f1aff3.', STARTKEY => '', ENDKEY => 'aaaaa'} 2023-07-12 19:17:14,138 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1055): writing seq id for b0a19d397667f15760caca207e8c44a2 2023-07-12 19:17:14,138 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 338e8802045d7b2a5da83a95c9f1aff3 2023-07-12 19:17:14,138 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,,1689189432311.338e8802045d7b2a5da83a95c9f1aff3.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 19:17:14,138 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7894): checking encryption for 338e8802045d7b2a5da83a95c9f1aff3 2023-07-12 19:17:14,138 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7897): checking classloading for 338e8802045d7b2a5da83a95c9f1aff3 2023-07-12 19:17:14,139 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1072): Opened b0a19d397667f15760caca207e8c44a2; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9445309120, jitterRate=-0.12033703923225403}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-12 19:17:14,139 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(965): Region open journal for b0a19d397667f15760caca207e8c44a2: 2023-07-12 19:17:14,140 INFO [StoreOpener-338e8802045d7b2a5da83a95c9f1aff3-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 338e8802045d7b2a5da83a95c9f1aff3 2023-07-12 19:17:14,140 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,zzzzz,1689189432311.b0a19d397667f15760caca207e8c44a2., pid=38, masterSystemTime=1689189434117 2023-07-12 19:17:14,145 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=28 updating hbase:meta row=f3e8991941bf8cc6182c695ccc396f36, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase20.apache.org,36571,1689189426727 2023-07-12 19:17:14,146 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689189432311.f3e8991941bf8cc6182c695ccc396f36.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689189434145"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689189434145"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689189434145"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689189434145"}]},"ts":"1689189434145"} 2023-07-12 19:17:14,146 DEBUG [StoreOpener-338e8802045d7b2a5da83a95c9f1aff3-1] util.CommonFSUtils(522): Set storagePolicy=HOT for 
path=hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/data/default/Group_testTableMoveTruncateAndDrop/338e8802045d7b2a5da83a95c9f1aff3/f 2023-07-12 19:17:14,146 DEBUG [StoreOpener-338e8802045d7b2a5da83a95c9f1aff3-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/data/default/Group_testTableMoveTruncateAndDrop/338e8802045d7b2a5da83a95c9f1aff3/f 2023-07-12 19:17:14,147 INFO [StoreOpener-338e8802045d7b2a5da83a95c9f1aff3-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 338e8802045d7b2a5da83a95c9f1aff3 columnFamilyName f 2023-07-12 19:17:14,148 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,zzzzz,1689189432311.b0a19d397667f15760caca207e8c44a2. 2023-07-12 19:17:14,151 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,zzzzz,1689189432311.b0a19d397667f15760caca207e8c44a2. 2023-07-12 19:17:14,152 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,aaaaa,1689189432311.d8b4be039bb66e05e5b7e87e85c454ed. 
2023-07-12 19:17:14,152 INFO [StoreOpener-338e8802045d7b2a5da83a95c9f1aff3-1] regionserver.HStore(310): Store=338e8802045d7b2a5da83a95c9f1aff3/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 19:17:14,152 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => d8b4be039bb66e05e5b7e87e85c454ed, NAME => 'Group_testTableMoveTruncateAndDrop,aaaaa,1689189432311.d8b4be039bb66e05e5b7e87e85c454ed.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'} 2023-07-12 19:17:14,153 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=32 updating hbase:meta row=b0a19d397667f15760caca207e8c44a2, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase20.apache.org,36311,1689189430768 2023-07-12 19:17:14,153 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689189432311.b0a19d397667f15760caca207e8c44a2.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689189434152"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689189434152"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689189434152"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689189434152"}]},"ts":"1689189434152"} 2023-07-12 19:17:14,154 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/data/default/Group_testTableMoveTruncateAndDrop/338e8802045d7b2a5da83a95c9f1aff3 2023-07-12 19:17:14,153 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop d8b4be039bb66e05e5b7e87e85c454ed 2023-07-12 19:17:14,154 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,aaaaa,1689189432311.d8b4be039bb66e05e5b7e87e85c454ed.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 19:17:14,154 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7894): checking encryption for d8b4be039bb66e05e5b7e87e85c454ed 2023-07-12 19:17:14,154 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7897): checking classloading for d8b4be039bb66e05e5b7e87e85c454ed 2023-07-12 19:17:14,158 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=39, resume processing ppid=28 2023-07-12 19:17:14,158 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=39, ppid=28, state=SUCCESS; OpenRegionProcedure f3e8991941bf8cc6182c695ccc396f36, server=jenkins-hbase20.apache.org,36571,1689189426727 in 187 msec 2023-07-12 19:17:14,158 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/data/default/Group_testTableMoveTruncateAndDrop/338e8802045d7b2a5da83a95c9f1aff3 2023-07-12 19:17:14,160 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=38, resume processing ppid=32 2023-07-12 19:17:14,160 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=28, state=SUCCESS; 
TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=f3e8991941bf8cc6182c695ccc396f36, REOPEN/MOVE in 559 msec 2023-07-12 19:17:14,160 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=38, ppid=32, state=SUCCESS; OpenRegionProcedure b0a19d397667f15760caca207e8c44a2, server=jenkins-hbase20.apache.org,36311,1689189430768 in 196 msec 2023-07-12 19:17:14,162 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=32, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=b0a19d397667f15760caca207e8c44a2, REOPEN/MOVE in 550 msec 2023-07-12 19:17:14,163 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1055): writing seq id for 338e8802045d7b2a5da83a95c9f1aff3 2023-07-12 19:17:14,164 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1072): Opened 338e8802045d7b2a5da83a95c9f1aff3; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11104627200, jitterRate=0.03419899940490723}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-12 19:17:14,164 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(965): Region open journal for 338e8802045d7b2a5da83a95c9f1aff3: 2023-07-12 19:17:14,165 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,,1689189432311.338e8802045d7b2a5da83a95c9f1aff3., pid=36, masterSystemTime=1689189434110 2023-07-12 19:17:14,167 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,,1689189432311.338e8802045d7b2a5da83a95c9f1aff3. 2023-07-12 19:17:14,168 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,,1689189432311.338e8802045d7b2a5da83a95c9f1aff3. 2023-07-12 19:17:14,168 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689189432311.08074a1beba6aeec461717c2440138cb. 
2023-07-12 19:17:14,168 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 08074a1beba6aeec461717c2440138cb, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689189432311.08074a1beba6aeec461717c2440138cb.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'} 2023-07-12 19:17:14,168 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 08074a1beba6aeec461717c2440138cb 2023-07-12 19:17:14,168 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689189432311.08074a1beba6aeec461717c2440138cb.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 19:17:14,169 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7894): checking encryption for 08074a1beba6aeec461717c2440138cb 2023-07-12 19:17:14,169 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7897): checking classloading for 08074a1beba6aeec461717c2440138cb 2023-07-12 19:17:14,170 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=26 updating hbase:meta row=338e8802045d7b2a5da83a95c9f1aff3, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase20.apache.org,36571,1689189426727 2023-07-12 19:17:14,170 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,,1689189432311.338e8802045d7b2a5da83a95c9f1aff3.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689189434170"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689189434170"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689189434170"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689189434170"}]},"ts":"1689189434170"} 2023-07-12 19:17:14,171 INFO [StoreOpener-d8b4be039bb66e05e5b7e87e85c454ed-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region d8b4be039bb66e05e5b7e87e85c454ed 2023-07-12 19:17:14,171 INFO [StoreOpener-08074a1beba6aeec461717c2440138cb-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 08074a1beba6aeec461717c2440138cb 2023-07-12 19:17:14,172 DEBUG [StoreOpener-d8b4be039bb66e05e5b7e87e85c454ed-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/data/default/Group_testTableMoveTruncateAndDrop/d8b4be039bb66e05e5b7e87e85c454ed/f 2023-07-12 19:17:14,172 DEBUG [StoreOpener-08074a1beba6aeec461717c2440138cb-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/data/default/Group_testTableMoveTruncateAndDrop/08074a1beba6aeec461717c2440138cb/f 2023-07-12 19:17:14,172 DEBUG [StoreOpener-d8b4be039bb66e05e5b7e87e85c454ed-1] util.CommonFSUtils(522): Set storagePolicy=HOT for 
path=hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/data/default/Group_testTableMoveTruncateAndDrop/d8b4be039bb66e05e5b7e87e85c454ed/f 2023-07-12 19:17:14,172 DEBUG [StoreOpener-08074a1beba6aeec461717c2440138cb-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/data/default/Group_testTableMoveTruncateAndDrop/08074a1beba6aeec461717c2440138cb/f 2023-07-12 19:17:14,173 INFO [StoreOpener-08074a1beba6aeec461717c2440138cb-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 08074a1beba6aeec461717c2440138cb columnFamilyName f 2023-07-12 19:17:14,173 INFO [StoreOpener-d8b4be039bb66e05e5b7e87e85c454ed-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region d8b4be039bb66e05e5b7e87e85c454ed columnFamilyName f 2023-07-12 19:17:14,174 INFO [StoreOpener-08074a1beba6aeec461717c2440138cb-1] regionserver.HStore(310): Store=08074a1beba6aeec461717c2440138cb/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 19:17:14,174 INFO [StoreOpener-d8b4be039bb66e05e5b7e87e85c454ed-1] regionserver.HStore(310): Store=d8b4be039bb66e05e5b7e87e85c454ed/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 19:17:14,175 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/data/default/Group_testTableMoveTruncateAndDrop/08074a1beba6aeec461717c2440138cb 2023-07-12 19:17:14,175 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/data/default/Group_testTableMoveTruncateAndDrop/d8b4be039bb66e05e5b7e87e85c454ed 2023-07-12 19:17:14,178 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under 
hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/data/default/Group_testTableMoveTruncateAndDrop/d8b4be039bb66e05e5b7e87e85c454ed 2023-07-12 19:17:14,180 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=36, resume processing ppid=26 2023-07-12 19:17:14,180 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=36, ppid=26, state=SUCCESS; OpenRegionProcedure 338e8802045d7b2a5da83a95c9f1aff3, server=jenkins-hbase20.apache.org,36571,1689189426727 in 215 msec 2023-07-12 19:17:14,180 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/data/default/Group_testTableMoveTruncateAndDrop/08074a1beba6aeec461717c2440138cb 2023-07-12 19:17:14,183 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=26, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=338e8802045d7b2a5da83a95c9f1aff3, REOPEN/MOVE in 590 msec 2023-07-12 19:17:14,184 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1055): writing seq id for d8b4be039bb66e05e5b7e87e85c454ed 2023-07-12 19:17:14,185 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1072): Opened d8b4be039bb66e05e5b7e87e85c454ed; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10650108320, jitterRate=-0.008131369948387146}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-12 19:17:14,185 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(965): Region open journal for d8b4be039bb66e05e5b7e87e85c454ed: 2023-07-12 19:17:14,186 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,aaaaa,1689189432311.d8b4be039bb66e05e5b7e87e85c454ed., pid=40, masterSystemTime=1689189434117 2023-07-12 19:17:14,188 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,aaaaa,1689189432311.d8b4be039bb66e05e5b7e87e85c454ed. 2023-07-12 19:17:14,189 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,aaaaa,1689189432311.d8b4be039bb66e05e5b7e87e85c454ed. 
2023-07-12 19:17:14,190 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=27 updating hbase:meta row=d8b4be039bb66e05e5b7e87e85c454ed, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase20.apache.org,36311,1689189430768 2023-07-12 19:17:14,190 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689189432311.d8b4be039bb66e05e5b7e87e85c454ed.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689189434190"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689189434190"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689189434190"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689189434190"}]},"ts":"1689189434190"} 2023-07-12 19:17:14,192 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1055): writing seq id for 08074a1beba6aeec461717c2440138cb 2023-07-12 19:17:14,194 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1072): Opened 08074a1beba6aeec461717c2440138cb; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11581963360, jitterRate=0.07865439355373383}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-12 19:17:14,194 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(965): Region open journal for 08074a1beba6aeec461717c2440138cb: 2023-07-12 19:17:14,195 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689189432311.08074a1beba6aeec461717c2440138cb., pid=37, masterSystemTime=1689189434110 2023-07-12 19:17:14,206 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=40, resume processing ppid=27 2023-07-12 19:17:14,206 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=40, ppid=27, state=SUCCESS; OpenRegionProcedure d8b4be039bb66e05e5b7e87e85c454ed, server=jenkins-hbase20.apache.org,36311,1689189430768 in 226 msec 2023-07-12 19:17:14,207 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=30 updating hbase:meta row=08074a1beba6aeec461717c2440138cb, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase20.apache.org,36571,1689189426727 2023-07-12 19:17:14,207 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689189432311.08074a1beba6aeec461717c2440138cb.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689189434207"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689189434207"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689189434207"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689189434207"}]},"ts":"1689189434207"} 2023-07-12 19:17:14,208 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689189432311.08074a1beba6aeec461717c2440138cb. 2023-07-12 19:17:14,208 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689189432311.08074a1beba6aeec461717c2440138cb. 
2023-07-12 19:17:14,210 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=27, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=d8b4be039bb66e05e5b7e87e85c454ed, REOPEN/MOVE in 612 msec 2023-07-12 19:17:14,216 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=37, resume processing ppid=30 2023-07-12 19:17:14,216 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=37, ppid=30, state=SUCCESS; OpenRegionProcedure 08074a1beba6aeec461717c2440138cb, server=jenkins-hbase20.apache.org,36571,1689189426727 in 251 msec 2023-07-12 19:17:14,244 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=30, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=08074a1beba6aeec461717c2440138cb, REOPEN/MOVE in 619 msec 2023-07-12 19:17:14,616 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] procedure.ProcedureSyncWait(216): waitFor pid=26 2023-07-12 19:17:14,616 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminServer(369): All regions from table(s) [Group_testTableMoveTruncateAndDrop] moved to target group Group_testTableMoveTruncateAndDrop_806716229. 2023-07-12 19:17:14,616 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveTables 2023-07-12 19:17:14,624 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-12 19:17:14,624 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 19:17:14,630 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//148.251.75.209 initiates rsgroup info retrieval, table=Group_testTableMoveTruncateAndDrop 2023-07-12 19:17:14,630 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-12 19:17:14,631 INFO [Listener at localhost.localdomain/34239] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 19:17:14,642 INFO [Listener at localhost.localdomain/34239] client.HBaseAdmin$15(890): Started disable of Group_testTableMoveTruncateAndDrop 2023-07-12 19:17:14,647 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.HMaster$11(2418): Client=jenkins//148.251.75.209 disable Group_testTableMoveTruncateAndDrop 2023-07-12 19:17:14,653 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] procedure2.ProcedureExecutor(1029): Stored pid=41, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-12 19:17:14,658 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689189434657"}]},"ts":"1689189434657"} 2023-07-12 19:17:14,659 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.MasterRpcServices(1230): Checking to see if procedure 
is done pid=41 2023-07-12 19:17:14,659 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=DISABLING in hbase:meta 2023-07-12 19:17:14,661 INFO [PEWorker-1] procedure.DisableTableProcedure(293): Set Group_testTableMoveTruncateAndDrop to state=DISABLING 2023-07-12 19:17:14,663 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=42, ppid=41, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=338e8802045d7b2a5da83a95c9f1aff3, UNASSIGN}, {pid=43, ppid=41, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=d8b4be039bb66e05e5b7e87e85c454ed, UNASSIGN}, {pid=44, ppid=41, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=f3e8991941bf8cc6182c695ccc396f36, UNASSIGN}, {pid=45, ppid=41, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=08074a1beba6aeec461717c2440138cb, UNASSIGN}, {pid=46, ppid=41, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=b0a19d397667f15760caca207e8c44a2, UNASSIGN}] 2023-07-12 19:17:14,665 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=45, ppid=41, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=08074a1beba6aeec461717c2440138cb, UNASSIGN 2023-07-12 19:17:14,665 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=46, ppid=41, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=b0a19d397667f15760caca207e8c44a2, UNASSIGN 2023-07-12 19:17:14,665 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=44, ppid=41, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=f3e8991941bf8cc6182c695ccc396f36, UNASSIGN 2023-07-12 19:17:14,666 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=43, ppid=41, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=d8b4be039bb66e05e5b7e87e85c454ed, UNASSIGN 2023-07-12 19:17:14,666 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=42, ppid=41, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=338e8802045d7b2a5da83a95c9f1aff3, UNASSIGN 2023-07-12 19:17:14,667 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=45 updating hbase:meta row=08074a1beba6aeec461717c2440138cb, regionState=CLOSING, regionLocation=jenkins-hbase20.apache.org,36571,1689189426727 2023-07-12 19:17:14,667 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=46 updating hbase:meta row=b0a19d397667f15760caca207e8c44a2, regionState=CLOSING, regionLocation=jenkins-hbase20.apache.org,36311,1689189430768 2023-07-12 19:17:14,667 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=43 updating hbase:meta row=d8b4be039bb66e05e5b7e87e85c454ed, regionState=CLOSING, regionLocation=jenkins-hbase20.apache.org,36311,1689189430768 2023-07-12 19:17:14,667 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=42 updating hbase:meta 
row=338e8802045d7b2a5da83a95c9f1aff3, regionState=CLOSING, regionLocation=jenkins-hbase20.apache.org,36571,1689189426727 2023-07-12 19:17:14,667 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=44 updating hbase:meta row=f3e8991941bf8cc6182c695ccc396f36, regionState=CLOSING, regionLocation=jenkins-hbase20.apache.org,36571,1689189426727 2023-07-12 19:17:14,667 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689189432311.d8b4be039bb66e05e5b7e87e85c454ed.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689189434667"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689189434667"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689189434667"}]},"ts":"1689189434667"} 2023-07-12 19:17:14,667 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689189432311.b0a19d397667f15760caca207e8c44a2.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689189434667"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689189434667"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689189434667"}]},"ts":"1689189434667"} 2023-07-12 19:17:14,667 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689189432311.08074a1beba6aeec461717c2440138cb.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689189434667"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689189434667"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689189434667"}]},"ts":"1689189434667"} 2023-07-12 19:17:14,667 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689189432311.f3e8991941bf8cc6182c695ccc396f36.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689189434667"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689189434667"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689189434667"}]},"ts":"1689189434667"} 2023-07-12 19:17:14,667 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,,1689189432311.338e8802045d7b2a5da83a95c9f1aff3.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689189434667"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689189434667"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689189434667"}]},"ts":"1689189434667"} 2023-07-12 19:17:14,669 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=47, ppid=43, state=RUNNABLE; CloseRegionProcedure d8b4be039bb66e05e5b7e87e85c454ed, server=jenkins-hbase20.apache.org,36311,1689189430768}] 2023-07-12 19:17:14,670 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=48, ppid=46, state=RUNNABLE; CloseRegionProcedure b0a19d397667f15760caca207e8c44a2, server=jenkins-hbase20.apache.org,36311,1689189430768}] 2023-07-12 19:17:14,671 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=49, ppid=45, state=RUNNABLE; CloseRegionProcedure 08074a1beba6aeec461717c2440138cb, server=jenkins-hbase20.apache.org,36571,1689189426727}] 2023-07-12 19:17:14,672 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=50, ppid=44, state=RUNNABLE; CloseRegionProcedure f3e8991941bf8cc6182c695ccc396f36, 
server=jenkins-hbase20.apache.org,36571,1689189426727}] 2023-07-12 19:17:14,674 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=51, ppid=42, state=RUNNABLE; CloseRegionProcedure 338e8802045d7b2a5da83a95c9f1aff3, server=jenkins-hbase20.apache.org,36571,1689189426727}] 2023-07-12 19:17:14,760 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.MasterRpcServices(1230): Checking to see if procedure is done pid=41 2023-07-12 19:17:14,822 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.UnassignRegionHandler(111): Close b0a19d397667f15760caca207e8c44a2 2023-07-12 19:17:14,823 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing b0a19d397667f15760caca207e8c44a2, disabling compactions & flushes 2023-07-12 19:17:14,823 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,zzzzz,1689189432311.b0a19d397667f15760caca207e8c44a2. 2023-07-12 19:17:14,823 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689189432311.b0a19d397667f15760caca207e8c44a2. 2023-07-12 19:17:14,823 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689189432311.b0a19d397667f15760caca207e8c44a2. after waiting 0 ms 2023-07-12 19:17:14,823 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,zzzzz,1689189432311.b0a19d397667f15760caca207e8c44a2. 2023-07-12 19:17:14,825 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.UnassignRegionHandler(111): Close 338e8802045d7b2a5da83a95c9f1aff3 2023-07-12 19:17:14,826 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing 338e8802045d7b2a5da83a95c9f1aff3, disabling compactions & flushes 2023-07-12 19:17:14,827 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,,1689189432311.338e8802045d7b2a5da83a95c9f1aff3. 2023-07-12 19:17:14,827 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,,1689189432311.338e8802045d7b2a5da83a95c9f1aff3. 2023-07-12 19:17:14,827 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,,1689189432311.338e8802045d7b2a5da83a95c9f1aff3. after waiting 0 ms 2023-07-12 19:17:14,827 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,,1689189432311.338e8802045d7b2a5da83a95c9f1aff3. 
2023-07-12 19:17:14,842 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-hbase.properties,hadoop-metrics2.properties 2023-07-12 19:17:14,843 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/data/default/Group_testTableMoveTruncateAndDrop/b0a19d397667f15760caca207e8c44a2/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-12 19:17:14,848 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,zzzzz,1689189432311.b0a19d397667f15760caca207e8c44a2. 2023-07-12 19:17:14,848 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for b0a19d397667f15760caca207e8c44a2: 2023-07-12 19:17:14,852 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=46 updating hbase:meta row=b0a19d397667f15760caca207e8c44a2, regionState=CLOSED 2023-07-12 19:17:14,852 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689189432311.b0a19d397667f15760caca207e8c44a2.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689189434852"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689189434852"}]},"ts":"1689189434852"} 2023-07-12 19:17:14,854 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.UnassignRegionHandler(149): Closed b0a19d397667f15760caca207e8c44a2 2023-07-12 19:17:14,855 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.UnassignRegionHandler(111): Close d8b4be039bb66e05e5b7e87e85c454ed 2023-07-12 19:17:14,860 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=48, resume processing ppid=46 2023-07-12 19:17:14,860 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=48, ppid=46, state=SUCCESS; CloseRegionProcedure b0a19d397667f15760caca207e8c44a2, server=jenkins-hbase20.apache.org,36311,1689189430768 in 185 msec 2023-07-12 19:17:14,867 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing d8b4be039bb66e05e5b7e87e85c454ed, disabling compactions & flushes 2023-07-12 19:17:14,868 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,aaaaa,1689189432311.d8b4be039bb66e05e5b7e87e85c454ed. 2023-07-12 19:17:14,868 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689189432311.d8b4be039bb66e05e5b7e87e85c454ed. 2023-07-12 19:17:14,868 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689189432311.d8b4be039bb66e05e5b7e87e85c454ed. after waiting 0 ms 2023-07-12 19:17:14,868 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,aaaaa,1689189432311.d8b4be039bb66e05e5b7e87e85c454ed. 
2023-07-12 19:17:14,880 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/data/default/Group_testTableMoveTruncateAndDrop/338e8802045d7b2a5da83a95c9f1aff3/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-12 19:17:14,881 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=46, ppid=41, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=b0a19d397667f15760caca207e8c44a2, UNASSIGN in 197 msec 2023-07-12 19:17:14,891 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,,1689189432311.338e8802045d7b2a5da83a95c9f1aff3. 2023-07-12 19:17:14,891 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for 338e8802045d7b2a5da83a95c9f1aff3: 2023-07-12 19:17:14,895 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.UnassignRegionHandler(149): Closed 338e8802045d7b2a5da83a95c9f1aff3 2023-07-12 19:17:14,895 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.UnassignRegionHandler(111): Close 08074a1beba6aeec461717c2440138cb 2023-07-12 19:17:14,896 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing 08074a1beba6aeec461717c2440138cb, disabling compactions & flushes 2023-07-12 19:17:14,897 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689189432311.08074a1beba6aeec461717c2440138cb. 2023-07-12 19:17:14,897 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689189432311.08074a1beba6aeec461717c2440138cb. 2023-07-12 19:17:14,897 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689189432311.08074a1beba6aeec461717c2440138cb. after waiting 0 ms 2023-07-12 19:17:14,897 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689189432311.08074a1beba6aeec461717c2440138cb. 2023-07-12 19:17:14,897 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/data/default/Group_testTableMoveTruncateAndDrop/d8b4be039bb66e05e5b7e87e85c454ed/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-12 19:17:14,898 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,aaaaa,1689189432311.d8b4be039bb66e05e5b7e87e85c454ed. 
2023-07-12 19:17:14,898 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for d8b4be039bb66e05e5b7e87e85c454ed: 2023-07-12 19:17:14,901 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=42 updating hbase:meta row=338e8802045d7b2a5da83a95c9f1aff3, regionState=CLOSED 2023-07-12 19:17:14,901 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,,1689189432311.338e8802045d7b2a5da83a95c9f1aff3.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689189434901"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689189434901"}]},"ts":"1689189434901"} 2023-07-12 19:17:14,901 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.UnassignRegionHandler(149): Closed d8b4be039bb66e05e5b7e87e85c454ed 2023-07-12 19:17:14,902 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=43 updating hbase:meta row=d8b4be039bb66e05e5b7e87e85c454ed, regionState=CLOSED 2023-07-12 19:17:14,902 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689189432311.d8b4be039bb66e05e5b7e87e85c454ed.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689189434902"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689189434902"}]},"ts":"1689189434902"} 2023-07-12 19:17:14,906 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=51, resume processing ppid=42 2023-07-12 19:17:14,906 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=51, ppid=42, state=SUCCESS; CloseRegionProcedure 338e8802045d7b2a5da83a95c9f1aff3, server=jenkins-hbase20.apache.org,36571,1689189426727 in 230 msec 2023-07-12 19:17:14,907 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=47, resume processing ppid=43 2023-07-12 19:17:14,907 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=47, ppid=43, state=SUCCESS; CloseRegionProcedure d8b4be039bb66e05e5b7e87e85c454ed, server=jenkins-hbase20.apache.org,36311,1689189430768 in 235 msec 2023-07-12 19:17:14,910 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=42, ppid=41, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=338e8802045d7b2a5da83a95c9f1aff3, UNASSIGN in 243 msec 2023-07-12 19:17:14,910 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=43, ppid=41, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=d8b4be039bb66e05e5b7e87e85c454ed, UNASSIGN in 244 msec 2023-07-12 19:17:14,914 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/data/default/Group_testTableMoveTruncateAndDrop/08074a1beba6aeec461717c2440138cb/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-12 19:17:14,917 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689189432311.08074a1beba6aeec461717c2440138cb. 
2023-07-12 19:17:14,917 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for 08074a1beba6aeec461717c2440138cb: 2023-07-12 19:17:14,931 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.UnassignRegionHandler(149): Closed 08074a1beba6aeec461717c2440138cb 2023-07-12 19:17:14,931 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.UnassignRegionHandler(111): Close f3e8991941bf8cc6182c695ccc396f36 2023-07-12 19:17:14,933 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing f3e8991941bf8cc6182c695ccc396f36, disabling compactions & flushes 2023-07-12 19:17:14,933 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689189432311.f3e8991941bf8cc6182c695ccc396f36. 2023-07-12 19:17:14,933 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689189432311.f3e8991941bf8cc6182c695ccc396f36. 2023-07-12 19:17:14,933 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689189432311.f3e8991941bf8cc6182c695ccc396f36. after waiting 0 ms 2023-07-12 19:17:14,933 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689189432311.f3e8991941bf8cc6182c695ccc396f36. 2023-07-12 19:17:14,935 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=45 updating hbase:meta row=08074a1beba6aeec461717c2440138cb, regionState=CLOSED 2023-07-12 19:17:14,935 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689189432311.08074a1beba6aeec461717c2440138cb.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689189434935"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689189434935"}]},"ts":"1689189434935"} 2023-07-12 19:17:14,942 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=49, resume processing ppid=45 2023-07-12 19:17:14,942 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=49, ppid=45, state=SUCCESS; CloseRegionProcedure 08074a1beba6aeec461717c2440138cb, server=jenkins-hbase20.apache.org,36571,1689189426727 in 268 msec 2023-07-12 19:17:14,946 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/data/default/Group_testTableMoveTruncateAndDrop/f3e8991941bf8cc6182c695ccc396f36/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-12 19:17:14,947 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689189432311.f3e8991941bf8cc6182c695ccc396f36. 
2023-07-12 19:17:14,947 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for f3e8991941bf8cc6182c695ccc396f36: 2023-07-12 19:17:14,949 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=45, ppid=41, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=08074a1beba6aeec461717c2440138cb, UNASSIGN in 279 msec 2023-07-12 19:17:14,952 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.UnassignRegionHandler(149): Closed f3e8991941bf8cc6182c695ccc396f36 2023-07-12 19:17:14,952 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=44 updating hbase:meta row=f3e8991941bf8cc6182c695ccc396f36, regionState=CLOSED 2023-07-12 19:17:14,952 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689189432311.f3e8991941bf8cc6182c695ccc396f36.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689189434952"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689189434952"}]},"ts":"1689189434952"} 2023-07-12 19:17:14,962 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.MasterRpcServices(1230): Checking to see if procedure is done pid=41 2023-07-12 19:17:14,963 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=50, resume processing ppid=44 2023-07-12 19:17:14,963 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=50, ppid=44, state=SUCCESS; CloseRegionProcedure f3e8991941bf8cc6182c695ccc396f36, server=jenkins-hbase20.apache.org,36571,1689189426727 in 283 msec 2023-07-12 19:17:14,971 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=44, resume processing ppid=41 2023-07-12 19:17:14,971 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=44, ppid=41, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=f3e8991941bf8cc6182c695ccc396f36, UNASSIGN in 300 msec 2023-07-12 19:17:14,972 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689189434972"}]},"ts":"1689189434972"} 2023-07-12 19:17:14,974 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=DISABLED in hbase:meta 2023-07-12 19:17:14,975 INFO [PEWorker-2] procedure.DisableTableProcedure(305): Set Group_testTableMoveTruncateAndDrop to state=DISABLED 2023-07-12 19:17:14,980 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=41, state=SUCCESS; DisableTableProcedure table=Group_testTableMoveTruncateAndDrop in 329 msec 2023-07-12 19:17:14,982 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(130): Registering adapter for the MetricRegistry: Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver 2023-07-12 19:17:14,982 INFO [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(134): Registering Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver Metrics about HBase MasterObservers 2023-07-12 19:17:14,982 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(130): Registering adapter for the MetricRegistry: RegionServer,sub=Coprocessor.Region.CP_org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-12 19:17:14,983 INFO [HBase-Metrics2-1] 
impl.GlobalMetricRegistriesAdapter(134): Registering RegionServer,sub=Coprocessor.Region.CP_org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint Metrics about HBase RegionObservers 2023-07-12 19:17:14,983 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(130): Registering adapter for the MetricRegistry: Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint 2023-07-12 19:17:14,983 INFO [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(134): Registering Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint Metrics about HBase MasterObservers 2023-07-12 19:17:14,984 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:meta' 2023-07-12 19:17:14,985 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:rsgroup' 2023-07-12 19:17:15,264 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.MasterRpcServices(1230): Checking to see if procedure is done pid=41 2023-07-12 19:17:15,264 INFO [Listener at localhost.localdomain/34239] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:Group_testTableMoveTruncateAndDrop, procId: 41 completed 2023-07-12 19:17:15,265 INFO [Listener at localhost.localdomain/34239] client.HBaseAdmin$13(770): Started truncating Group_testTableMoveTruncateAndDrop 2023-07-12 19:17:15,271 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.HMaster$6(2260): Client=jenkins//148.251.75.209 truncate Group_testTableMoveTruncateAndDrop 2023-07-12 19:17:15,279 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] procedure2.ProcedureExecutor(1029): Stored pid=52, state=RUNNABLE:TRUNCATE_TABLE_PRE_OPERATION; TruncateTableProcedure (table=Group_testTableMoveTruncateAndDrop preserveSplits=true) 2023-07-12 19:17:15,283 DEBUG [PEWorker-3] procedure.TruncateTableProcedure(87): waiting for 'Group_testTableMoveTruncateAndDrop' regions in transition 2023-07-12 19:17:15,284 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.MasterRpcServices(1230): Checking to see if procedure is done pid=52 2023-07-12 19:17:15,300 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/.tmp/data/default/Group_testTableMoveTruncateAndDrop/d8b4be039bb66e05e5b7e87e85c454ed 2023-07-12 19:17:15,300 DEBUG [HFileArchiver-8] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/.tmp/data/default/Group_testTableMoveTruncateAndDrop/338e8802045d7b2a5da83a95c9f1aff3 2023-07-12 19:17:15,300 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/.tmp/data/default/Group_testTableMoveTruncateAndDrop/08074a1beba6aeec461717c2440138cb 2023-07-12 19:17:15,300 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/.tmp/data/default/Group_testTableMoveTruncateAndDrop/f3e8991941bf8cc6182c695ccc396f36 2023-07-12 19:17:15,300 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING 
hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/.tmp/data/default/Group_testTableMoveTruncateAndDrop/b0a19d397667f15760caca207e8c44a2 2023-07-12 19:17:15,307 DEBUG [HFileArchiver-5] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/.tmp/data/default/Group_testTableMoveTruncateAndDrop/b0a19d397667f15760caca207e8c44a2/f, FileablePath, hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/.tmp/data/default/Group_testTableMoveTruncateAndDrop/b0a19d397667f15760caca207e8c44a2/recovered.edits] 2023-07-12 19:17:15,307 DEBUG [HFileArchiver-2] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/.tmp/data/default/Group_testTableMoveTruncateAndDrop/f3e8991941bf8cc6182c695ccc396f36/f, FileablePath, hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/.tmp/data/default/Group_testTableMoveTruncateAndDrop/f3e8991941bf8cc6182c695ccc396f36/recovered.edits] 2023-07-12 19:17:15,308 DEBUG [HFileArchiver-3] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/.tmp/data/default/Group_testTableMoveTruncateAndDrop/08074a1beba6aeec461717c2440138cb/f, FileablePath, hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/.tmp/data/default/Group_testTableMoveTruncateAndDrop/08074a1beba6aeec461717c2440138cb/recovered.edits] 2023-07-12 19:17:15,308 DEBUG [HFileArchiver-1] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/.tmp/data/default/Group_testTableMoveTruncateAndDrop/d8b4be039bb66e05e5b7e87e85c454ed/f, FileablePath, hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/.tmp/data/default/Group_testTableMoveTruncateAndDrop/d8b4be039bb66e05e5b7e87e85c454ed/recovered.edits] 2023-07-12 19:17:15,315 DEBUG [HFileArchiver-8] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/.tmp/data/default/Group_testTableMoveTruncateAndDrop/338e8802045d7b2a5da83a95c9f1aff3/f, FileablePath, hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/.tmp/data/default/Group_testTableMoveTruncateAndDrop/338e8802045d7b2a5da83a95c9f1aff3/recovered.edits] 2023-07-12 19:17:15,343 DEBUG [HFileArchiver-3] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/.tmp/data/default/Group_testTableMoveTruncateAndDrop/08074a1beba6aeec461717c2440138cb/recovered.edits/7.seqid to hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/archive/data/default/Group_testTableMoveTruncateAndDrop/08074a1beba6aeec461717c2440138cb/recovered.edits/7.seqid 2023-07-12 19:17:15,344 DEBUG [HFileArchiver-5] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/.tmp/data/default/Group_testTableMoveTruncateAndDrop/b0a19d397667f15760caca207e8c44a2/recovered.edits/7.seqid to 
hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/archive/data/default/Group_testTableMoveTruncateAndDrop/b0a19d397667f15760caca207e8c44a2/recovered.edits/7.seqid 2023-07-12 19:17:15,345 DEBUG [HFileArchiver-5] backup.HFileArchiver(596): Deleted hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/.tmp/data/default/Group_testTableMoveTruncateAndDrop/b0a19d397667f15760caca207e8c44a2 2023-07-12 19:17:15,345 DEBUG [HFileArchiver-3] backup.HFileArchiver(596): Deleted hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/.tmp/data/default/Group_testTableMoveTruncateAndDrop/08074a1beba6aeec461717c2440138cb 2023-07-12 19:17:15,346 DEBUG [HFileArchiver-2] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/.tmp/data/default/Group_testTableMoveTruncateAndDrop/f3e8991941bf8cc6182c695ccc396f36/recovered.edits/7.seqid to hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/archive/data/default/Group_testTableMoveTruncateAndDrop/f3e8991941bf8cc6182c695ccc396f36/recovered.edits/7.seqid 2023-07-12 19:17:15,347 DEBUG [HFileArchiver-2] backup.HFileArchiver(596): Deleted hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/.tmp/data/default/Group_testTableMoveTruncateAndDrop/f3e8991941bf8cc6182c695ccc396f36 2023-07-12 19:17:15,350 DEBUG [HFileArchiver-1] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/.tmp/data/default/Group_testTableMoveTruncateAndDrop/d8b4be039bb66e05e5b7e87e85c454ed/recovered.edits/7.seqid to hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/archive/data/default/Group_testTableMoveTruncateAndDrop/d8b4be039bb66e05e5b7e87e85c454ed/recovered.edits/7.seqid 2023-07-12 19:17:15,351 DEBUG [HFileArchiver-1] backup.HFileArchiver(596): Deleted hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/.tmp/data/default/Group_testTableMoveTruncateAndDrop/d8b4be039bb66e05e5b7e87e85c454ed 2023-07-12 19:17:15,353 DEBUG [HFileArchiver-8] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/.tmp/data/default/Group_testTableMoveTruncateAndDrop/338e8802045d7b2a5da83a95c9f1aff3/recovered.edits/7.seqid to hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/archive/data/default/Group_testTableMoveTruncateAndDrop/338e8802045d7b2a5da83a95c9f1aff3/recovered.edits/7.seqid 2023-07-12 19:17:15,354 DEBUG [HFileArchiver-8] backup.HFileArchiver(596): Deleted hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/.tmp/data/default/Group_testTableMoveTruncateAndDrop/338e8802045d7b2a5da83a95c9f1aff3 2023-07-12 19:17:15,354 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived Group_testTableMoveTruncateAndDrop regions 2023-07-12 19:17:15,388 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.MasterRpcServices(1230): Checking to see if procedure is done pid=52 2023-07-12 19:17:15,389 WARN [PEWorker-3] procedure.DeleteTableProcedure(384): Deleting some vestigial 5 rows of Group_testTableMoveTruncateAndDrop from hbase:meta 
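
The HFileArchiver records above move each old region's files (here just the recovered.edits/7.seqid markers) under the cluster's archive directory before the region directory itself is removed, instead of deleting data outright. A rough illustration of that move-then-delete pattern with the plain Hadoop FileSystem API follows; the paths are hypothetical and this is not the HFileArchiver implementation.

    import java.io.IOException;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class ArchiveThenDelete {
      public static void main(String[] args) throws Exception {
        FileSystem fs = FileSystem.get(new Configuration());
        // Hypothetical paths mirroring the .tmp -> archive moves in the records above.
        Path seqidFile = new Path("/hbase/.tmp/data/default/SomeTable/region-x/recovered.edits/7.seqid");
        Path archived  = new Path("/hbase/archive/data/default/SomeTable/region-x/recovered.edits/7.seqid");
        fs.mkdirs(archived.getParent());            // make sure the archive subtree exists
        if (!fs.rename(seqidFile, archived)) {      // move rather than copy: a metadata-only operation on HDFS
          throw new IOException("archive move failed for " + seqidFile);
        }
        Path regionDir = seqidFile.getParent().getParent();  // .../SomeTable/region-x
        fs.delete(regionDir, true);                 // then remove the emptied region directory
      }
    }
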
2023-07-12 19:17:15,411 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(421): Removing 'Group_testTableMoveTruncateAndDrop' descriptor. 2023-07-12 19:17:15,412 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(411): Removing 'Group_testTableMoveTruncateAndDrop' from region states. 2023-07-12 19:17:15,412 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,,1689189432311.338e8802045d7b2a5da83a95c9f1aff3.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689189435412"}]},"ts":"9223372036854775807"} 2023-07-12 19:17:15,413 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689189432311.d8b4be039bb66e05e5b7e87e85c454ed.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689189435412"}]},"ts":"9223372036854775807"} 2023-07-12 19:17:15,413 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689189432311.f3e8991941bf8cc6182c695ccc396f36.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689189435412"}]},"ts":"9223372036854775807"} 2023-07-12 19:17:15,413 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689189432311.08074a1beba6aeec461717c2440138cb.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689189435412"}]},"ts":"9223372036854775807"} 2023-07-12 19:17:15,413 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689189432311.b0a19d397667f15760caca207e8c44a2.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689189435412"}]},"ts":"9223372036854775807"} 2023-07-12 19:17:15,421 INFO [PEWorker-3] hbase.MetaTableAccessor(1788): Deleted 5 regions from META 2023-07-12 19:17:15,422 DEBUG [PEWorker-3] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => 338e8802045d7b2a5da83a95c9f1aff3, NAME => 'Group_testTableMoveTruncateAndDrop,,1689189432311.338e8802045d7b2a5da83a95c9f1aff3.', STARTKEY => '', ENDKEY => 'aaaaa'}, {ENCODED => d8b4be039bb66e05e5b7e87e85c454ed, NAME => 'Group_testTableMoveTruncateAndDrop,aaaaa,1689189432311.d8b4be039bb66e05e5b7e87e85c454ed.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'}, {ENCODED => f3e8991941bf8cc6182c695ccc396f36, NAME => 'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689189432311.f3e8991941bf8cc6182c695ccc396f36.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'}, {ENCODED => 08074a1beba6aeec461717c2440138cb, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689189432311.08074a1beba6aeec461717c2440138cb.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'}, {ENCODED => b0a19d397667f15760caca207e8c44a2, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1689189432311.b0a19d397667f15760caca207e8c44a2.', STARTKEY => 'zzzzz', ENDKEY => ''}] 2023-07-12 19:17:15,422 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(415): Marking 'Group_testTableMoveTruncateAndDrop' as deleted. 
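
Each Delete above targets one region's row in hbase:meta with ts=9223372036854775807, i.e. Long.MAX_VALUE, so every version of every info cell in that row is covered. The sketch below only shows the shape of such a Delete with the client API; the row key is made up, and hand-editing hbase:meta is not something to do on a live cluster.

    import org.apache.hadoop.hbase.client.Delete;
    import org.apache.hadoop.hbase.util.Bytes;

    public class MetaDeleteShape {
      public static void main(String[] args) {
        // Illustrative row key only; real keys end with the region's encoded name and a trailing dot.
        byte[] row = Bytes.toBytes("SomeTable,aaaaa,1689000000000.0123456789abcdef0123456789abcdef.");
        Delete d = new Delete(row);
        // ts = 9223372036854775807 (Long.MAX_VALUE) in the log's JSON means the whole
        // info family is removed regardless of the individual cell timestamps.
        d.addFamily(Bytes.toBytes("info"), Long.MAX_VALUE);
        System.out.println(d);  // should print a JSON map similar to the log's Delete {...} lines
      }
    }
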
2023-07-12 19:17:15,422 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689189435422"}]},"ts":"9223372036854775807"} 2023-07-12 19:17:15,430 INFO [PEWorker-3] hbase.MetaTableAccessor(1658): Deleted table Group_testTableMoveTruncateAndDrop state from META 2023-07-12 19:17:15,440 DEBUG [HFileArchiver-6] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/.tmp/data/default/Group_testTableMoveTruncateAndDrop/500ec7f989dfe7024824e612e29163c0 2023-07-12 19:17:15,440 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/.tmp/data/default/Group_testTableMoveTruncateAndDrop/9e3d445b811f09c5dd148ff8da779214 2023-07-12 19:17:15,440 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/.tmp/data/default/Group_testTableMoveTruncateAndDrop/1d0045c2ebb63d729efb387c54da42d2 2023-07-12 19:17:15,440 DEBUG [HFileArchiver-7] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/.tmp/data/default/Group_testTableMoveTruncateAndDrop/63aebfefe46c17a5b69f6ee40592df33 2023-07-12 19:17:15,440 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/.tmp/data/default/Group_testTableMoveTruncateAndDrop/a2d9c9d41295083293def807dd1b3abd 2023-07-12 19:17:15,441 DEBUG [HFileArchiver-6] backup.HFileArchiver(153): Directory hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/.tmp/data/default/Group_testTableMoveTruncateAndDrop/500ec7f989dfe7024824e612e29163c0 empty. 2023-07-12 19:17:15,441 DEBUG [HFileArchiver-4] backup.HFileArchiver(153): Directory hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/.tmp/data/default/Group_testTableMoveTruncateAndDrop/a2d9c9d41295083293def807dd1b3abd empty. 2023-07-12 19:17:15,442 DEBUG [HFileArchiver-3] backup.HFileArchiver(153): Directory hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/.tmp/data/default/Group_testTableMoveTruncateAndDrop/9e3d445b811f09c5dd148ff8da779214 empty. 2023-07-12 19:17:15,442 DEBUG [HFileArchiver-5] backup.HFileArchiver(153): Directory hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/.tmp/data/default/Group_testTableMoveTruncateAndDrop/1d0045c2ebb63d729efb387c54da42d2 empty. 2023-07-12 19:17:15,442 DEBUG [HFileArchiver-6] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/.tmp/data/default/Group_testTableMoveTruncateAndDrop/500ec7f989dfe7024824e612e29163c0 2023-07-12 19:17:15,442 DEBUG [HFileArchiver-7] backup.HFileArchiver(153): Directory hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/.tmp/data/default/Group_testTableMoveTruncateAndDrop/63aebfefe46c17a5b69f6ee40592df33 empty. 
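
At this point the old table is gone from hbase:meta, and the truncate step is cleaning up and (re)creating directories for five new regions with fresh encoded names; because preserveSplits=true they keep the same split boundaries (aaaaa, i\xBF\x14i\xBE, r\x1C\xC7r\x1B, zzzzz), as the creation records just below show. A hypothetical client-side equivalent of "recreate with the same splits" would look roughly like this; the descriptor is reduced to the 'f' family and only the split keys are copied from the log.

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
    import org.apache.hadoop.hbase.util.Bytes;

    public class RecreateWithSplits {
      public static void main(String[] args) throws Exception {
        TableName table = TableName.valueOf("Group_testTableMoveTruncateAndDrop");
        // The same four split keys that bound the regions in this log;
        // toBytesBinary understands the \xNN escapes used in the region names.
        byte[][] splits = new byte[][] {
            Bytes.toBytes("aaaaa"),
            Bytes.toBytesBinary("i\\xBF\\x14i\\xBE"),
            Bytes.toBytesBinary("r\\x1C\\xC7r\\x1B"),
            Bytes.toBytes("zzzzz")
        };
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             Admin admin = conn.getAdmin()) {
          admin.createTable(
              TableDescriptorBuilder.newBuilder(table)
                  .setColumnFamily(ColumnFamilyDescriptorBuilder.newBuilder(Bytes.toBytes("f"))
                      .setMaxVersions(1)        // VERSIONS => '1' in the logged descriptor
                      .build())
                  .build(),
              splits);                          // pre-split exactly as before
        }
      }
    }
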
2023-07-12 19:17:15,442 DEBUG [HFileArchiver-4] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/.tmp/data/default/Group_testTableMoveTruncateAndDrop/a2d9c9d41295083293def807dd1b3abd 2023-07-12 19:17:15,443 DEBUG [HFileArchiver-3] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/.tmp/data/default/Group_testTableMoveTruncateAndDrop/9e3d445b811f09c5dd148ff8da779214 2023-07-12 19:17:15,443 DEBUG [HFileArchiver-5] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/.tmp/data/default/Group_testTableMoveTruncateAndDrop/1d0045c2ebb63d729efb387c54da42d2 2023-07-12 19:17:15,443 DEBUG [HFileArchiver-7] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/.tmp/data/default/Group_testTableMoveTruncateAndDrop/63aebfefe46c17a5b69f6ee40592df33 2023-07-12 19:17:15,443 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived Group_testTableMoveTruncateAndDrop regions 2023-07-12 19:17:15,503 DEBUG [PEWorker-3] util.FSTableDescriptors(570): Wrote into hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/.tmp/data/default/Group_testTableMoveTruncateAndDrop/.tabledesc/.tableinfo.0000000001 2023-07-12 19:17:15,505 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(7675): creating {ENCODED => 500ec7f989dfe7024824e612e29163c0, NAME => 'Group_testTableMoveTruncateAndDrop,,1689189435357.500ec7f989dfe7024824e612e29163c0.', STARTKEY => '', ENDKEY => 'aaaaa'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/.tmp 2023-07-12 19:17:15,508 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(7675): creating {ENCODED => a2d9c9d41295083293def807dd1b3abd, NAME => 'Group_testTableMoveTruncateAndDrop,aaaaa,1689189435357.a2d9c9d41295083293def807dd1b3abd.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/.tmp 2023-07-12 19:17:15,525 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(7675): creating {ENCODED => 63aebfefe46c17a5b69f6ee40592df33, NAME => 'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689189435357.63aebfefe46c17a5b69f6ee40592df33.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => 
{REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/.tmp 2023-07-12 19:17:15,596 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.MasterRpcServices(1230): Checking to see if procedure is done pid=52 2023-07-12 19:17:15,603 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,,1689189435357.500ec7f989dfe7024824e612e29163c0.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 19:17:15,603 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1604): Closing 500ec7f989dfe7024824e612e29163c0, disabling compactions & flushes 2023-07-12 19:17:15,603 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,,1689189435357.500ec7f989dfe7024824e612e29163c0. 2023-07-12 19:17:15,603 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,,1689189435357.500ec7f989dfe7024824e612e29163c0. 2023-07-12 19:17:15,603 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,,1689189435357.500ec7f989dfe7024824e612e29163c0. after waiting 0 ms 2023-07-12 19:17:15,603 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,,1689189435357.500ec7f989dfe7024824e612e29163c0. 2023-07-12 19:17:15,603 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,,1689189435357.500ec7f989dfe7024824e612e29163c0. 
2023-07-12 19:17:15,604 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1558): Region close journal for 500ec7f989dfe7024824e612e29163c0: 2023-07-12 19:17:15,604 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(7675): creating {ENCODED => 1d0045c2ebb63d729efb387c54da42d2, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689189435357.1d0045c2ebb63d729efb387c54da42d2.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/.tmp 2023-07-12 19:17:15,644 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,aaaaa,1689189435357.a2d9c9d41295083293def807dd1b3abd.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 19:17:15,644 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1604): Closing a2d9c9d41295083293def807dd1b3abd, disabling compactions & flushes 2023-07-12 19:17:15,644 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,aaaaa,1689189435357.a2d9c9d41295083293def807dd1b3abd. 2023-07-12 19:17:15,645 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689189435357.a2d9c9d41295083293def807dd1b3abd. 2023-07-12 19:17:15,645 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689189435357.a2d9c9d41295083293def807dd1b3abd. after waiting 0 ms 2023-07-12 19:17:15,645 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,aaaaa,1689189435357.a2d9c9d41295083293def807dd1b3abd. 2023-07-12 19:17:15,645 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,aaaaa,1689189435357.a2d9c9d41295083293def807dd1b3abd. 
2023-07-12 19:17:15,645 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1558): Region close journal for a2d9c9d41295083293def807dd1b3abd: 2023-07-12 19:17:15,646 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(7675): creating {ENCODED => 9e3d445b811f09c5dd148ff8da779214, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1689189435357.9e3d445b811f09c5dd148ff8da779214.', STARTKEY => 'zzzzz', ENDKEY => ''}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/.tmp 2023-07-12 19:17:15,655 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689189435357.63aebfefe46c17a5b69f6ee40592df33.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 19:17:15,655 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1604): Closing 63aebfefe46c17a5b69f6ee40592df33, disabling compactions & flushes 2023-07-12 19:17:15,656 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689189435357.63aebfefe46c17a5b69f6ee40592df33. 2023-07-12 19:17:15,656 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689189435357.63aebfefe46c17a5b69f6ee40592df33. 2023-07-12 19:17:15,656 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689189435357.63aebfefe46c17a5b69f6ee40592df33. after waiting 0 ms 2023-07-12 19:17:15,656 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689189435357.63aebfefe46c17a5b69f6ee40592df33. 2023-07-12 19:17:15,656 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689189435357.63aebfefe46c17a5b69f6ee40592df33. 
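
The "creating {ENCODED => ...}" records print the full column family descriptor the new regions are built with (NAME => 'f', BLOOMFILTER => 'NONE', VERSIONS => '1', BLOCKSIZE => '65536', and so on). For reference, here is a sketch that rebuilds an equivalent descriptor with the client builder API; it is only a reconstruction from the logged attributes, not code taken from the test.

    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptor;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.io.compress.Compression;
    import org.apache.hadoop.hbase.io.encoding.DataBlockEncoding;
    import org.apache.hadoop.hbase.regionserver.BloomType;
    import org.apache.hadoop.hbase.util.Bytes;

    public class FamilyDescriptorDemo {
      public static void main(String[] args) {
        // Rebuild the 'f' family as the creation records describe it.
        ColumnFamilyDescriptor f = ColumnFamilyDescriptorBuilder.newBuilder(Bytes.toBytes("f"))
            .setBloomFilterType(BloomType.NONE)              // BLOOMFILTER => 'NONE'
            .setInMemory(false)                              // IN_MEMORY => 'false'
            .setMaxVersions(1)                               // VERSIONS => '1'
            .setDataBlockEncoding(DataBlockEncoding.NONE)    // DATA_BLOCK_ENCODING => 'NONE'
            .setCompressionType(Compression.Algorithm.NONE)  // COMPRESSION => 'NONE'
            .setBlockCacheEnabled(true)                      // BLOCKCACHE => 'true'
            .setBlocksize(65536)                             // BLOCKSIZE => '65536'
            .setScope(0)                                     // REPLICATION_SCOPE => '0'
            .build();
        System.out.println(f);  // prints a descriptor string similar to the one in the log
      }
    }
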
2023-07-12 19:17:15,656 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1558): Region close journal for 63aebfefe46c17a5b69f6ee40592df33: 2023-07-12 19:17:15,687 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689189435357.1d0045c2ebb63d729efb387c54da42d2.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 19:17:15,687 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1604): Closing 1d0045c2ebb63d729efb387c54da42d2, disabling compactions & flushes 2023-07-12 19:17:15,687 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689189435357.1d0045c2ebb63d729efb387c54da42d2. 2023-07-12 19:17:15,687 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689189435357.1d0045c2ebb63d729efb387c54da42d2. 2023-07-12 19:17:15,687 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689189435357.1d0045c2ebb63d729efb387c54da42d2. after waiting 0 ms 2023-07-12 19:17:15,688 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689189435357.1d0045c2ebb63d729efb387c54da42d2. 2023-07-12 19:17:15,688 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689189435357.1d0045c2ebb63d729efb387c54da42d2. 2023-07-12 19:17:15,688 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1558): Region close journal for 1d0045c2ebb63d729efb387c54da42d2: 2023-07-12 19:17:15,718 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,zzzzz,1689189435357.9e3d445b811f09c5dd148ff8da779214.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 19:17:15,718 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1604): Closing 9e3d445b811f09c5dd148ff8da779214, disabling compactions & flushes 2023-07-12 19:17:15,718 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,zzzzz,1689189435357.9e3d445b811f09c5dd148ff8da779214. 2023-07-12 19:17:15,718 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689189435357.9e3d445b811f09c5dd148ff8da779214. 2023-07-12 19:17:15,718 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689189435357.9e3d445b811f09c5dd148ff8da779214. 
after waiting 0 ms 2023-07-12 19:17:15,718 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,zzzzz,1689189435357.9e3d445b811f09c5dd148ff8da779214. 2023-07-12 19:17:15,718 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,zzzzz,1689189435357.9e3d445b811f09c5dd148ff8da779214. 2023-07-12 19:17:15,718 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1558): Region close journal for 9e3d445b811f09c5dd148ff8da779214: 2023-07-12 19:17:15,722 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,,1689189435357.500ec7f989dfe7024824e612e29163c0.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689189435722"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689189435722"}]},"ts":"1689189435722"} 2023-07-12 19:17:15,722 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689189435357.a2d9c9d41295083293def807dd1b3abd.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689189435722"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689189435722"}]},"ts":"1689189435722"} 2023-07-12 19:17:15,723 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689189435357.63aebfefe46c17a5b69f6ee40592df33.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689189435722"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689189435722"}]},"ts":"1689189435722"} 2023-07-12 19:17:15,723 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689189435357.1d0045c2ebb63d729efb387c54da42d2.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689189435722"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689189435722"}]},"ts":"1689189435722"} 2023-07-12 19:17:15,723 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689189435357.9e3d445b811f09c5dd148ff8da779214.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689189435722"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689189435722"}]},"ts":"1689189435722"} 2023-07-12 19:17:15,726 INFO [PEWorker-3] hbase.MetaTableAccessor(1496): Added 5 regions to meta. 
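
With the five regioninfo/state Puts applied, hbase:meta once again describes five regions for the table. A small sketch of how a client could list those regions and confirm the preserved boundaries; only the table name comes from the log.

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.RegionInfo;
    import org.apache.hadoop.hbase.util.Bytes;

    public class ListRegions {
      public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             Admin admin = conn.getAdmin()) {
          // Lists the regions the master just registered in hbase:meta for this table.
          for (RegionInfo ri : admin.getRegions(TableName.valueOf("Group_testTableMoveTruncateAndDrop"))) {
            System.out.println(ri.getEncodedName()
                + " [" + Bytes.toStringBinary(ri.getStartKey())
                + ", " + Bytes.toStringBinary(ri.getEndKey()) + ")");
          }
        }
      }
    }
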
2023-07-12 19:17:15,728 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689189435728"}]},"ts":"1689189435728"} 2023-07-12 19:17:15,730 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=ENABLING in hbase:meta 2023-07-12 19:17:15,734 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase20.apache.org=0} racks are {/default-rack=0} 2023-07-12 19:17:15,734 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-12 19:17:15,734 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-12 19:17:15,734 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-12 19:17:15,737 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=53, ppid=52, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=500ec7f989dfe7024824e612e29163c0, ASSIGN}, {pid=54, ppid=52, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=a2d9c9d41295083293def807dd1b3abd, ASSIGN}, {pid=55, ppid=52, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=63aebfefe46c17a5b69f6ee40592df33, ASSIGN}, {pid=56, ppid=52, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=1d0045c2ebb63d729efb387c54da42d2, ASSIGN}, {pid=57, ppid=52, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=9e3d445b811f09c5dd148ff8da779214, ASSIGN}] 2023-07-12 19:17:15,740 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=55, ppid=52, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=63aebfefe46c17a5b69f6ee40592df33, ASSIGN 2023-07-12 19:17:15,741 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=54, ppid=52, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=a2d9c9d41295083293def807dd1b3abd, ASSIGN 2023-07-12 19:17:15,741 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=53, ppid=52, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=500ec7f989dfe7024824e612e29163c0, ASSIGN 2023-07-12 19:17:15,741 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=57, ppid=52, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=9e3d445b811f09c5dd148ff8da779214, ASSIGN 2023-07-12 19:17:15,741 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=56, ppid=52, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=1d0045c2ebb63d729efb387c54da42d2, ASSIGN 2023-07-12 19:17:15,742 INFO [PEWorker-4] 
assignment.TransitRegionStateProcedure(193): Starting pid=55, ppid=52, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=63aebfefe46c17a5b69f6ee40592df33, ASSIGN; state=OFFLINE, location=jenkins-hbase20.apache.org,36571,1689189426727; forceNewPlan=false, retain=false 2023-07-12 19:17:15,742 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=54, ppid=52, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=a2d9c9d41295083293def807dd1b3abd, ASSIGN; state=OFFLINE, location=jenkins-hbase20.apache.org,36311,1689189430768; forceNewPlan=false, retain=false 2023-07-12 19:17:15,742 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=57, ppid=52, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=9e3d445b811f09c5dd148ff8da779214, ASSIGN; state=OFFLINE, location=jenkins-hbase20.apache.org,36571,1689189426727; forceNewPlan=false, retain=false 2023-07-12 19:17:15,742 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=56, ppid=52, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=1d0045c2ebb63d729efb387c54da42d2, ASSIGN; state=OFFLINE, location=jenkins-hbase20.apache.org,36311,1689189430768; forceNewPlan=false, retain=false 2023-07-12 19:17:15,742 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=53, ppid=52, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=500ec7f989dfe7024824e612e29163c0, ASSIGN; state=OFFLINE, location=jenkins-hbase20.apache.org,36571,1689189426727; forceNewPlan=false, retain=false 2023-07-12 19:17:15,892 INFO [jenkins-hbase20:33033] balancer.BaseLoadBalancer(1545): Reassigned 5 regions. 5 retained the pre-restart assignment. 
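
The balancer has produced an assignment plan and the five ASSIGN procedures (pid=53..57) are starting; the rest of the log shows their OpenRegionProcedure children opening the regions on the assigned region servers. A caller would normally just wait for the table to become available again, roughly as sketched below; the polling loop and interval are assumptions, not taken from the test.

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;

    public class WaitForAssignment {
      public static void main(String[] args) throws Exception {
        TableName table = TableName.valueOf("Group_testTableMoveTruncateAndDrop");
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             Admin admin = conn.getAdmin()) {
          // Poll until every region of the table is assigned and reachable,
          // i.e. until the ASSIGN procedures above have all finished.
          while (!admin.isTableAvailable(table)) {
            Thread.sleep(100);
          }
          System.out.println("all regions of " + table + " are assigned and open");
        }
      }
    }
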
2023-07-12 19:17:15,897 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=57 updating hbase:meta row=9e3d445b811f09c5dd148ff8da779214, regionState=OPENING, regionLocation=jenkins-hbase20.apache.org,36571,1689189426727 2023-07-12 19:17:15,897 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=55 updating hbase:meta row=63aebfefe46c17a5b69f6ee40592df33, regionState=OPENING, regionLocation=jenkins-hbase20.apache.org,36571,1689189426727 2023-07-12 19:17:15,897 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689189435357.9e3d445b811f09c5dd148ff8da779214.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689189435897"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689189435897"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689189435897"}]},"ts":"1689189435897"} 2023-07-12 19:17:15,897 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689189435357.63aebfefe46c17a5b69f6ee40592df33.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689189435897"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689189435897"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689189435897"}]},"ts":"1689189435897"} 2023-07-12 19:17:15,897 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=53 updating hbase:meta row=500ec7f989dfe7024824e612e29163c0, regionState=OPENING, regionLocation=jenkins-hbase20.apache.org,36571,1689189426727 2023-07-12 19:17:15,897 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=54 updating hbase:meta row=a2d9c9d41295083293def807dd1b3abd, regionState=OPENING, regionLocation=jenkins-hbase20.apache.org,36311,1689189430768 2023-07-12 19:17:15,898 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.MasterRpcServices(1230): Checking to see if procedure is done pid=52 2023-07-12 19:17:15,897 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=56 updating hbase:meta row=1d0045c2ebb63d729efb387c54da42d2, regionState=OPENING, regionLocation=jenkins-hbase20.apache.org,36311,1689189430768 2023-07-12 19:17:15,898 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689189435357.a2d9c9d41295083293def807dd1b3abd.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689189435897"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689189435897"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689189435897"}]},"ts":"1689189435897"} 2023-07-12 19:17:15,898 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689189435357.1d0045c2ebb63d729efb387c54da42d2.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689189435897"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689189435897"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689189435897"}]},"ts":"1689189435897"} 2023-07-12 19:17:15,898 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,,1689189435357.500ec7f989dfe7024824e612e29163c0.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689189435897"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689189435897"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689189435897"}]},"ts":"1689189435897"} 2023-07-12 
19:17:15,902 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=58, ppid=57, state=RUNNABLE; OpenRegionProcedure 9e3d445b811f09c5dd148ff8da779214, server=jenkins-hbase20.apache.org,36571,1689189426727}] 2023-07-12 19:17:15,904 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=59, ppid=55, state=RUNNABLE; OpenRegionProcedure 63aebfefe46c17a5b69f6ee40592df33, server=jenkins-hbase20.apache.org,36571,1689189426727}] 2023-07-12 19:17:15,905 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=60, ppid=54, state=RUNNABLE; OpenRegionProcedure a2d9c9d41295083293def807dd1b3abd, server=jenkins-hbase20.apache.org,36311,1689189430768}] 2023-07-12 19:17:15,907 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=61, ppid=56, state=RUNNABLE; OpenRegionProcedure 1d0045c2ebb63d729efb387c54da42d2, server=jenkins-hbase20.apache.org,36311,1689189430768}] 2023-07-12 19:17:15,907 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=62, ppid=53, state=RUNNABLE; OpenRegionProcedure 500ec7f989dfe7024824e612e29163c0, server=jenkins-hbase20.apache.org,36571,1689189426727}] 2023-07-12 19:17:16,060 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,,1689189435357.500ec7f989dfe7024824e612e29163c0. 2023-07-12 19:17:16,060 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 500ec7f989dfe7024824e612e29163c0, NAME => 'Group_testTableMoveTruncateAndDrop,,1689189435357.500ec7f989dfe7024824e612e29163c0.', STARTKEY => '', ENDKEY => 'aaaaa'} 2023-07-12 19:17:16,061 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 500ec7f989dfe7024824e612e29163c0 2023-07-12 19:17:16,061 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,,1689189435357.500ec7f989dfe7024824e612e29163c0.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 19:17:16,061 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7894): checking encryption for 500ec7f989dfe7024824e612e29163c0 2023-07-12 19:17:16,061 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7897): checking classloading for 500ec7f989dfe7024824e612e29163c0 2023-07-12 19:17:16,063 INFO [StoreOpener-500ec7f989dfe7024824e612e29163c0-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 500ec7f989dfe7024824e612e29163c0 2023-07-12 19:17:16,065 DEBUG [StoreOpener-500ec7f989dfe7024824e612e29163c0-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/data/default/Group_testTableMoveTruncateAndDrop/500ec7f989dfe7024824e612e29163c0/f 2023-07-12 19:17:16,065 DEBUG [StoreOpener-500ec7f989dfe7024824e612e29163c0-1] util.CommonFSUtils(522): Set storagePolicy=HOT for 
path=hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/data/default/Group_testTableMoveTruncateAndDrop/500ec7f989dfe7024824e612e29163c0/f 2023-07-12 19:17:16,065 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,aaaaa,1689189435357.a2d9c9d41295083293def807dd1b3abd. 2023-07-12 19:17:16,066 INFO [StoreOpener-500ec7f989dfe7024824e612e29163c0-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 500ec7f989dfe7024824e612e29163c0 columnFamilyName f 2023-07-12 19:17:16,066 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => a2d9c9d41295083293def807dd1b3abd, NAME => 'Group_testTableMoveTruncateAndDrop,aaaaa,1689189435357.a2d9c9d41295083293def807dd1b3abd.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'} 2023-07-12 19:17:16,066 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop a2d9c9d41295083293def807dd1b3abd 2023-07-12 19:17:16,066 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,aaaaa,1689189435357.a2d9c9d41295083293def807dd1b3abd.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 19:17:16,067 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7894): checking encryption for a2d9c9d41295083293def807dd1b3abd 2023-07-12 19:17:16,067 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7897): checking classloading for a2d9c9d41295083293def807dd1b3abd 2023-07-12 19:17:16,067 INFO [StoreOpener-500ec7f989dfe7024824e612e29163c0-1] regionserver.HStore(310): Store=500ec7f989dfe7024824e612e29163c0/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 19:17:16,068 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/data/default/Group_testTableMoveTruncateAndDrop/500ec7f989dfe7024824e612e29163c0 2023-07-12 19:17:16,068 INFO [StoreOpener-a2d9c9d41295083293def807dd1b3abd-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region a2d9c9d41295083293def807dd1b3abd 2023-07-12 19:17:16,068 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under 
hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/data/default/Group_testTableMoveTruncateAndDrop/500ec7f989dfe7024824e612e29163c0 2023-07-12 19:17:16,070 DEBUG [StoreOpener-a2d9c9d41295083293def807dd1b3abd-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/data/default/Group_testTableMoveTruncateAndDrop/a2d9c9d41295083293def807dd1b3abd/f 2023-07-12 19:17:16,070 DEBUG [StoreOpener-a2d9c9d41295083293def807dd1b3abd-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/data/default/Group_testTableMoveTruncateAndDrop/a2d9c9d41295083293def807dd1b3abd/f 2023-07-12 19:17:16,071 INFO [StoreOpener-a2d9c9d41295083293def807dd1b3abd-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region a2d9c9d41295083293def807dd1b3abd columnFamilyName f 2023-07-12 19:17:16,071 INFO [StoreOpener-a2d9c9d41295083293def807dd1b3abd-1] regionserver.HStore(310): Store=a2d9c9d41295083293def807dd1b3abd/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 19:17:16,072 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/data/default/Group_testTableMoveTruncateAndDrop/a2d9c9d41295083293def807dd1b3abd 2023-07-12 19:17:16,073 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/data/default/Group_testTableMoveTruncateAndDrop/a2d9c9d41295083293def807dd1b3abd 2023-07-12 19:17:16,073 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1055): writing seq id for 500ec7f989dfe7024824e612e29163c0 2023-07-12 19:17:16,078 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1055): writing seq id for a2d9c9d41295083293def807dd1b3abd 2023-07-12 19:17:16,080 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/data/default/Group_testTableMoveTruncateAndDrop/a2d9c9d41295083293def807dd1b3abd/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-12 19:17:16,081 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1072): Opened a2d9c9d41295083293def807dd1b3abd; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11106563200, jitterRate=0.03437930345535278}}}, 
FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-12 19:17:16,081 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(965): Region open journal for a2d9c9d41295083293def807dd1b3abd: 2023-07-12 19:17:16,083 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,aaaaa,1689189435357.a2d9c9d41295083293def807dd1b3abd., pid=60, masterSystemTime=1689189436061 2023-07-12 19:17:16,085 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,aaaaa,1689189435357.a2d9c9d41295083293def807dd1b3abd. 2023-07-12 19:17:16,085 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,aaaaa,1689189435357.a2d9c9d41295083293def807dd1b3abd. 2023-07-12 19:17:16,085 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689189435357.1d0045c2ebb63d729efb387c54da42d2. 2023-07-12 19:17:16,086 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1d0045c2ebb63d729efb387c54da42d2, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689189435357.1d0045c2ebb63d729efb387c54da42d2.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'} 2023-07-12 19:17:16,086 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 1d0045c2ebb63d729efb387c54da42d2 2023-07-12 19:17:16,086 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689189435357.1d0045c2ebb63d729efb387c54da42d2.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 19:17:16,086 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7894): checking encryption for 1d0045c2ebb63d729efb387c54da42d2 2023-07-12 19:17:16,086 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7897): checking classloading for 1d0045c2ebb63d729efb387c54da42d2 2023-07-12 19:17:16,086 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/data/default/Group_testTableMoveTruncateAndDrop/500ec7f989dfe7024824e612e29163c0/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-12 19:17:16,086 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=54 updating hbase:meta row=a2d9c9d41295083293def807dd1b3abd, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase20.apache.org,36311,1689189430768 2023-07-12 19:17:16,087 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689189435357.a2d9c9d41295083293def807dd1b3abd.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689189436086"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689189436086"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689189436086"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689189436086"}]},"ts":"1689189436086"} 2023-07-12 19:17:16,088 INFO 
[RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1072): Opened 500ec7f989dfe7024824e612e29163c0; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10122133280, jitterRate=-0.05730287730693817}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-12 19:17:16,088 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(965): Region open journal for 500ec7f989dfe7024824e612e29163c0: 2023-07-12 19:17:16,089 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,,1689189435357.500ec7f989dfe7024824e612e29163c0., pid=62, masterSystemTime=1689189436055 2023-07-12 19:17:16,091 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,,1689189435357.500ec7f989dfe7024824e612e29163c0. 2023-07-12 19:17:16,091 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,,1689189435357.500ec7f989dfe7024824e612e29163c0. 2023-07-12 19:17:16,091 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,zzzzz,1689189435357.9e3d445b811f09c5dd148ff8da779214. 2023-07-12 19:17:16,092 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 9e3d445b811f09c5dd148ff8da779214, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1689189435357.9e3d445b811f09c5dd148ff8da779214.', STARTKEY => 'zzzzz', ENDKEY => ''} 2023-07-12 19:17:16,092 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=53 updating hbase:meta row=500ec7f989dfe7024824e612e29163c0, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase20.apache.org,36571,1689189426727 2023-07-12 19:17:16,092 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,,1689189435357.500ec7f989dfe7024824e612e29163c0.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689189436092"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689189436092"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689189436092"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689189436092"}]},"ts":"1689189436092"} 2023-07-12 19:17:16,092 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 9e3d445b811f09c5dd148ff8da779214 2023-07-12 19:17:16,092 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,zzzzz,1689189435357.9e3d445b811f09c5dd148ff8da779214.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 19:17:16,092 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7894): checking encryption for 9e3d445b811f09c5dd148ff8da779214 2023-07-12 19:17:16,093 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7897): checking classloading for 9e3d445b811f09c5dd148ff8da779214 2023-07-12 19:17:16,093 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=60, resume processing 
ppid=54 2023-07-12 19:17:16,093 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=60, ppid=54, state=SUCCESS; OpenRegionProcedure a2d9c9d41295083293def807dd1b3abd, server=jenkins-hbase20.apache.org,36311,1689189430768 in 185 msec 2023-07-12 19:17:16,097 INFO [StoreOpener-9e3d445b811f09c5dd148ff8da779214-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 9e3d445b811f09c5dd148ff8da779214 2023-07-12 19:17:16,099 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=54, ppid=52, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=a2d9c9d41295083293def807dd1b3abd, ASSIGN in 356 msec 2023-07-12 19:17:16,100 INFO [StoreOpener-1d0045c2ebb63d729efb387c54da42d2-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 1d0045c2ebb63d729efb387c54da42d2 2023-07-12 19:17:16,105 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=62, resume processing ppid=53 2023-07-12 19:17:16,105 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=62, ppid=53, state=SUCCESS; OpenRegionProcedure 500ec7f989dfe7024824e612e29163c0, server=jenkins-hbase20.apache.org,36571,1689189426727 in 189 msec 2023-07-12 19:17:16,106 DEBUG [StoreOpener-9e3d445b811f09c5dd148ff8da779214-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/data/default/Group_testTableMoveTruncateAndDrop/9e3d445b811f09c5dd148ff8da779214/f 2023-07-12 19:17:16,107 DEBUG [StoreOpener-9e3d445b811f09c5dd148ff8da779214-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/data/default/Group_testTableMoveTruncateAndDrop/9e3d445b811f09c5dd148ff8da779214/f 2023-07-12 19:17:16,108 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=53, ppid=52, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=500ec7f989dfe7024824e612e29163c0, ASSIGN in 371 msec 2023-07-12 19:17:16,108 INFO [StoreOpener-9e3d445b811f09c5dd148ff8da779214-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 9e3d445b811f09c5dd148ff8da779214 columnFamilyName f 2023-07-12 19:17:16,109 INFO [StoreOpener-9e3d445b811f09c5dd148ff8da779214-1] regionserver.HStore(310): Store=9e3d445b811f09c5dd148ff8da779214/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 
19:17:16,109 DEBUG [StoreOpener-1d0045c2ebb63d729efb387c54da42d2-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/data/default/Group_testTableMoveTruncateAndDrop/1d0045c2ebb63d729efb387c54da42d2/f 2023-07-12 19:17:16,109 DEBUG [StoreOpener-1d0045c2ebb63d729efb387c54da42d2-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/data/default/Group_testTableMoveTruncateAndDrop/1d0045c2ebb63d729efb387c54da42d2/f 2023-07-12 19:17:16,110 INFO [StoreOpener-1d0045c2ebb63d729efb387c54da42d2-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1d0045c2ebb63d729efb387c54da42d2 columnFamilyName f 2023-07-12 19:17:16,110 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/data/default/Group_testTableMoveTruncateAndDrop/9e3d445b811f09c5dd148ff8da779214 2023-07-12 19:17:16,111 INFO [StoreOpener-1d0045c2ebb63d729efb387c54da42d2-1] regionserver.HStore(310): Store=1d0045c2ebb63d729efb387c54da42d2/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 19:17:16,111 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/data/default/Group_testTableMoveTruncateAndDrop/9e3d445b811f09c5dd148ff8da779214 2023-07-12 19:17:16,112 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/data/default/Group_testTableMoveTruncateAndDrop/1d0045c2ebb63d729efb387c54da42d2 2023-07-12 19:17:16,113 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/data/default/Group_testTableMoveTruncateAndDrop/1d0045c2ebb63d729efb387c54da42d2 2023-07-12 19:17:16,115 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1055): writing seq id for 9e3d445b811f09c5dd148ff8da779214 2023-07-12 19:17:16,117 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1055): writing seq id for 1d0045c2ebb63d729efb387c54da42d2 2023-07-12 19:17:16,143 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote 
file=hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/data/default/Group_testTableMoveTruncateAndDrop/1d0045c2ebb63d729efb387c54da42d2/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-12 19:17:16,144 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1072): Opened 1d0045c2ebb63d729efb387c54da42d2; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10247032640, jitterRate=-0.04567071795463562}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-12 19:17:16,144 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(965): Region open journal for 1d0045c2ebb63d729efb387c54da42d2: 2023-07-12 19:17:16,145 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689189435357.1d0045c2ebb63d729efb387c54da42d2., pid=61, masterSystemTime=1689189436061 2023-07-12 19:17:16,149 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689189435357.1d0045c2ebb63d729efb387c54da42d2. 2023-07-12 19:17:16,149 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689189435357.1d0045c2ebb63d729efb387c54da42d2. 2023-07-12 19:17:16,151 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=56 updating hbase:meta row=1d0045c2ebb63d729efb387c54da42d2, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase20.apache.org,36311,1689189430768 2023-07-12 19:17:16,151 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/data/default/Group_testTableMoveTruncateAndDrop/9e3d445b811f09c5dd148ff8da779214/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-12 19:17:16,151 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689189435357.1d0045c2ebb63d729efb387c54da42d2.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689189436151"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689189436151"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689189436151"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689189436151"}]},"ts":"1689189436151"} 2023-07-12 19:17:16,152 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1072): Opened 9e3d445b811f09c5dd148ff8da779214; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9519053600, jitterRate=-0.11346904933452606}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-12 19:17:16,152 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(965): Region open journal for 9e3d445b811f09c5dd148ff8da779214: 2023-07-12 19:17:16,154 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,zzzzz,1689189435357.9e3d445b811f09c5dd148ff8da779214., pid=58, masterSystemTime=1689189436055 2023-07-12 19:17:16,157 DEBUG 
[RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,zzzzz,1689189435357.9e3d445b811f09c5dd148ff8da779214. 2023-07-12 19:17:16,158 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,zzzzz,1689189435357.9e3d445b811f09c5dd148ff8da779214. 2023-07-12 19:17:16,158 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689189435357.63aebfefe46c17a5b69f6ee40592df33. 2023-07-12 19:17:16,158 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 63aebfefe46c17a5b69f6ee40592df33, NAME => 'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689189435357.63aebfefe46c17a5b69f6ee40592df33.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'} 2023-07-12 19:17:16,158 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 63aebfefe46c17a5b69f6ee40592df33 2023-07-12 19:17:16,158 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689189435357.63aebfefe46c17a5b69f6ee40592df33.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 19:17:16,158 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7894): checking encryption for 63aebfefe46c17a5b69f6ee40592df33 2023-07-12 19:17:16,159 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7897): checking classloading for 63aebfefe46c17a5b69f6ee40592df33 2023-07-12 19:17:16,159 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=57 updating hbase:meta row=9e3d445b811f09c5dd148ff8da779214, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase20.apache.org,36571,1689189426727 2023-07-12 19:17:16,159 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689189435357.9e3d445b811f09c5dd148ff8da779214.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689189436159"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689189436159"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689189436159"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689189436159"}]},"ts":"1689189436159"} 2023-07-12 19:17:16,161 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=61, resume processing ppid=56 2023-07-12 19:17:16,161 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=61, ppid=56, state=SUCCESS; OpenRegionProcedure 1d0045c2ebb63d729efb387c54da42d2, server=jenkins-hbase20.apache.org,36311,1689189430768 in 248 msec 2023-07-12 19:17:16,164 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=56, ppid=52, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=1d0045c2ebb63d729efb387c54da42d2, ASSIGN in 424 msec 2023-07-12 19:17:16,165 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=58, resume processing ppid=57 2023-07-12 19:17:16,165 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=58, ppid=57, state=SUCCESS; OpenRegionProcedure 
9e3d445b811f09c5dd148ff8da779214, server=jenkins-hbase20.apache.org,36571,1689189426727 in 260 msec 2023-07-12 19:17:16,168 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=57, ppid=52, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=9e3d445b811f09c5dd148ff8da779214, ASSIGN in 428 msec 2023-07-12 19:17:16,179 INFO [StoreOpener-63aebfefe46c17a5b69f6ee40592df33-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 63aebfefe46c17a5b69f6ee40592df33 2023-07-12 19:17:16,182 DEBUG [StoreOpener-63aebfefe46c17a5b69f6ee40592df33-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/data/default/Group_testTableMoveTruncateAndDrop/63aebfefe46c17a5b69f6ee40592df33/f 2023-07-12 19:17:16,182 DEBUG [StoreOpener-63aebfefe46c17a5b69f6ee40592df33-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/data/default/Group_testTableMoveTruncateAndDrop/63aebfefe46c17a5b69f6ee40592df33/f 2023-07-12 19:17:16,183 INFO [StoreOpener-63aebfefe46c17a5b69f6ee40592df33-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 63aebfefe46c17a5b69f6ee40592df33 columnFamilyName f 2023-07-12 19:17:16,185 INFO [StoreOpener-63aebfefe46c17a5b69f6ee40592df33-1] regionserver.HStore(310): Store=63aebfefe46c17a5b69f6ee40592df33/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 19:17:16,187 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/data/default/Group_testTableMoveTruncateAndDrop/63aebfefe46c17a5b69f6ee40592df33 2023-07-12 19:17:16,189 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/data/default/Group_testTableMoveTruncateAndDrop/63aebfefe46c17a5b69f6ee40592df33 2023-07-12 19:17:16,199 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1055): writing seq id for 63aebfefe46c17a5b69f6ee40592df33 2023-07-12 19:17:16,218 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/data/default/Group_testTableMoveTruncateAndDrop/63aebfefe46c17a5b69f6ee40592df33/recovered.edits/1.seqid, newMaxSeqId=1, 
maxSeqId=-1 2023-07-12 19:17:16,220 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1072): Opened 63aebfefe46c17a5b69f6ee40592df33; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=12007903200, jitterRate=0.11832313239574432}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-12 19:17:16,220 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(965): Region open journal for 63aebfefe46c17a5b69f6ee40592df33: 2023-07-12 19:17:16,222 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689189435357.63aebfefe46c17a5b69f6ee40592df33., pid=59, masterSystemTime=1689189436055 2023-07-12 19:17:16,230 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689189435357.63aebfefe46c17a5b69f6ee40592df33. 2023-07-12 19:17:16,230 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689189435357.63aebfefe46c17a5b69f6ee40592df33. 2023-07-12 19:17:16,232 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=55 updating hbase:meta row=63aebfefe46c17a5b69f6ee40592df33, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase20.apache.org,36571,1689189426727 2023-07-12 19:17:16,232 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689189435357.63aebfefe46c17a5b69f6ee40592df33.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689189436232"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689189436232"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689189436232"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689189436232"}]},"ts":"1689189436232"} 2023-07-12 19:17:16,241 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=59, resume processing ppid=55 2023-07-12 19:17:16,241 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=59, ppid=55, state=SUCCESS; OpenRegionProcedure 63aebfefe46c17a5b69f6ee40592df33, server=jenkins-hbase20.apache.org,36571,1689189426727 in 332 msec 2023-07-12 19:17:16,257 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=55, resume processing ppid=52 2023-07-12 19:17:16,258 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=55, ppid=52, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=63aebfefe46c17a5b69f6ee40592df33, ASSIGN in 504 msec 2023-07-12 19:17:16,258 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689189436257"}]},"ts":"1689189436257"} 2023-07-12 19:17:16,260 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=ENABLED in hbase:meta 2023-07-12 19:17:16,262 DEBUG [PEWorker-1] procedure.TruncateTableProcedure(145): truncate 'Group_testTableMoveTruncateAndDrop' completed 2023-07-12 19:17:16,268 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=52, state=SUCCESS; TruncateTableProcedure 
(table=Group_testTableMoveTruncateAndDrop preserveSplits=true) in 989 msec 2023-07-12 19:17:16,401 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.MasterRpcServices(1230): Checking to see if procedure is done pid=52 2023-07-12 19:17:16,402 INFO [Listener at localhost.localdomain/34239] client.HBaseAdmin$TableFuture(3541): Operation: TRUNCATE, Table Name: default:Group_testTableMoveTruncateAndDrop, procId: 52 completed 2023-07-12 19:17:16,403 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//148.251.75.209 initiates rsgroup info retrieval, group=Group_testTableMoveTruncateAndDrop_806716229 2023-07-12 19:17:16,403 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 19:17:16,405 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//148.251.75.209 initiates rsgroup info retrieval, group=Group_testTableMoveTruncateAndDrop_806716229 2023-07-12 19:17:16,405 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 19:17:16,406 INFO [Listener at localhost.localdomain/34239] client.HBaseAdmin$15(890): Started disable of Group_testTableMoveTruncateAndDrop 2023-07-12 19:17:16,407 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.HMaster$11(2418): Client=jenkins//148.251.75.209 disable Group_testTableMoveTruncateAndDrop 2023-07-12 19:17:16,408 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] procedure2.ProcedureExecutor(1029): Stored pid=63, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-12 19:17:16,413 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.MasterRpcServices(1230): Checking to see if procedure is done pid=63 2023-07-12 19:17:16,421 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689189436421"}]},"ts":"1689189436421"} 2023-07-12 19:17:16,424 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=DISABLING in hbase:meta 2023-07-12 19:17:16,425 INFO [PEWorker-2] procedure.DisableTableProcedure(293): Set Group_testTableMoveTruncateAndDrop to state=DISABLING 2023-07-12 19:17:16,427 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=64, ppid=63, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=500ec7f989dfe7024824e612e29163c0, UNASSIGN}, {pid=65, ppid=63, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=a2d9c9d41295083293def807dd1b3abd, UNASSIGN}, {pid=66, ppid=63, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=63aebfefe46c17a5b69f6ee40592df33, UNASSIGN}, {pid=67, ppid=63, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, 
region=1d0045c2ebb63d729efb387c54da42d2, UNASSIGN}, {pid=68, ppid=63, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=9e3d445b811f09c5dd148ff8da779214, UNASSIGN}] 2023-07-12 19:17:16,430 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=68, ppid=63, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=9e3d445b811f09c5dd148ff8da779214, UNASSIGN 2023-07-12 19:17:16,431 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=65, ppid=63, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=a2d9c9d41295083293def807dd1b3abd, UNASSIGN 2023-07-12 19:17:16,438 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=67, ppid=63, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=1d0045c2ebb63d729efb387c54da42d2, UNASSIGN 2023-07-12 19:17:16,439 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=66, ppid=63, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=63aebfefe46c17a5b69f6ee40592df33, UNASSIGN 2023-07-12 19:17:16,439 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=64, ppid=63, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=500ec7f989dfe7024824e612e29163c0, UNASSIGN 2023-07-12 19:17:16,439 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=68 updating hbase:meta row=9e3d445b811f09c5dd148ff8da779214, regionState=CLOSING, regionLocation=jenkins-hbase20.apache.org,36571,1689189426727 2023-07-12 19:17:16,440 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689189435357.9e3d445b811f09c5dd148ff8da779214.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689189436439"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689189436439"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689189436439"}]},"ts":"1689189436439"} 2023-07-12 19:17:16,440 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=65 updating hbase:meta row=a2d9c9d41295083293def807dd1b3abd, regionState=CLOSING, regionLocation=jenkins-hbase20.apache.org,36311,1689189430768 2023-07-12 19:17:16,440 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=66 updating hbase:meta row=63aebfefe46c17a5b69f6ee40592df33, regionState=CLOSING, regionLocation=jenkins-hbase20.apache.org,36571,1689189426727 2023-07-12 19:17:16,440 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689189435357.a2d9c9d41295083293def807dd1b3abd.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689189436440"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689189436440"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689189436440"}]},"ts":"1689189436440"} 2023-07-12 19:17:16,440 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689189435357.63aebfefe46c17a5b69f6ee40592df33.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689189436440"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689189436440"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689189436440"}]},"ts":"1689189436440"} 2023-07-12 19:17:16,440 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=64 updating hbase:meta row=500ec7f989dfe7024824e612e29163c0, regionState=CLOSING, regionLocation=jenkins-hbase20.apache.org,36571,1689189426727 2023-07-12 19:17:16,440 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=67 updating hbase:meta row=1d0045c2ebb63d729efb387c54da42d2, regionState=CLOSING, regionLocation=jenkins-hbase20.apache.org,36311,1689189430768 2023-07-12 19:17:16,441 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,,1689189435357.500ec7f989dfe7024824e612e29163c0.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689189436440"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689189436440"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689189436440"}]},"ts":"1689189436440"} 2023-07-12 19:17:16,441 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689189435357.1d0045c2ebb63d729efb387c54da42d2.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689189436440"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689189436440"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689189436440"}]},"ts":"1689189436440"} 2023-07-12 19:17:16,445 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=69, ppid=68, state=RUNNABLE; CloseRegionProcedure 9e3d445b811f09c5dd148ff8da779214, server=jenkins-hbase20.apache.org,36571,1689189426727}] 2023-07-12 19:17:16,447 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=70, ppid=65, state=RUNNABLE; CloseRegionProcedure a2d9c9d41295083293def807dd1b3abd, server=jenkins-hbase20.apache.org,36311,1689189430768}] 2023-07-12 19:17:16,449 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=71, ppid=66, state=RUNNABLE; CloseRegionProcedure 63aebfefe46c17a5b69f6ee40592df33, server=jenkins-hbase20.apache.org,36571,1689189426727}] 2023-07-12 19:17:16,451 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=72, ppid=64, state=RUNNABLE; CloseRegionProcedure 500ec7f989dfe7024824e612e29163c0, server=jenkins-hbase20.apache.org,36571,1689189426727}] 2023-07-12 19:17:16,459 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=73, ppid=67, state=RUNNABLE; CloseRegionProcedure 1d0045c2ebb63d729efb387c54da42d2, server=jenkins-hbase20.apache.org,36311,1689189430768}] 2023-07-12 19:17:16,515 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.MasterRpcServices(1230): Checking to see if procedure is done pid=63 2023-07-12 19:17:16,600 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.UnassignRegionHandler(111): Close 500ec7f989dfe7024824e612e29163c0 2023-07-12 19:17:16,601 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing 500ec7f989dfe7024824e612e29163c0, disabling compactions & flushes 2023-07-12 19:17:16,601 INFO 
[RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,,1689189435357.500ec7f989dfe7024824e612e29163c0. 2023-07-12 19:17:16,601 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,,1689189435357.500ec7f989dfe7024824e612e29163c0. 2023-07-12 19:17:16,601 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,,1689189435357.500ec7f989dfe7024824e612e29163c0. after waiting 0 ms 2023-07-12 19:17:16,601 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,,1689189435357.500ec7f989dfe7024824e612e29163c0. 2023-07-12 19:17:16,606 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.UnassignRegionHandler(111): Close a2d9c9d41295083293def807dd1b3abd 2023-07-12 19:17:16,607 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing a2d9c9d41295083293def807dd1b3abd, disabling compactions & flushes 2023-07-12 19:17:16,607 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,aaaaa,1689189435357.a2d9c9d41295083293def807dd1b3abd. 2023-07-12 19:17:16,607 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689189435357.a2d9c9d41295083293def807dd1b3abd. 2023-07-12 19:17:16,607 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689189435357.a2d9c9d41295083293def807dd1b3abd. after waiting 0 ms 2023-07-12 19:17:16,608 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,aaaaa,1689189435357.a2d9c9d41295083293def807dd1b3abd. 2023-07-12 19:17:16,610 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/data/default/Group_testTableMoveTruncateAndDrop/500ec7f989dfe7024824e612e29163c0/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-12 19:17:16,611 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,,1689189435357.500ec7f989dfe7024824e612e29163c0. 2023-07-12 19:17:16,611 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for 500ec7f989dfe7024824e612e29163c0: 2023-07-12 19:17:16,614 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.UnassignRegionHandler(149): Closed 500ec7f989dfe7024824e612e29163c0 2023-07-12 19:17:16,614 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.UnassignRegionHandler(111): Close 63aebfefe46c17a5b69f6ee40592df33 2023-07-12 19:17:16,615 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing 63aebfefe46c17a5b69f6ee40592df33, disabling compactions & flushes 2023-07-12 19:17:16,615 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689189435357.63aebfefe46c17a5b69f6ee40592df33. 
2023-07-12 19:17:16,615 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689189435357.63aebfefe46c17a5b69f6ee40592df33. 2023-07-12 19:17:16,616 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689189435357.63aebfefe46c17a5b69f6ee40592df33. after waiting 0 ms 2023-07-12 19:17:16,616 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689189435357.63aebfefe46c17a5b69f6ee40592df33. 2023-07-12 19:17:16,616 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=64 updating hbase:meta row=500ec7f989dfe7024824e612e29163c0, regionState=CLOSED 2023-07-12 19:17:16,616 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/data/default/Group_testTableMoveTruncateAndDrop/a2d9c9d41295083293def807dd1b3abd/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-12 19:17:16,616 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,,1689189435357.500ec7f989dfe7024824e612e29163c0.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689189436616"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689189436616"}]},"ts":"1689189436616"} 2023-07-12 19:17:16,617 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,aaaaa,1689189435357.a2d9c9d41295083293def807dd1b3abd. 2023-07-12 19:17:16,617 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for a2d9c9d41295083293def807dd1b3abd: 2023-07-12 19:17:16,618 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.UnassignRegionHandler(149): Closed a2d9c9d41295083293def807dd1b3abd 2023-07-12 19:17:16,619 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.UnassignRegionHandler(111): Close 1d0045c2ebb63d729efb387c54da42d2 2023-07-12 19:17:16,620 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing 1d0045c2ebb63d729efb387c54da42d2, disabling compactions & flushes 2023-07-12 19:17:16,620 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689189435357.1d0045c2ebb63d729efb387c54da42d2. 2023-07-12 19:17:16,620 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689189435357.1d0045c2ebb63d729efb387c54da42d2. 2023-07-12 19:17:16,620 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689189435357.1d0045c2ebb63d729efb387c54da42d2. after waiting 0 ms 2023-07-12 19:17:16,620 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689189435357.1d0045c2ebb63d729efb387c54da42d2. 
2023-07-12 19:17:16,620 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=65 updating hbase:meta row=a2d9c9d41295083293def807dd1b3abd, regionState=CLOSED 2023-07-12 19:17:16,620 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689189435357.a2d9c9d41295083293def807dd1b3abd.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689189436620"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689189436620"}]},"ts":"1689189436620"} 2023-07-12 19:17:16,621 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=72, resume processing ppid=64 2023-07-12 19:17:16,621 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=72, ppid=64, state=SUCCESS; CloseRegionProcedure 500ec7f989dfe7024824e612e29163c0, server=jenkins-hbase20.apache.org,36571,1689189426727 in 166 msec 2023-07-12 19:17:16,621 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/data/default/Group_testTableMoveTruncateAndDrop/63aebfefe46c17a5b69f6ee40592df33/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-12 19:17:16,622 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689189435357.63aebfefe46c17a5b69f6ee40592df33. 2023-07-12 19:17:16,622 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for 63aebfefe46c17a5b69f6ee40592df33: 2023-07-12 19:17:16,623 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=64, ppid=63, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=500ec7f989dfe7024824e612e29163c0, UNASSIGN in 194 msec 2023-07-12 19:17:16,624 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.UnassignRegionHandler(149): Closed 63aebfefe46c17a5b69f6ee40592df33 2023-07-12 19:17:16,624 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.UnassignRegionHandler(111): Close 9e3d445b811f09c5dd148ff8da779214 2023-07-12 19:17:16,625 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing 9e3d445b811f09c5dd148ff8da779214, disabling compactions & flushes 2023-07-12 19:17:16,625 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=70, resume processing ppid=65 2023-07-12 19:17:16,625 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,zzzzz,1689189435357.9e3d445b811f09c5dd148ff8da779214. 2023-07-12 19:17:16,625 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=70, ppid=65, state=SUCCESS; CloseRegionProcedure a2d9c9d41295083293def807dd1b3abd, server=jenkins-hbase20.apache.org,36311,1689189430768 in 175 msec 2023-07-12 19:17:16,625 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689189435357.9e3d445b811f09c5dd148ff8da779214. 2023-07-12 19:17:16,626 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689189435357.9e3d445b811f09c5dd148ff8da779214. 
after waiting 0 ms 2023-07-12 19:17:16,626 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,zzzzz,1689189435357.9e3d445b811f09c5dd148ff8da779214. 2023-07-12 19:17:16,626 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=66 updating hbase:meta row=63aebfefe46c17a5b69f6ee40592df33, regionState=CLOSED 2023-07-12 19:17:16,626 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689189435357.63aebfefe46c17a5b69f6ee40592df33.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689189436626"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689189436626"}]},"ts":"1689189436626"} 2023-07-12 19:17:16,626 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/data/default/Group_testTableMoveTruncateAndDrop/1d0045c2ebb63d729efb387c54da42d2/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-12 19:17:16,627 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689189435357.1d0045c2ebb63d729efb387c54da42d2. 2023-07-12 19:17:16,627 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for 1d0045c2ebb63d729efb387c54da42d2: 2023-07-12 19:17:16,631 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.UnassignRegionHandler(149): Closed 1d0045c2ebb63d729efb387c54da42d2 2023-07-12 19:17:16,632 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=65, ppid=63, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=a2d9c9d41295083293def807dd1b3abd, UNASSIGN in 198 msec 2023-07-12 19:17:16,632 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=67 updating hbase:meta row=1d0045c2ebb63d729efb387c54da42d2, regionState=CLOSED 2023-07-12 19:17:16,632 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689189435357.1d0045c2ebb63d729efb387c54da42d2.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689189436632"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689189436632"}]},"ts":"1689189436632"} 2023-07-12 19:17:16,635 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=71, resume processing ppid=66 2023-07-12 19:17:16,635 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=71, ppid=66, state=SUCCESS; CloseRegionProcedure 63aebfefe46c17a5b69f6ee40592df33, server=jenkins-hbase20.apache.org,36571,1689189426727 in 182 msec 2023-07-12 19:17:16,636 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=73, resume processing ppid=67 2023-07-12 19:17:16,636 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=73, ppid=67, state=SUCCESS; CloseRegionProcedure 1d0045c2ebb63d729efb387c54da42d2, server=jenkins-hbase20.apache.org,36311,1689189430768 in 175 msec 2023-07-12 19:17:16,638 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=66, ppid=63, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=63aebfefe46c17a5b69f6ee40592df33, UNASSIGN in 208 msec 2023-07-12 19:17:16,639 DEBUG 
[RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/data/default/Group_testTableMoveTruncateAndDrop/9e3d445b811f09c5dd148ff8da779214/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-12 19:17:16,639 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=67, ppid=63, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=1d0045c2ebb63d729efb387c54da42d2, UNASSIGN in 209 msec 2023-07-12 19:17:16,640 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,zzzzz,1689189435357.9e3d445b811f09c5dd148ff8da779214. 2023-07-12 19:17:16,640 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for 9e3d445b811f09c5dd148ff8da779214: 2023-07-12 19:17:16,641 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.UnassignRegionHandler(149): Closed 9e3d445b811f09c5dd148ff8da779214 2023-07-12 19:17:16,642 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=68 updating hbase:meta row=9e3d445b811f09c5dd148ff8da779214, regionState=CLOSED 2023-07-12 19:17:16,642 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689189435357.9e3d445b811f09c5dd148ff8da779214.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689189436642"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689189436642"}]},"ts":"1689189436642"} 2023-07-12 19:17:16,645 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=69, resume processing ppid=68 2023-07-12 19:17:16,645 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=69, ppid=68, state=SUCCESS; CloseRegionProcedure 9e3d445b811f09c5dd148ff8da779214, server=jenkins-hbase20.apache.org,36571,1689189426727 in 199 msec 2023-07-12 19:17:16,647 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=68, resume processing ppid=63 2023-07-12 19:17:16,647 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=68, ppid=63, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=9e3d445b811f09c5dd148ff8da779214, UNASSIGN in 218 msec 2023-07-12 19:17:16,648 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689189436648"}]},"ts":"1689189436648"} 2023-07-12 19:17:16,649 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=DISABLED in hbase:meta 2023-07-12 19:17:16,650 INFO [PEWorker-2] procedure.DisableTableProcedure(305): Set Group_testTableMoveTruncateAndDrop to state=DISABLED 2023-07-12 19:17:16,652 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=63, state=SUCCESS; DisableTableProcedure table=Group_testTableMoveTruncateAndDrop in 244 msec 2023-07-12 19:17:16,717 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.MasterRpcServices(1230): Checking to see if procedure is done pid=63 2023-07-12 19:17:16,717 INFO [Listener at localhost.localdomain/34239] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:Group_testTableMoveTruncateAndDrop, procId: 63 completed 2023-07-12 19:17:16,724 INFO 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.HMaster$5(2228): Client=jenkins//148.251.75.209 delete Group_testTableMoveTruncateAndDrop 2023-07-12 19:17:16,731 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] procedure2.ProcedureExecutor(1029): Stored pid=74, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-12 19:17:16,734 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=74, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-12 19:17:16,734 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'Group_testTableMoveTruncateAndDrop' from rsgroup 'Group_testTableMoveTruncateAndDrop_806716229' 2023-07-12 19:17:16,736 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=74, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-12 19:17:16,737 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 19:17:16,738 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testTableMoveTruncateAndDrop_806716229 2023-07-12 19:17:16,739 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 19:17:16,739 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-12 19:17:16,748 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.MasterRpcServices(1230): Checking to see if procedure is done pid=74 2023-07-12 19:17:16,750 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/.tmp/data/default/Group_testTableMoveTruncateAndDrop/500ec7f989dfe7024824e612e29163c0 2023-07-12 19:17:16,750 DEBUG [HFileArchiver-6] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/.tmp/data/default/Group_testTableMoveTruncateAndDrop/1d0045c2ebb63d729efb387c54da42d2 2023-07-12 19:17:16,751 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/.tmp/data/default/Group_testTableMoveTruncateAndDrop/9e3d445b811f09c5dd148ff8da779214 2023-07-12 19:17:16,750 DEBUG [HFileArchiver-8] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/.tmp/data/default/Group_testTableMoveTruncateAndDrop/63aebfefe46c17a5b69f6ee40592df33 2023-07-12 19:17:16,750 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/.tmp/data/default/Group_testTableMoveTruncateAndDrop/a2d9c9d41295083293def807dd1b3abd 2023-07-12 19:17:16,753 DEBUG [HFileArchiver-6] backup.HFileArchiver(159): Archiving [FileablePath, 
hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/.tmp/data/default/Group_testTableMoveTruncateAndDrop/1d0045c2ebb63d729efb387c54da42d2/f, FileablePath, hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/.tmp/data/default/Group_testTableMoveTruncateAndDrop/1d0045c2ebb63d729efb387c54da42d2/recovered.edits] 2023-07-12 19:17:16,754 DEBUG [HFileArchiver-8] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/.tmp/data/default/Group_testTableMoveTruncateAndDrop/63aebfefe46c17a5b69f6ee40592df33/f, FileablePath, hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/.tmp/data/default/Group_testTableMoveTruncateAndDrop/63aebfefe46c17a5b69f6ee40592df33/recovered.edits] 2023-07-12 19:17:16,754 DEBUG [HFileArchiver-4] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/.tmp/data/default/Group_testTableMoveTruncateAndDrop/9e3d445b811f09c5dd148ff8da779214/f, FileablePath, hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/.tmp/data/default/Group_testTableMoveTruncateAndDrop/9e3d445b811f09c5dd148ff8da779214/recovered.edits] 2023-07-12 19:17:16,754 DEBUG [HFileArchiver-1] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/.tmp/data/default/Group_testTableMoveTruncateAndDrop/a2d9c9d41295083293def807dd1b3abd/f, FileablePath, hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/.tmp/data/default/Group_testTableMoveTruncateAndDrop/a2d9c9d41295083293def807dd1b3abd/recovered.edits] 2023-07-12 19:17:16,754 DEBUG [HFileArchiver-2] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/.tmp/data/default/Group_testTableMoveTruncateAndDrop/500ec7f989dfe7024824e612e29163c0/f, FileablePath, hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/.tmp/data/default/Group_testTableMoveTruncateAndDrop/500ec7f989dfe7024824e612e29163c0/recovered.edits] 2023-07-12 19:17:16,770 DEBUG [HFileArchiver-2] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/.tmp/data/default/Group_testTableMoveTruncateAndDrop/500ec7f989dfe7024824e612e29163c0/recovered.edits/4.seqid to hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/archive/data/default/Group_testTableMoveTruncateAndDrop/500ec7f989dfe7024824e612e29163c0/recovered.edits/4.seqid 2023-07-12 19:17:16,770 DEBUG [HFileArchiver-1] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/.tmp/data/default/Group_testTableMoveTruncateAndDrop/a2d9c9d41295083293def807dd1b3abd/recovered.edits/4.seqid to hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/archive/data/default/Group_testTableMoveTruncateAndDrop/a2d9c9d41295083293def807dd1b3abd/recovered.edits/4.seqid 2023-07-12 19:17:16,771 DEBUG [HFileArchiver-6] backup.HFileArchiver(582): Archived from FileablePath, 
hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/.tmp/data/default/Group_testTableMoveTruncateAndDrop/1d0045c2ebb63d729efb387c54da42d2/recovered.edits/4.seqid to hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/archive/data/default/Group_testTableMoveTruncateAndDrop/1d0045c2ebb63d729efb387c54da42d2/recovered.edits/4.seqid 2023-07-12 19:17:16,771 DEBUG [HFileArchiver-2] backup.HFileArchiver(596): Deleted hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/.tmp/data/default/Group_testTableMoveTruncateAndDrop/500ec7f989dfe7024824e612e29163c0 2023-07-12 19:17:16,771 DEBUG [HFileArchiver-8] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/.tmp/data/default/Group_testTableMoveTruncateAndDrop/63aebfefe46c17a5b69f6ee40592df33/recovered.edits/4.seqid to hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/archive/data/default/Group_testTableMoveTruncateAndDrop/63aebfefe46c17a5b69f6ee40592df33/recovered.edits/4.seqid 2023-07-12 19:17:16,771 DEBUG [HFileArchiver-1] backup.HFileArchiver(596): Deleted hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/.tmp/data/default/Group_testTableMoveTruncateAndDrop/a2d9c9d41295083293def807dd1b3abd 2023-07-12 19:17:16,772 DEBUG [HFileArchiver-6] backup.HFileArchiver(596): Deleted hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/.tmp/data/default/Group_testTableMoveTruncateAndDrop/1d0045c2ebb63d729efb387c54da42d2 2023-07-12 19:17:16,772 DEBUG [HFileArchiver-8] backup.HFileArchiver(596): Deleted hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/.tmp/data/default/Group_testTableMoveTruncateAndDrop/63aebfefe46c17a5b69f6ee40592df33 2023-07-12 19:17:16,772 DEBUG [HFileArchiver-4] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/.tmp/data/default/Group_testTableMoveTruncateAndDrop/9e3d445b811f09c5dd148ff8da779214/recovered.edits/4.seqid to hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/archive/data/default/Group_testTableMoveTruncateAndDrop/9e3d445b811f09c5dd148ff8da779214/recovered.edits/4.seqid 2023-07-12 19:17:16,773 DEBUG [HFileArchiver-4] backup.HFileArchiver(596): Deleted hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/.tmp/data/default/Group_testTableMoveTruncateAndDrop/9e3d445b811f09c5dd148ff8da779214 2023-07-12 19:17:16,773 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived Group_testTableMoveTruncateAndDrop regions 2023-07-12 19:17:16,776 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=74, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-12 19:17:16,782 WARN [PEWorker-3] procedure.DeleteTableProcedure(384): Deleting some vestigial 5 rows of Group_testTableMoveTruncateAndDrop from hbase:meta 2023-07-12 19:17:16,785 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(421): Removing 'Group_testTableMoveTruncateAndDrop' descriptor. 
2023-07-12 19:17:16,787 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=74, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-12 19:17:16,787 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(411): Removing 'Group_testTableMoveTruncateAndDrop' from region states. 2023-07-12 19:17:16,788 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,,1689189435357.500ec7f989dfe7024824e612e29163c0.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689189436787"}]},"ts":"9223372036854775807"} 2023-07-12 19:17:16,788 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689189435357.a2d9c9d41295083293def807dd1b3abd.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689189436787"}]},"ts":"9223372036854775807"} 2023-07-12 19:17:16,788 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689189435357.63aebfefe46c17a5b69f6ee40592df33.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689189436787"}]},"ts":"9223372036854775807"} 2023-07-12 19:17:16,788 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689189435357.1d0045c2ebb63d729efb387c54da42d2.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689189436787"}]},"ts":"9223372036854775807"} 2023-07-12 19:17:16,788 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689189435357.9e3d445b811f09c5dd148ff8da779214.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689189436787"}]},"ts":"9223372036854775807"} 2023-07-12 19:17:16,790 INFO [PEWorker-3] hbase.MetaTableAccessor(1788): Deleted 5 regions from META 2023-07-12 19:17:16,790 DEBUG [PEWorker-3] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => 500ec7f989dfe7024824e612e29163c0, NAME => 'Group_testTableMoveTruncateAndDrop,,1689189435357.500ec7f989dfe7024824e612e29163c0.', STARTKEY => '', ENDKEY => 'aaaaa'}, {ENCODED => a2d9c9d41295083293def807dd1b3abd, NAME => 'Group_testTableMoveTruncateAndDrop,aaaaa,1689189435357.a2d9c9d41295083293def807dd1b3abd.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'}, {ENCODED => 63aebfefe46c17a5b69f6ee40592df33, NAME => 'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689189435357.63aebfefe46c17a5b69f6ee40592df33.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'}, {ENCODED => 1d0045c2ebb63d729efb387c54da42d2, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689189435357.1d0045c2ebb63d729efb387c54da42d2.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'}, {ENCODED => 9e3d445b811f09c5dd148ff8da779214, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1689189435357.9e3d445b811f09c5dd148ff8da779214.', STARTKEY => 'zzzzz', ENDKEY => ''}] 2023-07-12 19:17:16,790 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(415): Marking 'Group_testTableMoveTruncateAndDrop' as deleted. 
2023-07-12 19:17:16,791 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689189436790"}]},"ts":"9223372036854775807"} 2023-07-12 19:17:16,793 INFO [PEWorker-3] hbase.MetaTableAccessor(1658): Deleted table Group_testTableMoveTruncateAndDrop state from META 2023-07-12 19:17:16,795 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(130): Finished pid=74, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-12 19:17:16,796 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=74, state=SUCCESS; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop in 69 msec 2023-07-12 19:17:16,850 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.MasterRpcServices(1230): Checking to see if procedure is done pid=74 2023-07-12 19:17:16,851 INFO [Listener at localhost.localdomain/34239] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: default:Group_testTableMoveTruncateAndDrop, procId: 74 completed 2023-07-12 19:17:16,853 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//148.251.75.209 initiates rsgroup info retrieval, group=Group_testTableMoveTruncateAndDrop_806716229 2023-07-12 19:17:16,853 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 19:17:16,858 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=36571] ipc.CallRunner(144): callId: 155 service: ClientService methodName: Scan size: 147 connection: 148.251.75.209:53494 deadline: 1689189496857, exception=org.apache.hadoop.hbase.exceptions.RegionMovedException: Region moved to: hostname=jenkins-hbase20.apache.org port=39963 startCode=1689189426501. As of locationSeqNum=6. 2023-07-12 19:17:16,964 DEBUG [hconnection-0x2eb50d9d-shared-pool-10] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-12 19:17:16,966 INFO [RS-EventLoopGroup-3-1] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:32954, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-12 19:17:16,985 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-12 19:17:16,985 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 19:17:16,987 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//148.251.75.209 move tables [] to rsgroup default 2023-07-12 19:17:16,987 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-12 19:17:16,987 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveTables 2023-07-12 19:17:16,989 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//148.251.75.209 move servers [jenkins-hbase20.apache.org:36311, jenkins-hbase20.apache.org:36571] to rsgroup default 2023-07-12 19:17:16,994 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 19:17:16,995 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testTableMoveTruncateAndDrop_806716229 2023-07-12 19:17:16,995 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 19:17:16,996 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-12 19:17:16,997 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group Group_testTableMoveTruncateAndDrop_806716229, current retry=0 2023-07-12 19:17:16,997 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase20.apache.org,36311,1689189430768, jenkins-hbase20.apache.org,36571,1689189426727] are moved back to Group_testTableMoveTruncateAndDrop_806716229 2023-07-12 19:17:16,997 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminServer(438): Move servers done: Group_testTableMoveTruncateAndDrop_806716229 => default 2023-07-12 19:17:16,997 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveServers 2023-07-12 19:17:17,004 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//148.251.75.209 remove rsgroup Group_testTableMoveTruncateAndDrop_806716229 2023-07-12 19:17:17,009 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 19:17:17,009 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 19:17:17,010 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-12 19:17:17,013 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-12 19:17:17,015 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//148.251.75.209 move tables [] to rsgroup default 2023-07-12 19:17:17,015 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-12 19:17:17,015 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveTables 2023-07-12 19:17:17,017 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//148.251.75.209 move servers [] to rsgroup default 2023-07-12 19:17:17,017 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveServers 2023-07-12 19:17:17,018 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//148.251.75.209 remove rsgroup master 2023-07-12 19:17:17,023 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 19:17:17,023 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-12 19:17:17,026 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-12 19:17:17,032 INFO [Listener at localhost.localdomain/34239] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-12 19:17:17,033 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//148.251.75.209 add rsgroup master 2023-07-12 19:17:17,040 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 19:17:17,040 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 19:17:17,042 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-12 19:17:17,043 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.AddRSGroup 2023-07-12 19:17:17,047 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-12 19:17:17,047 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 19:17:17,050 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//148.251.75.209 move servers [jenkins-hbase20.apache.org:33033] to rsgroup master 2023-07-12 19:17:17,051 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:33033 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-12 19:17:17,051 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] ipc.CallRunner(144): callId: 148 service: MasterService methodName: ExecMasterService size: 120 connection: 148.251.75.209:37696 deadline: 1689190637050, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:33033 is either offline or it does not exist. 2023-07-12 19:17:17,051 WARN [Listener at localhost.localdomain/34239] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:33033 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:33033 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-12 19:17:17,053 INFO [Listener at localhost.localdomain/34239] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 19:17:17,054 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-12 19:17:17,054 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 19:17:17,055 INFO [Listener at localhost.localdomain/34239] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase20.apache.org:36311, jenkins-hbase20.apache.org:36571, jenkins-hbase20.apache.org:39963, jenkins-hbase20.apache.org:43021], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-12 19:17:17,056 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//148.251.75.209 initiates rsgroup info retrieval, group=default 2023-07-12 19:17:17,056 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 19:17:17,079 INFO [Listener at localhost.localdomain/34239] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testTableMoveTruncateAndDrop Thread=497 (was 424) Potentially hanging thread: Timer for 'HBase' metrics system java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=36311 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) 
org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: hconnection-0x2eb50d9d-shared-pool-11 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp2051136205-646 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:52922@0x3a975367-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: hconnection-0x2eb50d9d-shared-pool-9 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:3;jenkins-hbase20:36311 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1092) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:175) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:123) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:159) java.security.AccessController.doPrivileged(Native Method) javax.security.auth.Subject.doAs(Subject.java:360) org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1873) org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:319) 
org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:156) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp2051136205-644 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp2051136205-643 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp2051136205-641-acceptor-0@16726dfc-ServerConnector@29e86b8d{HTTP/1.1, (http/1.1)}{0.0.0.0:34593} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: HFileArchiver-3 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 
RS:3;jenkins-hbase20:36311-longCompactions-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x2eb50d9d-shared-pool-8 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1949172218_17 at /127.0.0.1:57804 [Receiving block BP-1227025609-148.251.75.209-1689189420190:blk_1073741840_1016] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1227025609-148.251.75.209-1689189420190:blk_1073741840_1016, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) 
org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=36311 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=36311 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: HFileArchiver-6 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:52922@0x3a975367 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/29342099.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=36311 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) 
org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: LeaseRenewer:jenkins.hfs.3@localhost.localdomain:43233 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=36311 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1977431292_17 at /127.0.0.1:57866 [Waiting for operation #2] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-5 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp2051136205-642 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-7 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:52922@0x3a975367-SendThread(127.0.0.1:52922) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: PacketResponder: BP-1227025609-148.251.75.209-1689189420190:blk_1073741840_1016, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-3-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-7-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native 
Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x2eb50d9d-shared-pool-12 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-4 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1949172218_17 at /127.0.0.1:52894 [Receiving block BP-1227025609-148.251.75.209-1689189420190:blk_1073741840_1016] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) 
org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=36311 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1949172218_17 at /127.0.0.1:57742 [Waiting for operation #8] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36311 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) 
java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RS-EventLoopGroup-7-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1854969844_17 at /127.0.0.1:53020 [Waiting for operation #5] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp2051136205-645 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (715240405) connection to localhost.localdomain/127.0.0.1:43233 from jenkins.hfs.3 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=36311 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) 
java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1949172218_17 at /127.0.0.1:43590 [Receiving block BP-1227025609-148.251.75.209-1689189420190:blk_1073741840_1016] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x25a58e9d-shared-pool-3 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-65010154_17 at /127.0.0.1:43620 [Waiting for operation #3] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: jenkins-hbase20:36311Replication Statistics #0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-6 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) 
org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x2eb50d9d-shared-pool-7 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp2051136205-640 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/1800130233.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: HFileArchiver-4 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-3-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: HFileArchiver-7 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1227025609-148.251.75.209-1689189420190:blk_1073741840_1016, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: HFileArchiver-8 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 
RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=36311 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0-prefix:jenkins-hbase20.apache.org,36311,1689189430768 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-7-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-3 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Session-HouseKeeper-513fb3a5-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=36311 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: hconnection-0x25a58e9d-shared-pool-4 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x2eb50d9d-shared-pool-10 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: HFileArchiver-5 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp2051136205-647 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=782 (was 681) - OpenFileDescriptor LEAK? -, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=467 (was 463) - SystemLoadAverage LEAK? -, ProcessCount=169 (was 171), AvailableMemoryMB=3948 (was 4251) 2023-07-12 19:17:17,094 INFO [Listener at localhost.localdomain/34239] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testValidGroupNames Thread=497, OpenFileDescriptor=782, MaxFileDescriptor=60000, SystemLoadAverage=467, ProcessCount=169, AvailableMemoryMB=3947 2023-07-12 19:17:17,094 INFO [Listener at localhost.localdomain/34239] rsgroup.TestRSGroupsBase(132): testValidGroupNames 2023-07-12 19:17:17,098 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-12 19:17:17,099 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 19:17:17,100 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//148.251.75.209 move tables [] to rsgroup default 2023-07-12 19:17:17,100 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
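[Editor's note] The ResourceChecker lines above record a per-test resource snapshot: thread count, open file descriptors, max file descriptors, system load, process count and available memory are captured before testValidGroupNames and diffed against the values from the previous test, with a "LEAK?" marker when a delta looks suspicious. The following is only a minimal illustrative sketch of that before/after bookkeeping using standard JMX beans; it is not HBase's org.apache.hadoop.hbase.ResourceChecker, the Unix-specific cast and the leak thresholds are assumptions.

    import java.lang.management.ManagementFactory;
    import com.sun.management.UnixOperatingSystemMXBean;

    // Illustrative before/after resource snapshot, loosely modelled on the
    // "Thread=..., OpenFileDescriptor=... (was ...)" lines in the log above.
    public class ResourceSnapshot {
        final int threads;
        final long openFds;

        ResourceSnapshot() {
            threads = ManagementFactory.getThreadMXBean().getThreadCount();
            // Unix-specific bean; fine on the Linux build hosts this log comes from.
            UnixOperatingSystemMXBean os =
                (UnixOperatingSystemMXBean) ManagementFactory.getOperatingSystemMXBean();
            openFds = os.getOpenFileDescriptorCount();
        }

        // Flag a potential leak when counts grow past an assumed threshold.
        static void report(ResourceSnapshot before, ResourceSnapshot after) {
            System.out.printf("Thread=%d (was %d)%s%n", after.threads, before.threads,
                (after.threads - before.threads) > 50 ? " - Thread LEAK?" : "");
            System.out.printf("OpenFileDescriptor=%d (was %d)%s%n", after.openFds, before.openFds,
                (after.openFds - before.openFds) > 100 ? " - OpenFileDescriptor LEAK?" : "");
        }
    }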
2023-07-12 19:17:17,100 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveTables 2023-07-12 19:17:17,101 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//148.251.75.209 move servers [] to rsgroup default 2023-07-12 19:17:17,101 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveServers 2023-07-12 19:17:17,102 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//148.251.75.209 remove rsgroup master 2023-07-12 19:17:17,106 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 19:17:17,106 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-12 19:17:17,107 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-12 19:17:17,111 INFO [Listener at localhost.localdomain/34239] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-12 19:17:17,112 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//148.251.75.209 add rsgroup master 2023-07-12 19:17:17,115 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 19:17:17,115 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 19:17:17,122 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-12 19:17:17,123 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.AddRSGroup 2023-07-12 19:17:17,126 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-12 19:17:17,126 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 19:17:17,129 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//148.251.75.209 move servers [jenkins-hbase20.apache.org:33033] to rsgroup master 2023-07-12 19:17:17,129 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:33033 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-12 19:17:17,129 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] ipc.CallRunner(144): callId: 176 service: MasterService methodName: ExecMasterService size: 120 connection: 148.251.75.209:37696 deadline: 1689190637129, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:33033 is either offline or it does not exist. 2023-07-12 19:17:17,130 WARN [Listener at localhost.localdomain/34239] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:33033 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:33033 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-12 19:17:17,131 INFO [Listener at localhost.localdomain/34239] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 19:17:17,132 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-12 19:17:17,132 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 19:17:17,133 INFO [Listener at localhost.localdomain/34239] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase20.apache.org:36311, jenkins-hbase20.apache.org:36571, jenkins-hbase20.apache.org:39963, jenkins-hbase20.apache.org:43021], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-12 19:17:17,133 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//148.251.75.209 initiates rsgroup info retrieval, group=default 2023-07-12 19:17:17,134 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 19:17:17,135 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//148.251.75.209 add rsgroup foo* 2023-07-12 19:17:17,135 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup name should only contain alphanumeric characters at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.checkGroupName(RSGroupInfoManagerImpl.java:932) at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.addRSGroup(RSGroupInfoManagerImpl.java:205) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.addRSGroup(RSGroupAdminServer.java:476) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.addRSGroup(RSGroupAdminEndpoint.java:258) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16203) at 
org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-12 19:17:17,135 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] ipc.CallRunner(144): callId: 182 service: MasterService methodName: ExecMasterService size: 83 connection: 148.251.75.209:37696 deadline: 1689190637135, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup name should only contain alphanumeric characters 2023-07-12 19:17:17,137 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//148.251.75.209 add rsgroup foo@ 2023-07-12 19:17:17,137 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup name should only contain alphanumeric characters at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.checkGroupName(RSGroupInfoManagerImpl.java:932) at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.addRSGroup(RSGroupInfoManagerImpl.java:205) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.addRSGroup(RSGroupAdminServer.java:476) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.addRSGroup(RSGroupAdminEndpoint.java:258) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16203) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-12 19:17:17,137 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] ipc.CallRunner(144): callId: 184 service: MasterService methodName: ExecMasterService size: 83 connection: 148.251.75.209:37696 deadline: 1689190637136, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup name should only contain alphanumeric characters 2023-07-12 19:17:17,138 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//148.251.75.209 add rsgroup - 2023-07-12 19:17:17,138 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup name should only contain alphanumeric characters at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.checkGroupName(RSGroupInfoManagerImpl.java:932) at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.addRSGroup(RSGroupInfoManagerImpl.java:205) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.addRSGroup(RSGroupAdminServer.java:476) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.addRSGroup(RSGroupAdminEndpoint.java:258) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16203) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-12 19:17:17,138 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] ipc.CallRunner(144): callId: 186 service: MasterService methodName: ExecMasterService size: 80 connection: 148.251.75.209:37696 deadline: 1689190637138, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup name should only contain alphanumeric characters 2023-07-12 19:17:17,140 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//148.251.75.209 add rsgroup foo_123 2023-07-12 19:17:17,142 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/foo_123 2023-07-12 19:17:17,144 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 19:17:17,144 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 19:17:17,145 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-12 19:17:17,146 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.AddRSGroup 2023-07-12 19:17:17,149 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-12 19:17:17,149 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 19:17:17,154 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-12 19:17:17,155 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 19:17:17,156 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//148.251.75.209 move tables [] to rsgroup default 2023-07-12 19:17:17,156 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
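[Editor's note] The three ConstraintException records above show RSGroupInfoManagerImpl.checkGroupName rejecting the names foo*, foo@ and - with "RSGroup name should only contain alphanumeric characters", while the subsequent add of foo_123 succeeds, so underscores are evidently tolerated as well. A minimal sketch of the rule implied by this log (an assumption for illustration, not the actual HBase implementation, which throws ConstraintException) could look like:

    import java.util.regex.Pattern;

    // Sketch of the name rule implied by the log: letters, digits and
    // underscore are accepted (foo_123); foo*, foo@ and - are rejected.
    final class GroupNameCheck {
        private static final Pattern VALID = Pattern.compile("[a-zA-Z0-9_]+");

        static void check(String name) {
            if (name == null || !VALID.matcher(name).matches()) {
                // The real code raises org.apache.hadoop.hbase.constraint.ConstraintException.
                throw new IllegalArgumentException(
                    "RSGroup name should only contain alphanumeric characters: " + name);
            }
        }
    }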
2023-07-12 19:17:17,156 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveTables 2023-07-12 19:17:17,156 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//148.251.75.209 move servers [] to rsgroup default 2023-07-12 19:17:17,157 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveServers 2023-07-12 19:17:17,157 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//148.251.75.209 remove rsgroup foo_123 2023-07-12 19:17:17,161 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 19:17:17,161 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 19:17:17,162 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-12 19:17:17,163 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-12 19:17:17,164 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//148.251.75.209 move tables [] to rsgroup default 2023-07-12 19:17:17,164 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
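[Editor's note] As the "Updating znode: /hbase/rsgroup/..." and "Writing ZK GroupInfo count" lines show, each rsgroup (default, master, foo_123) is persisted as a child znode under /hbase/rsgroup, and the count written to ZooKeeper moves as test groups are added and removed. A quick way to inspect that state, sketched here with the plain ZooKeeper client (the connection string and session timeout are placeholder values), is:

    import java.util.List;
    import org.apache.zookeeper.ZooKeeper;

    // Illustrative only: list the per-group znodes under /hbase/rsgroup.
    // "localhost:2181" and the 30s session timeout are placeholders.
    public class ListRsGroupZnodes {
        public static void main(String[] args) throws Exception {
            ZooKeeper zk = new ZooKeeper("localhost:2181", 30000, event -> { });
            try {
                List<String> groups = zk.getChildren("/hbase/rsgroup", false);
                groups.forEach(g -> System.out.println("/hbase/rsgroup/" + g));
            } finally {
                zk.close();
            }
        }
    }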
2023-07-12 19:17:17,164 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveTables 2023-07-12 19:17:17,165 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//148.251.75.209 move servers [] to rsgroup default 2023-07-12 19:17:17,165 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveServers 2023-07-12 19:17:17,165 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//148.251.75.209 remove rsgroup master 2023-07-12 19:17:17,169 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 19:17:17,169 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-12 19:17:17,176 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-12 19:17:17,179 INFO [Listener at localhost.localdomain/34239] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-12 19:17:17,180 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//148.251.75.209 add rsgroup master 2023-07-12 19:17:17,183 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 19:17:17,183 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 19:17:17,185 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-12 19:17:17,186 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.AddRSGroup 2023-07-12 19:17:17,191 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-12 19:17:17,191 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 19:17:17,194 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//148.251.75.209 move servers [jenkins-hbase20.apache.org:33033] to rsgroup master 2023-07-12 19:17:17,195 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:33033 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-12 19:17:17,195 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] ipc.CallRunner(144): callId: 220 service: MasterService methodName: ExecMasterService size: 120 connection: 148.251.75.209:37696 deadline: 1689190637194, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:33033 is either offline or it does not exist. 2023-07-12 19:17:17,195 WARN [Listener at localhost.localdomain/34239] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:33033 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:33033 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-12 19:17:17,198 INFO [Listener at localhost.localdomain/34239] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 19:17:17,199 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-12 19:17:17,199 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 19:17:17,200 INFO [Listener at localhost.localdomain/34239] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase20.apache.org:36311, jenkins-hbase20.apache.org:36571, jenkins-hbase20.apache.org:39963, jenkins-hbase20.apache.org:43021], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-12 19:17:17,201 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//148.251.75.209 initiates rsgroup info retrieval, group=default 2023-07-12 19:17:17,201 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 19:17:17,218 INFO [Listener at localhost.localdomain/34239] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testValidGroupNames Thread=500 (was 497) Potentially hanging thread: hconnection-0x25a58e9d-shared-pool-6 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x25a58e9d-shared-pool-7 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x25a58e9d-shared-pool-5 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=782 (was 782), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=467 (was 467), ProcessCount=169 (was 169), AvailableMemoryMB=3946 (was 3947) 2023-07-12 19:17:17,234 INFO [Listener at localhost.localdomain/34239] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testFailRemoveGroup Thread=500, OpenFileDescriptor=782, MaxFileDescriptor=60000, SystemLoadAverage=467, ProcessCount=169, AvailableMemoryMB=3944 2023-07-12 19:17:17,234 INFO [Listener at localhost.localdomain/34239] rsgroup.TestRSGroupsBase(132): testFailRemoveGroup 2023-07-12 19:17:17,240 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-12 19:17:17,240 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 19:17:17,241 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//148.251.75.209 move tables [] to rsgroup default 2023-07-12 19:17:17,242 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-12 19:17:17,242 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveTables 2023-07-12 19:17:17,243 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//148.251.75.209 move servers [] to rsgroup default 2023-07-12 19:17:17,243 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveServers 2023-07-12 19:17:17,244 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//148.251.75.209 remove rsgroup master 2023-07-12 19:17:17,248 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 19:17:17,248 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-12 19:17:17,249 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-12 19:17:17,254 INFO [Listener at localhost.localdomain/34239] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-12 19:17:17,255 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//148.251.75.209 add rsgroup master 2023-07-12 19:17:17,258 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 19:17:17,258 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 19:17:17,268 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-12 19:17:17,272 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.AddRSGroup 2023-07-12 19:17:17,277 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-12 19:17:17,277 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 19:17:17,279 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//148.251.75.209 move servers [jenkins-hbase20.apache.org:33033] to rsgroup master 2023-07-12 19:17:17,280 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:33033 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-12 19:17:17,280 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] ipc.CallRunner(144): callId: 248 service: MasterService methodName: ExecMasterService size: 120 connection: 148.251.75.209:37696 deadline: 1689190637279, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:33033 is either offline or it does not exist. 2023-07-12 19:17:17,280 WARN [Listener at localhost.localdomain/34239] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:33033 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:33033 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-12 19:17:17,282 INFO [Listener at localhost.localdomain/34239] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 19:17:17,283 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-12 19:17:17,283 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 19:17:17,283 INFO [Listener at localhost.localdomain/34239] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase20.apache.org:36311, jenkins-hbase20.apache.org:36571, jenkins-hbase20.apache.org:39963, jenkins-hbase20.apache.org:43021], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-12 19:17:17,284 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//148.251.75.209 initiates rsgroup info retrieval, group=default 2023-07-12 19:17:17,284 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 19:17:17,285 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-12 19:17:17,285 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 19:17:17,286 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//148.251.75.209 initiates rsgroup info retrieval, group=default 2023-07-12 19:17:17,286 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 19:17:17,287 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): 
Client=jenkins//148.251.75.209 add rsgroup bar 2023-07-12 19:17:17,290 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 19:17:17,291 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/bar 2023-07-12 19:17:17,292 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 19:17:17,292 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-12 19:17:17,294 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.AddRSGroup 2023-07-12 19:17:17,301 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-12 19:17:17,301 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 19:17:17,304 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//148.251.75.209 move servers [jenkins-hbase20.apache.org:36311, jenkins-hbase20.apache.org:39963, jenkins-hbase20.apache.org:36571] to rsgroup bar 2023-07-12 19:17:17,307 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 19:17:17,307 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/bar 2023-07-12 19:17:17,308 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 19:17:17,308 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-12 19:17:17,309 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminServer(238): Moving server region 80f898828c5a9814a93d19dfb7ad9318, which do not belong to RSGroup bar 2023-07-12 19:17:17,311 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] procedure2.ProcedureExecutor(1029): Stored pid=75, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:namespace, region=80f898828c5a9814a93d19dfb7ad9318, REOPEN/MOVE 2023-07-12 19:17:17,311 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group default, current retry=0 2023-07-12 19:17:17,312 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=75, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:namespace, region=80f898828c5a9814a93d19dfb7ad9318, REOPEN/MOVE 2023-07-12 19:17:17,313 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=75 updating hbase:meta row=80f898828c5a9814a93d19dfb7ad9318, regionState=CLOSING, regionLocation=jenkins-hbase20.apache.org,39963,1689189426501 2023-07-12 19:17:17,313 DEBUG [PEWorker-5] assignment.RegionStateStore(405): 
Put {"totalColumns":3,"row":"hbase:namespace,,1689189429517.80f898828c5a9814a93d19dfb7ad9318.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689189437313"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689189437313"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689189437313"}]},"ts":"1689189437313"} 2023-07-12 19:17:17,315 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=76, ppid=75, state=RUNNABLE; CloseRegionProcedure 80f898828c5a9814a93d19dfb7ad9318, server=jenkins-hbase20.apache.org,39963,1689189426501}] 2023-07-12 19:17:17,469 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.UnassignRegionHandler(111): Close 80f898828c5a9814a93d19dfb7ad9318 2023-07-12 19:17:17,470 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing 80f898828c5a9814a93d19dfb7ad9318, disabling compactions & flushes 2023-07-12 19:17:17,470 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1689189429517.80f898828c5a9814a93d19dfb7ad9318. 2023-07-12 19:17:17,470 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1689189429517.80f898828c5a9814a93d19dfb7ad9318. 2023-07-12 19:17:17,470 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1689189429517.80f898828c5a9814a93d19dfb7ad9318. after waiting 0 ms 2023-07-12 19:17:17,470 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1689189429517.80f898828c5a9814a93d19dfb7ad9318. 2023-07-12 19:17:17,480 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/data/hbase/namespace/80f898828c5a9814a93d19dfb7ad9318/recovered.edits/12.seqid, newMaxSeqId=12, maxSeqId=9 2023-07-12 19:17:17,482 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1838): Closed hbase:namespace,,1689189429517.80f898828c5a9814a93d19dfb7ad9318. 
2023-07-12 19:17:17,482 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for 80f898828c5a9814a93d19dfb7ad9318: 2023-07-12 19:17:17,482 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(3513): Adding 80f898828c5a9814a93d19dfb7ad9318 move to jenkins-hbase20.apache.org,43021,1689189426641 record at close sequenceid=10 2023-07-12 19:17:17,485 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.UnassignRegionHandler(149): Closed 80f898828c5a9814a93d19dfb7ad9318 2023-07-12 19:17:17,486 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=75 updating hbase:meta row=80f898828c5a9814a93d19dfb7ad9318, regionState=CLOSED 2023-07-12 19:17:17,486 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"hbase:namespace,,1689189429517.80f898828c5a9814a93d19dfb7ad9318.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689189437486"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689189437486"}]},"ts":"1689189437486"} 2023-07-12 19:17:17,491 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=76, resume processing ppid=75 2023-07-12 19:17:17,491 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=76, ppid=75, state=SUCCESS; CloseRegionProcedure 80f898828c5a9814a93d19dfb7ad9318, server=jenkins-hbase20.apache.org,39963,1689189426501 in 174 msec 2023-07-12 19:17:17,492 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=75, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:namespace, region=80f898828c5a9814a93d19dfb7ad9318, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase20.apache.org,43021,1689189426641; forceNewPlan=false, retain=false 2023-07-12 19:17:17,642 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=75 updating hbase:meta row=80f898828c5a9814a93d19dfb7ad9318, regionState=OPENING, regionLocation=jenkins-hbase20.apache.org,43021,1689189426641 2023-07-12 19:17:17,643 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:namespace,,1689189429517.80f898828c5a9814a93d19dfb7ad9318.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689189437642"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689189437642"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689189437642"}]},"ts":"1689189437642"} 2023-07-12 19:17:17,647 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=77, ppid=75, state=RUNNABLE; OpenRegionProcedure 80f898828c5a9814a93d19dfb7ad9318, server=jenkins-hbase20.apache.org,43021,1689189426641}] 2023-07-12 19:17:17,804 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1689189429517.80f898828c5a9814a93d19dfb7ad9318. 
2023-07-12 19:17:17,804 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 80f898828c5a9814a93d19dfb7ad9318, NAME => 'hbase:namespace,,1689189429517.80f898828c5a9814a93d19dfb7ad9318.', STARTKEY => '', ENDKEY => ''} 2023-07-12 19:17:17,805 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace 80f898828c5a9814a93d19dfb7ad9318 2023-07-12 19:17:17,805 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1689189429517.80f898828c5a9814a93d19dfb7ad9318.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 19:17:17,805 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7894): checking encryption for 80f898828c5a9814a93d19dfb7ad9318 2023-07-12 19:17:17,805 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7897): checking classloading for 80f898828c5a9814a93d19dfb7ad9318 2023-07-12 19:17:17,807 INFO [StoreOpener-80f898828c5a9814a93d19dfb7ad9318-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 80f898828c5a9814a93d19dfb7ad9318 2023-07-12 19:17:17,808 DEBUG [StoreOpener-80f898828c5a9814a93d19dfb7ad9318-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/data/hbase/namespace/80f898828c5a9814a93d19dfb7ad9318/info 2023-07-12 19:17:17,808 DEBUG [StoreOpener-80f898828c5a9814a93d19dfb7ad9318-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/data/hbase/namespace/80f898828c5a9814a93d19dfb7ad9318/info 2023-07-12 19:17:17,809 INFO [StoreOpener-80f898828c5a9814a93d19dfb7ad9318-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 80f898828c5a9814a93d19dfb7ad9318 columnFamilyName info 2023-07-12 19:17:17,821 DEBUG [StoreOpener-80f898828c5a9814a93d19dfb7ad9318-1] regionserver.HStore(539): loaded hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/data/hbase/namespace/80f898828c5a9814a93d19dfb7ad9318/info/d7aac827e8d447f9b1eef9d5182ff487 2023-07-12 19:17:17,821 INFO [StoreOpener-80f898828c5a9814a93d19dfb7ad9318-1] regionserver.HStore(310): Store=80f898828c5a9814a93d19dfb7ad9318/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 19:17:17,823 DEBUG 
[RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/data/hbase/namespace/80f898828c5a9814a93d19dfb7ad9318 2023-07-12 19:17:17,825 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/data/hbase/namespace/80f898828c5a9814a93d19dfb7ad9318 2023-07-12 19:17:17,834 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1055): writing seq id for 80f898828c5a9814a93d19dfb7ad9318 2023-07-12 19:17:17,835 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1072): Opened 80f898828c5a9814a93d19dfb7ad9318; next sequenceid=13; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9790462240, jitterRate=-0.08819214999675751}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-12 19:17:17,836 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(965): Region open journal for 80f898828c5a9814a93d19dfb7ad9318: 2023-07-12 19:17:17,837 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:namespace,,1689189429517.80f898828c5a9814a93d19dfb7ad9318., pid=77, masterSystemTime=1689189437799 2023-07-12 19:17:17,843 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:namespace,,1689189429517.80f898828c5a9814a93d19dfb7ad9318. 2023-07-12 19:17:17,844 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1689189429517.80f898828c5a9814a93d19dfb7ad9318. 
2023-07-12 19:17:17,844 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=75 updating hbase:meta row=80f898828c5a9814a93d19dfb7ad9318, regionState=OPEN, openSeqNum=13, regionLocation=jenkins-hbase20.apache.org,43021,1689189426641 2023-07-12 19:17:17,845 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1689189429517.80f898828c5a9814a93d19dfb7ad9318.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689189437844"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689189437844"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689189437844"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689189437844"}]},"ts":"1689189437844"} 2023-07-12 19:17:17,849 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=77, resume processing ppid=75 2023-07-12 19:17:17,849 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=77, ppid=75, state=SUCCESS; OpenRegionProcedure 80f898828c5a9814a93d19dfb7ad9318, server=jenkins-hbase20.apache.org,43021,1689189426641 in 202 msec 2023-07-12 19:17:17,851 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=75, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=80f898828c5a9814a93d19dfb7ad9318, REOPEN/MOVE in 539 msec 2023-07-12 19:17:18,312 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] procedure.ProcedureSyncWait(216): waitFor pid=75 2023-07-12 19:17:18,312 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase20.apache.org,36311,1689189430768, jenkins-hbase20.apache.org,36571,1689189426727, jenkins-hbase20.apache.org,39963,1689189426501] are moved back to default 2023-07-12 19:17:18,313 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminServer(438): Move servers done: default => bar 2023-07-12 19:17:18,313 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveServers 2023-07-12 19:17:18,318 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-12 19:17:18,319 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 19:17:18,321 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//148.251.75.209 initiates rsgroup info retrieval, group=bar 2023-07-12 19:17:18,322 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 19:17:18,323 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.HMaster$4(2112): Client=jenkins//148.251.75.209 create 'Group_testFailRemoveGroup', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE 
=> '65536', REPLICATION_SCOPE => '0'} 2023-07-12 19:17:18,324 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] procedure2.ProcedureExecutor(1029): Stored pid=78, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=Group_testFailRemoveGroup 2023-07-12 19:17:18,326 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=78, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=Group_testFailRemoveGroup execute state=CREATE_TABLE_PRE_OPERATION 2023-07-12 19:17:18,326 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.MasterRpcServices(700): Client=jenkins//148.251.75.209 procedure request for creating table: namespace: "default" qualifier: "Group_testFailRemoveGroup" procId is: 78 2023-07-12 19:17:18,329 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 19:17:18,330 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/bar 2023-07-12 19:17:18,330 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 19:17:18,331 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.MasterRpcServices(1230): Checking to see if procedure is done pid=78 2023-07-12 19:17:18,331 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-12 19:17:18,333 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=78, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=Group_testFailRemoveGroup execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-12 19:17:18,335 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/.tmp/data/default/Group_testFailRemoveGroup/5d7c987e33e5a2c9ffcf3edd5c64d208 2023-07-12 19:17:18,336 DEBUG [HFileArchiver-3] backup.HFileArchiver(153): Directory hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/.tmp/data/default/Group_testFailRemoveGroup/5d7c987e33e5a2c9ffcf3edd5c64d208 empty. 
2023-07-12 19:17:18,336 DEBUG [HFileArchiver-3] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/.tmp/data/default/Group_testFailRemoveGroup/5d7c987e33e5a2c9ffcf3edd5c64d208 2023-07-12 19:17:18,336 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived Group_testFailRemoveGroup regions 2023-07-12 19:17:18,355 DEBUG [PEWorker-2] util.FSTableDescriptors(570): Wrote into hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/.tmp/data/default/Group_testFailRemoveGroup/.tabledesc/.tableinfo.0000000001 2023-07-12 19:17:18,357 INFO [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(7675): creating {ENCODED => 5d7c987e33e5a2c9ffcf3edd5c64d208, NAME => 'Group_testFailRemoveGroup,,1689189438323.5d7c987e33e5a2c9ffcf3edd5c64d208.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='Group_testFailRemoveGroup', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/.tmp 2023-07-12 19:17:18,379 DEBUG [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(866): Instantiated Group_testFailRemoveGroup,,1689189438323.5d7c987e33e5a2c9ffcf3edd5c64d208.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 19:17:18,379 DEBUG [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1604): Closing 5d7c987e33e5a2c9ffcf3edd5c64d208, disabling compactions & flushes 2023-07-12 19:17:18,379 INFO [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1626): Closing region Group_testFailRemoveGroup,,1689189438323.5d7c987e33e5a2c9ffcf3edd5c64d208. 2023-07-12 19:17:18,379 DEBUG [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testFailRemoveGroup,,1689189438323.5d7c987e33e5a2c9ffcf3edd5c64d208. 2023-07-12 19:17:18,379 DEBUG [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testFailRemoveGroup,,1689189438323.5d7c987e33e5a2c9ffcf3edd5c64d208. after waiting 0 ms 2023-07-12 19:17:18,379 DEBUG [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testFailRemoveGroup,,1689189438323.5d7c987e33e5a2c9ffcf3edd5c64d208. 2023-07-12 19:17:18,379 INFO [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1838): Closed Group_testFailRemoveGroup,,1689189438323.5d7c987e33e5a2c9ffcf3edd5c64d208. 
2023-07-12 19:17:18,379 DEBUG [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1558): Region close journal for 5d7c987e33e5a2c9ffcf3edd5c64d208: 2023-07-12 19:17:18,382 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=78, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=Group_testFailRemoveGroup execute state=CREATE_TABLE_ADD_TO_META 2023-07-12 19:17:18,383 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testFailRemoveGroup,,1689189438323.5d7c987e33e5a2c9ffcf3edd5c64d208.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689189438382"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689189438382"}]},"ts":"1689189438382"} 2023-07-12 19:17:18,384 INFO [PEWorker-2] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-12 19:17:18,385 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=78, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=Group_testFailRemoveGroup execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-12 19:17:18,385 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testFailRemoveGroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689189438385"}]},"ts":"1689189438385"} 2023-07-12 19:17:18,386 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=Group_testFailRemoveGroup, state=ENABLING in hbase:meta 2023-07-12 19:17:18,389 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=79, ppid=78, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=5d7c987e33e5a2c9ffcf3edd5c64d208, ASSIGN}] 2023-07-12 19:17:18,391 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=79, ppid=78, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=5d7c987e33e5a2c9ffcf3edd5c64d208, ASSIGN 2023-07-12 19:17:18,392 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=79, ppid=78, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=5d7c987e33e5a2c9ffcf3edd5c64d208, ASSIGN; state=OFFLINE, location=jenkins-hbase20.apache.org,43021,1689189426641; forceNewPlan=false, retain=false 2023-07-12 19:17:18,432 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.MasterRpcServices(1230): Checking to see if procedure is done pid=78 2023-07-12 19:17:18,544 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=79 updating hbase:meta row=5d7c987e33e5a2c9ffcf3edd5c64d208, regionState=OPENING, regionLocation=jenkins-hbase20.apache.org,43021,1689189426641 2023-07-12 19:17:18,544 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testFailRemoveGroup,,1689189438323.5d7c987e33e5a2c9ffcf3edd5c64d208.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689189438544"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689189438544"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689189438544"}]},"ts":"1689189438544"} 2023-07-12 19:17:18,546 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=80, ppid=79, state=RUNNABLE; OpenRegionProcedure 5d7c987e33e5a2c9ffcf3edd5c64d208, server=jenkins-hbase20.apache.org,43021,1689189426641}] 
2023-07-12 19:17:18,635 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.MasterRpcServices(1230): Checking to see if procedure is done pid=78 2023-07-12 19:17:18,709 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(130): Open Group_testFailRemoveGroup,,1689189438323.5d7c987e33e5a2c9ffcf3edd5c64d208. 2023-07-12 19:17:18,710 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 5d7c987e33e5a2c9ffcf3edd5c64d208, NAME => 'Group_testFailRemoveGroup,,1689189438323.5d7c987e33e5a2c9ffcf3edd5c64d208.', STARTKEY => '', ENDKEY => ''} 2023-07-12 19:17:18,711 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testFailRemoveGroup 5d7c987e33e5a2c9ffcf3edd5c64d208 2023-07-12 19:17:18,711 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(866): Instantiated Group_testFailRemoveGroup,,1689189438323.5d7c987e33e5a2c9ffcf3edd5c64d208.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 19:17:18,711 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7894): checking encryption for 5d7c987e33e5a2c9ffcf3edd5c64d208 2023-07-12 19:17:18,711 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7897): checking classloading for 5d7c987e33e5a2c9ffcf3edd5c64d208 2023-07-12 19:17:18,714 INFO [StoreOpener-5d7c987e33e5a2c9ffcf3edd5c64d208-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 5d7c987e33e5a2c9ffcf3edd5c64d208 2023-07-12 19:17:18,716 DEBUG [StoreOpener-5d7c987e33e5a2c9ffcf3edd5c64d208-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/data/default/Group_testFailRemoveGroup/5d7c987e33e5a2c9ffcf3edd5c64d208/f 2023-07-12 19:17:18,716 DEBUG [StoreOpener-5d7c987e33e5a2c9ffcf3edd5c64d208-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/data/default/Group_testFailRemoveGroup/5d7c987e33e5a2c9ffcf3edd5c64d208/f 2023-07-12 19:17:18,717 INFO [StoreOpener-5d7c987e33e5a2c9ffcf3edd5c64d208-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 5d7c987e33e5a2c9ffcf3edd5c64d208 columnFamilyName f 2023-07-12 19:17:18,718 INFO [StoreOpener-5d7c987e33e5a2c9ffcf3edd5c64d208-1] regionserver.HStore(310): Store=5d7c987e33e5a2c9ffcf3edd5c64d208/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, 
compression=NONE 2023-07-12 19:17:18,719 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/data/default/Group_testFailRemoveGroup/5d7c987e33e5a2c9ffcf3edd5c64d208 2023-07-12 19:17:18,720 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/data/default/Group_testFailRemoveGroup/5d7c987e33e5a2c9ffcf3edd5c64d208 2023-07-12 19:17:18,724 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1055): writing seq id for 5d7c987e33e5a2c9ffcf3edd5c64d208 2023-07-12 19:17:18,727 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/data/default/Group_testFailRemoveGroup/5d7c987e33e5a2c9ffcf3edd5c64d208/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-12 19:17:18,727 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1072): Opened 5d7c987e33e5a2c9ffcf3edd5c64d208; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11621610560, jitterRate=0.0823468267917633}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-12 19:17:18,728 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(965): Region open journal for 5d7c987e33e5a2c9ffcf3edd5c64d208: 2023-07-12 19:17:18,728 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testFailRemoveGroup,,1689189438323.5d7c987e33e5a2c9ffcf3edd5c64d208., pid=80, masterSystemTime=1689189438699 2023-07-12 19:17:18,730 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testFailRemoveGroup,,1689189438323.5d7c987e33e5a2c9ffcf3edd5c64d208. 2023-07-12 19:17:18,730 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(158): Opened Group_testFailRemoveGroup,,1689189438323.5d7c987e33e5a2c9ffcf3edd5c64d208. 
2023-07-12 19:17:18,731 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=79 updating hbase:meta row=5d7c987e33e5a2c9ffcf3edd5c64d208, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase20.apache.org,43021,1689189426641 2023-07-12 19:17:18,731 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testFailRemoveGroup,,1689189438323.5d7c987e33e5a2c9ffcf3edd5c64d208.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689189438731"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689189438731"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689189438731"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689189438731"}]},"ts":"1689189438731"} 2023-07-12 19:17:18,740 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=80, resume processing ppid=79 2023-07-12 19:17:18,740 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=80, ppid=79, state=SUCCESS; OpenRegionProcedure 5d7c987e33e5a2c9ffcf3edd5c64d208, server=jenkins-hbase20.apache.org,43021,1689189426641 in 187 msec 2023-07-12 19:17:18,743 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=79, resume processing ppid=78 2023-07-12 19:17:18,743 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=79, ppid=78, state=SUCCESS; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=5d7c987e33e5a2c9ffcf3edd5c64d208, ASSIGN in 351 msec 2023-07-12 19:17:18,745 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=78, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=Group_testFailRemoveGroup execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-12 19:17:18,745 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testFailRemoveGroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689189438745"}]},"ts":"1689189438745"} 2023-07-12 19:17:18,747 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=Group_testFailRemoveGroup, state=ENABLED in hbase:meta 2023-07-12 19:17:18,860 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=78, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=Group_testFailRemoveGroup execute state=CREATE_TABLE_POST_OPERATION 2023-07-12 19:17:18,862 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=78, state=SUCCESS; CreateTableProcedure table=Group_testFailRemoveGroup in 537 msec 2023-07-12 19:17:18,937 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.MasterRpcServices(1230): Checking to see if procedure is done pid=78 2023-07-12 19:17:18,937 INFO [Listener at localhost.localdomain/34239] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:Group_testFailRemoveGroup, procId: 78 completed 2023-07-12 19:17:18,937 DEBUG [Listener at localhost.localdomain/34239] hbase.HBaseTestingUtility(3430): Waiting until all regions of table Group_testFailRemoveGroup get assigned. Timeout = 60000ms 2023-07-12 19:17:18,938 INFO [Listener at localhost.localdomain/34239] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 19:17:18,954 INFO [Listener at localhost.localdomain/34239] hbase.HBaseTestingUtility(3484): All regions for table Group_testFailRemoveGroup assigned to meta. Checking AM states. 
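Editor's note: the CreateTableProcedure above (pid=78, finished in 537 msec) builds Group_testFailRemoveGroup with a single column family 'f' using the defaults printed in the HRegion(7675) entry, and the listener then waits for region assignment (HBaseTestingUtility(3430)/(3504)). The following is a minimal, hedged sketch of the equivalent client-side create request; the class name, connection setup, and the fact that the test drives this through the plain Admin API are assumptions, not taken from this log.

// Hedged sketch: create Group_testFailRemoveGroup with one family 'f'
// (the descriptor attributes in the log above are the 2.4 defaults).
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;

public class CreateGroupTestTable {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Admin admin = conn.getAdmin()) {
      TableName tn = TableName.valueOf("Group_testFailRemoveGroup");
      // Only the family name is set explicitly; VERSIONS=1, BLOOMFILTER,
      // BLOCKSIZE etc. shown in the log are default values.
      admin.createTable(TableDescriptorBuilder.newBuilder(tn)
          .setColumnFamily(ColumnFamilyDescriptorBuilder.of("f"))
          .build());
      // The test then blocks until the single region is assigned, as
      // logged by HBaseTestingUtility above.
    }
  }
}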
2023-07-12 19:17:18,955 INFO [Listener at localhost.localdomain/34239] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 19:17:18,955 INFO [Listener at localhost.localdomain/34239] hbase.HBaseTestingUtility(3504): All regions for table Group_testFailRemoveGroup assigned. 2023-07-12 19:17:18,958 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//148.251.75.209 move tables [Group_testFailRemoveGroup] to rsgroup bar 2023-07-12 19:17:18,961 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 19:17:18,962 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/bar 2023-07-12 19:17:18,962 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 19:17:18,962 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-12 19:17:18,966 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminServer(339): Moving region(s) for table Group_testFailRemoveGroup to RSGroup bar 2023-07-12 19:17:18,966 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminServer(345): Moving region 5d7c987e33e5a2c9ffcf3edd5c64d208 to RSGroup bar 2023-07-12 19:17:18,967 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase20.apache.org=0} racks are {/default-rack=0} 2023-07-12 19:17:18,967 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-12 19:17:18,967 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-12 19:17:18,967 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-12 19:17:18,967 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] balancer.BaseLoadBalancer$Cluster(362): server 3 is on host 0 2023-07-12 19:17:18,967 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-12 19:17:18,968 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] procedure2.ProcedureExecutor(1029): Stored pid=81, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=5d7c987e33e5a2c9ffcf3edd5c64d208, REOPEN/MOVE 2023-07-12 19:17:18,968 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group bar, current retry=0 2023-07-12 19:17:18,969 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=81, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=5d7c987e33e5a2c9ffcf3edd5c64d208, REOPEN/MOVE 2023-07-12 19:17:18,970 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=81 updating hbase:meta row=5d7c987e33e5a2c9ffcf3edd5c64d208, regionState=CLOSING, regionLocation=jenkins-hbase20.apache.org,43021,1689189426641 2023-07-12 19:17:18,970 DEBUG [PEWorker-5] 
assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testFailRemoveGroup,,1689189438323.5d7c987e33e5a2c9ffcf3edd5c64d208.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689189438970"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689189438970"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689189438970"}]},"ts":"1689189438970"} 2023-07-12 19:17:18,972 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=82, ppid=81, state=RUNNABLE; CloseRegionProcedure 5d7c987e33e5a2c9ffcf3edd5c64d208, server=jenkins-hbase20.apache.org,43021,1689189426641}] 2023-07-12 19:17:19,125 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.UnassignRegionHandler(111): Close 5d7c987e33e5a2c9ffcf3edd5c64d208 2023-07-12 19:17:19,126 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing 5d7c987e33e5a2c9ffcf3edd5c64d208, disabling compactions & flushes 2023-07-12 19:17:19,126 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region Group_testFailRemoveGroup,,1689189438323.5d7c987e33e5a2c9ffcf3edd5c64d208. 2023-07-12 19:17:19,126 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testFailRemoveGroup,,1689189438323.5d7c987e33e5a2c9ffcf3edd5c64d208. 2023-07-12 19:17:19,126 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testFailRemoveGroup,,1689189438323.5d7c987e33e5a2c9ffcf3edd5c64d208. after waiting 0 ms 2023-07-12 19:17:19,126 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testFailRemoveGroup,,1689189438323.5d7c987e33e5a2c9ffcf3edd5c64d208. 2023-07-12 19:17:19,140 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/data/default/Group_testFailRemoveGroup/5d7c987e33e5a2c9ffcf3edd5c64d208/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-12 19:17:19,140 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1838): Closed Group_testFailRemoveGroup,,1689189438323.5d7c987e33e5a2c9ffcf3edd5c64d208. 
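Editor's note: the close/reopen around this point (pid=81, REOPEN/MOVE) is the master acting on the "move tables [Group_testFailRemoveGroup] to rsgroup bar" request logged at 19:17:18,958. A hedged sketch of issuing that request through the hbase-rsgroup client is below; whether the test calls RSGroupAdminClient directly or goes through its own wrapper is an assumption.

// Hedged sketch: move the table into the pre-existing rsgroup 'bar'
// (the group itself was created before this log excerpt).
import java.util.Collections;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

public class MoveTableToBar {
  static void moveToBar(Connection conn) throws Exception {
    RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
    // Triggers the TransitRegionStateProcedure (REOPEN/MOVE) seen above:
    // the region is closed on its current server and reopened on a
    // server belonging to 'bar'.
    rsGroupAdmin.moveTables(
        Collections.singleton(TableName.valueOf("Group_testFailRemoveGroup")), "bar");
  }
}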
2023-07-12 19:17:19,141 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for 5d7c987e33e5a2c9ffcf3edd5c64d208: 2023-07-12 19:17:19,141 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(3513): Adding 5d7c987e33e5a2c9ffcf3edd5c64d208 move to jenkins-hbase20.apache.org,36311,1689189430768 record at close sequenceid=2 2023-07-12 19:17:19,142 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.UnassignRegionHandler(149): Closed 5d7c987e33e5a2c9ffcf3edd5c64d208 2023-07-12 19:17:19,143 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=81 updating hbase:meta row=5d7c987e33e5a2c9ffcf3edd5c64d208, regionState=CLOSED 2023-07-12 19:17:19,143 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testFailRemoveGroup,,1689189438323.5d7c987e33e5a2c9ffcf3edd5c64d208.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689189439143"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689189439143"}]},"ts":"1689189439143"} 2023-07-12 19:17:19,148 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=82, resume processing ppid=81 2023-07-12 19:17:19,148 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=82, ppid=81, state=SUCCESS; CloseRegionProcedure 5d7c987e33e5a2c9ffcf3edd5c64d208, server=jenkins-hbase20.apache.org,43021,1689189426641 in 173 msec 2023-07-12 19:17:19,149 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=81, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=5d7c987e33e5a2c9ffcf3edd5c64d208, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase20.apache.org,36311,1689189430768; forceNewPlan=false, retain=false 2023-07-12 19:17:19,299 INFO [jenkins-hbase20:33033] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 2023-07-12 19:17:19,300 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=81 updating hbase:meta row=5d7c987e33e5a2c9ffcf3edd5c64d208, regionState=OPENING, regionLocation=jenkins-hbase20.apache.org,36311,1689189430768 2023-07-12 19:17:19,300 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testFailRemoveGroup,,1689189438323.5d7c987e33e5a2c9ffcf3edd5c64d208.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689189439300"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689189439300"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689189439300"}]},"ts":"1689189439300"} 2023-07-12 19:17:19,302 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=83, ppid=81, state=RUNNABLE; OpenRegionProcedure 5d7c987e33e5a2c9ffcf3edd5c64d208, server=jenkins-hbase20.apache.org,36311,1689189430768}] 2023-07-12 19:17:19,459 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(130): Open Group_testFailRemoveGroup,,1689189438323.5d7c987e33e5a2c9ffcf3edd5c64d208. 
2023-07-12 19:17:19,459 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 5d7c987e33e5a2c9ffcf3edd5c64d208, NAME => 'Group_testFailRemoveGroup,,1689189438323.5d7c987e33e5a2c9ffcf3edd5c64d208.', STARTKEY => '', ENDKEY => ''} 2023-07-12 19:17:19,459 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testFailRemoveGroup 5d7c987e33e5a2c9ffcf3edd5c64d208 2023-07-12 19:17:19,460 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(866): Instantiated Group_testFailRemoveGroup,,1689189438323.5d7c987e33e5a2c9ffcf3edd5c64d208.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 19:17:19,460 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7894): checking encryption for 5d7c987e33e5a2c9ffcf3edd5c64d208 2023-07-12 19:17:19,460 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7897): checking classloading for 5d7c987e33e5a2c9ffcf3edd5c64d208 2023-07-12 19:17:19,461 INFO [StoreOpener-5d7c987e33e5a2c9ffcf3edd5c64d208-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 5d7c987e33e5a2c9ffcf3edd5c64d208 2023-07-12 19:17:19,463 DEBUG [StoreOpener-5d7c987e33e5a2c9ffcf3edd5c64d208-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/data/default/Group_testFailRemoveGroup/5d7c987e33e5a2c9ffcf3edd5c64d208/f 2023-07-12 19:17:19,463 DEBUG [StoreOpener-5d7c987e33e5a2c9ffcf3edd5c64d208-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/data/default/Group_testFailRemoveGroup/5d7c987e33e5a2c9ffcf3edd5c64d208/f 2023-07-12 19:17:19,464 INFO [StoreOpener-5d7c987e33e5a2c9ffcf3edd5c64d208-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 5d7c987e33e5a2c9ffcf3edd5c64d208 columnFamilyName f 2023-07-12 19:17:19,464 INFO [StoreOpener-5d7c987e33e5a2c9ffcf3edd5c64d208-1] regionserver.HStore(310): Store=5d7c987e33e5a2c9ffcf3edd5c64d208/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 19:17:19,466 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/data/default/Group_testFailRemoveGroup/5d7c987e33e5a2c9ffcf3edd5c64d208 2023-07-12 19:17:19,468 DEBUG 
[RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/data/default/Group_testFailRemoveGroup/5d7c987e33e5a2c9ffcf3edd5c64d208 2023-07-12 19:17:19,473 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1055): writing seq id for 5d7c987e33e5a2c9ffcf3edd5c64d208 2023-07-12 19:17:19,474 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1072): Opened 5d7c987e33e5a2c9ffcf3edd5c64d208; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11043625120, jitterRate=0.028517737984657288}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-12 19:17:19,474 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(965): Region open journal for 5d7c987e33e5a2c9ffcf3edd5c64d208: 2023-07-12 19:17:19,474 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testFailRemoveGroup,,1689189438323.5d7c987e33e5a2c9ffcf3edd5c64d208., pid=83, masterSystemTime=1689189439455 2023-07-12 19:17:19,476 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testFailRemoveGroup,,1689189438323.5d7c987e33e5a2c9ffcf3edd5c64d208. 2023-07-12 19:17:19,476 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(158): Opened Group_testFailRemoveGroup,,1689189438323.5d7c987e33e5a2c9ffcf3edd5c64d208. 2023-07-12 19:17:19,476 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=81 updating hbase:meta row=5d7c987e33e5a2c9ffcf3edd5c64d208, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase20.apache.org,36311,1689189430768 2023-07-12 19:17:19,476 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testFailRemoveGroup,,1689189438323.5d7c987e33e5a2c9ffcf3edd5c64d208.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689189439476"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689189439476"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689189439476"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689189439476"}]},"ts":"1689189439476"} 2023-07-12 19:17:19,479 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=83, resume processing ppid=81 2023-07-12 19:17:19,479 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=83, ppid=81, state=SUCCESS; OpenRegionProcedure 5d7c987e33e5a2c9ffcf3edd5c64d208, server=jenkins-hbase20.apache.org,36311,1689189430768 in 176 msec 2023-07-12 19:17:19,480 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=81, state=SUCCESS; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=5d7c987e33e5a2c9ffcf3edd5c64d208, REOPEN/MOVE in 512 msec 2023-07-12 19:17:19,618 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-hbase.properties,hadoop-metrics2.properties 2023-07-12 19:17:19,969 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] procedure.ProcedureSyncWait(216): waitFor pid=81 2023-07-12 19:17:19,970 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminServer(369): All regions from table(s) [Group_testFailRemoveGroup] moved to target 
group bar. 2023-07-12 19:17:19,970 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveTables 2023-07-12 19:17:19,974 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-12 19:17:19,974 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 19:17:19,977 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//148.251.75.209 initiates rsgroup info retrieval, group=bar 2023-07-12 19:17:19,977 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 19:17:19,978 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//148.251.75.209 remove rsgroup bar 2023-07-12 19:17:19,978 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup bar has 1 tables; you must remove these tables from the rsgroup before the rsgroup can be removed. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.removeRSGroup(RSGroupAdminServer.java:490) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.removeRSGroup(RSGroupAdminEndpoint.java:278) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16208) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-12 19:17:19,978 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] ipc.CallRunner(144): callId: 286 service: MasterService methodName: ExecMasterService size: 85 connection: 148.251.75.209:37696 deadline: 1689190639978, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup bar has 1 tables; you must remove these tables from the rsgroup before the rsgroup can be removed. 2023-07-12 19:17:19,979 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//148.251.75.209 move servers [jenkins-hbase20.apache.org:36311, jenkins-hbase20.apache.org:39963, jenkins-hbase20.apache.org:36571] to rsgroup default 2023-07-12 19:17:19,979 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Cannot leave a RSGroup bar that contains tables without servers to host them. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:428) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-12 19:17:19,979 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] ipc.CallRunner(144): callId: 288 service: MasterService methodName: ExecMasterService size: 191 connection: 148.251.75.209:37696 deadline: 1689190639979, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Cannot leave a RSGroup bar that contains tables without servers to host them. 2023-07-12 19:17:19,981 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//148.251.75.209 move tables [Group_testFailRemoveGroup] to rsgroup default 2023-07-12 19:17:19,984 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 19:17:19,984 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/bar 2023-07-12 19:17:19,985 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 19:17:19,985 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-12 19:17:19,986 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminServer(339): Moving region(s) for table Group_testFailRemoveGroup to RSGroup default 2023-07-12 19:17:19,986 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminServer(345): Moving region 5d7c987e33e5a2c9ffcf3edd5c64d208 to RSGroup default 2023-07-12 19:17:19,987 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] procedure2.ProcedureExecutor(1029): Stored pid=84, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=5d7c987e33e5a2c9ffcf3edd5c64d208, REOPEN/MOVE 2023-07-12 19:17:19,988 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group default, current retry=0 2023-07-12 19:17:19,989 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=84, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=5d7c987e33e5a2c9ffcf3edd5c64d208, REOPEN/MOVE 2023-07-12 19:17:19,989 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=84 updating hbase:meta row=5d7c987e33e5a2c9ffcf3edd5c64d208, regionState=CLOSING, regionLocation=jenkins-hbase20.apache.org,36311,1689189430768 2023-07-12 19:17:19,989 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"Group_testFailRemoveGroup,,1689189438323.5d7c987e33e5a2c9ffcf3edd5c64d208.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689189439989"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689189439989"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689189439989"}]},"ts":"1689189439989"} 2023-07-12 19:17:19,991 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=85, ppid=84, state=RUNNABLE; CloseRegionProcedure 5d7c987e33e5a2c9ffcf3edd5c64d208, server=jenkins-hbase20.apache.org,36311,1689189430768}] 2023-07-12 19:17:20,144 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.UnassignRegionHandler(111): Close 5d7c987e33e5a2c9ffcf3edd5c64d208 2023-07-12 19:17:20,145 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing 5d7c987e33e5a2c9ffcf3edd5c64d208, disabling compactions & flushes 2023-07-12 19:17:20,145 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region Group_testFailRemoveGroup,,1689189438323.5d7c987e33e5a2c9ffcf3edd5c64d208. 2023-07-12 19:17:20,145 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testFailRemoveGroup,,1689189438323.5d7c987e33e5a2c9ffcf3edd5c64d208. 2023-07-12 19:17:20,145 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testFailRemoveGroup,,1689189438323.5d7c987e33e5a2c9ffcf3edd5c64d208. after waiting 0 ms 2023-07-12 19:17:20,145 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testFailRemoveGroup,,1689189438323.5d7c987e33e5a2c9ffcf3edd5c64d208. 2023-07-12 19:17:20,149 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/data/default/Group_testFailRemoveGroup/5d7c987e33e5a2c9ffcf3edd5c64d208/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-12 19:17:20,151 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1838): Closed Group_testFailRemoveGroup,,1689189438323.5d7c987e33e5a2c9ffcf3edd5c64d208. 
2023-07-12 19:17:20,151 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for 5d7c987e33e5a2c9ffcf3edd5c64d208: 2023-07-12 19:17:20,151 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(3513): Adding 5d7c987e33e5a2c9ffcf3edd5c64d208 move to jenkins-hbase20.apache.org,43021,1689189426641 record at close sequenceid=5 2023-07-12 19:17:20,155 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.UnassignRegionHandler(149): Closed 5d7c987e33e5a2c9ffcf3edd5c64d208 2023-07-12 19:17:20,155 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=84 updating hbase:meta row=5d7c987e33e5a2c9ffcf3edd5c64d208, regionState=CLOSED 2023-07-12 19:17:20,155 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testFailRemoveGroup,,1689189438323.5d7c987e33e5a2c9ffcf3edd5c64d208.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689189440155"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689189440155"}]},"ts":"1689189440155"} 2023-07-12 19:17:20,159 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=85, resume processing ppid=84 2023-07-12 19:17:20,159 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=85, ppid=84, state=SUCCESS; CloseRegionProcedure 5d7c987e33e5a2c9ffcf3edd5c64d208, server=jenkins-hbase20.apache.org,36311,1689189430768 in 166 msec 2023-07-12 19:17:20,160 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=84, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=5d7c987e33e5a2c9ffcf3edd5c64d208, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase20.apache.org,43021,1689189426641; forceNewPlan=false, retain=false 2023-07-12 19:17:20,310 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=84 updating hbase:meta row=5d7c987e33e5a2c9ffcf3edd5c64d208, regionState=OPENING, regionLocation=jenkins-hbase20.apache.org,43021,1689189426641 2023-07-12 19:17:20,311 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testFailRemoveGroup,,1689189438323.5d7c987e33e5a2c9ffcf3edd5c64d208.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689189440310"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689189440310"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689189440310"}]},"ts":"1689189440310"} 2023-07-12 19:17:20,312 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=86, ppid=84, state=RUNNABLE; OpenRegionProcedure 5d7c987e33e5a2c9ffcf3edd5c64d208, server=jenkins-hbase20.apache.org,43021,1689189426641}] 2023-07-12 19:17:20,468 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(130): Open Group_testFailRemoveGroup,,1689189438323.5d7c987e33e5a2c9ffcf3edd5c64d208. 
2023-07-12 19:17:20,469 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 5d7c987e33e5a2c9ffcf3edd5c64d208, NAME => 'Group_testFailRemoveGroup,,1689189438323.5d7c987e33e5a2c9ffcf3edd5c64d208.', STARTKEY => '', ENDKEY => ''} 2023-07-12 19:17:20,469 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testFailRemoveGroup 5d7c987e33e5a2c9ffcf3edd5c64d208 2023-07-12 19:17:20,469 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(866): Instantiated Group_testFailRemoveGroup,,1689189438323.5d7c987e33e5a2c9ffcf3edd5c64d208.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 19:17:20,469 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7894): checking encryption for 5d7c987e33e5a2c9ffcf3edd5c64d208 2023-07-12 19:17:20,469 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7897): checking classloading for 5d7c987e33e5a2c9ffcf3edd5c64d208 2023-07-12 19:17:20,471 INFO [StoreOpener-5d7c987e33e5a2c9ffcf3edd5c64d208-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 5d7c987e33e5a2c9ffcf3edd5c64d208 2023-07-12 19:17:20,472 DEBUG [StoreOpener-5d7c987e33e5a2c9ffcf3edd5c64d208-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/data/default/Group_testFailRemoveGroup/5d7c987e33e5a2c9ffcf3edd5c64d208/f 2023-07-12 19:17:20,472 DEBUG [StoreOpener-5d7c987e33e5a2c9ffcf3edd5c64d208-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/data/default/Group_testFailRemoveGroup/5d7c987e33e5a2c9ffcf3edd5c64d208/f 2023-07-12 19:17:20,472 INFO [StoreOpener-5d7c987e33e5a2c9ffcf3edd5c64d208-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 5d7c987e33e5a2c9ffcf3edd5c64d208 columnFamilyName f 2023-07-12 19:17:20,473 INFO [StoreOpener-5d7c987e33e5a2c9ffcf3edd5c64d208-1] regionserver.HStore(310): Store=5d7c987e33e5a2c9ffcf3edd5c64d208/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 19:17:20,474 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/data/default/Group_testFailRemoveGroup/5d7c987e33e5a2c9ffcf3edd5c64d208 2023-07-12 19:17:20,475 DEBUG 
[RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/data/default/Group_testFailRemoveGroup/5d7c987e33e5a2c9ffcf3edd5c64d208 2023-07-12 19:17:20,478 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1055): writing seq id for 5d7c987e33e5a2c9ffcf3edd5c64d208 2023-07-12 19:17:20,479 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1072): Opened 5d7c987e33e5a2c9ffcf3edd5c64d208; next sequenceid=8; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11123041920, jitterRate=0.03591400384902954}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-12 19:17:20,479 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(965): Region open journal for 5d7c987e33e5a2c9ffcf3edd5c64d208: 2023-07-12 19:17:20,480 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testFailRemoveGroup,,1689189438323.5d7c987e33e5a2c9ffcf3edd5c64d208., pid=86, masterSystemTime=1689189440464 2023-07-12 19:17:20,481 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testFailRemoveGroup,,1689189438323.5d7c987e33e5a2c9ffcf3edd5c64d208. 2023-07-12 19:17:20,481 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(158): Opened Group_testFailRemoveGroup,,1689189438323.5d7c987e33e5a2c9ffcf3edd5c64d208. 2023-07-12 19:17:20,482 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=84 updating hbase:meta row=5d7c987e33e5a2c9ffcf3edd5c64d208, regionState=OPEN, openSeqNum=8, regionLocation=jenkins-hbase20.apache.org,43021,1689189426641 2023-07-12 19:17:20,482 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testFailRemoveGroup,,1689189438323.5d7c987e33e5a2c9ffcf3edd5c64d208.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689189440482"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689189440482"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689189440482"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689189440482"}]},"ts":"1689189440482"} 2023-07-12 19:17:20,485 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=86, resume processing ppid=84 2023-07-12 19:17:20,485 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=86, ppid=84, state=SUCCESS; OpenRegionProcedure 5d7c987e33e5a2c9ffcf3edd5c64d208, server=jenkins-hbase20.apache.org,43021,1689189426641 in 171 msec 2023-07-12 19:17:20,486 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=84, state=SUCCESS; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=5d7c987e33e5a2c9ffcf3edd5c64d208, REOPEN/MOVE in 498 msec 2023-07-12 19:17:20,986 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'Group_testFailRemoveGroup' 2023-07-12 19:17:20,986 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:namespace' 2023-07-12 19:17:20,989 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] procedure.ProcedureSyncWait(216): waitFor pid=84 2023-07-12 19:17:20,989 INFO 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminServer(369): All regions from table(s) [Group_testFailRemoveGroup] moved to target group default. 2023-07-12 19:17:20,989 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveTables 2023-07-12 19:17:20,992 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-12 19:17:20,993 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 19:17:20,996 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//148.251.75.209 remove rsgroup bar 2023-07-12 19:17:20,996 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup bar has 3 servers; you must remove these servers from the RSGroup before the RSGroup can be removed. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.removeRSGroup(RSGroupAdminServer.java:496) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.removeRSGroup(RSGroupAdminEndpoint.java:278) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16208) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-12 19:17:20,996 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] ipc.CallRunner(144): callId: 295 service: MasterService methodName: ExecMasterService size: 85 connection: 148.251.75.209:37696 deadline: 1689190640996, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup bar has 3 servers; you must remove these servers from the RSGroup before the RSGroup can be removed. 
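Editor's note: the ConstraintException entries above are the "fail" part of testFailRemoveGroup: removing 'bar' is rejected while it still holds a table (19:17:19,978), draining its servers is rejected for the same reason (19:17:19,979), and even after the table is moved back to 'default' the group cannot be removed while it still owns servers (19:17:20,996). A hedged sketch of how a test might assert one of these rejections follows; the helper name and structure are illustrative, not taken from the actual test source, and the same pattern would apply to the moveServers and second removeRSGroup attempts.

// Hedged sketch: removeRSGroup is expected to be rejected with a
// ConstraintException while rsgroup 'bar' still owns the table.
import static org.junit.Assert.fail;
import org.apache.hadoop.hbase.constraint.ConstraintException;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

public class ExpectRemoveGroupFailure {
  static void assertRemoveRejected(RSGroupAdminClient rsGroupAdmin) throws Exception {
    try {
      rsGroupAdmin.removeRSGroup("bar");   // group still has 1 table
      fail("removeRSGroup should have been rejected");
    } catch (ConstraintException expected) {
      // Matches "RSGroup bar has 1 tables; you must remove these tables
      // from the rsgroup before the rsgroup can be removed."
    }
  }
}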
2023-07-12 19:17:20,997 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//148.251.75.209 move servers [jenkins-hbase20.apache.org:36311, jenkins-hbase20.apache.org:39963, jenkins-hbase20.apache.org:36571] to rsgroup default 2023-07-12 19:17:21,001 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 19:17:21,001 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/bar 2023-07-12 19:17:21,002 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 19:17:21,003 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-12 19:17:21,005 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group bar, current retry=0 2023-07-12 19:17:21,005 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase20.apache.org,36311,1689189430768, jenkins-hbase20.apache.org,36571,1689189426727, jenkins-hbase20.apache.org,39963,1689189426501] are moved back to bar 2023-07-12 19:17:21,005 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminServer(438): Move servers done: bar => default 2023-07-12 19:17:21,005 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveServers 2023-07-12 19:17:21,008 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-12 19:17:21,009 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 19:17:21,019 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//148.251.75.209 remove rsgroup bar 2023-07-12 19:17:21,021 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=39963] ipc.CallRunner(144): callId: 210 service: ClientService methodName: Scan size: 147 connection: 148.251.75.209:32954 deadline: 1689189501020, exception=org.apache.hadoop.hbase.exceptions.RegionMovedException: Region moved to: hostname=jenkins-hbase20.apache.org port=43021 startCode=1689189426641. As of locationSeqNum=10. 
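Editor's note: once the table is back in 'default', the MoveServers request above succeeds ("Move servers done: bar => default"), which clears the way for the RemoveRSGroup call that follows (the ZK GroupInfo count drops from 6 to 5 below). A hedged sketch of that final sequence, with the server addresses taken from the log; the method of obtaining the RSGroupAdminClient is an assumption.

// Hedged sketch: drain 'bar' of its three servers, then remove it.
import java.util.HashSet;
import java.util.Set;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

public class DrainAndRemoveBar {
  static void drainAndRemove(RSGroupAdminClient rsGroupAdmin) throws Exception {
    Set<Address> servers = new HashSet<>();
    servers.add(Address.fromParts("jenkins-hbase20.apache.org", 36311));
    servers.add(Address.fromParts("jenkins-hbase20.apache.org", 39963));
    servers.add(Address.fromParts("jenkins-hbase20.apache.org", 36571));
    // Succeeds now that no table is hosted in 'bar'.
    rsGroupAdmin.moveServers(servers, "default");
    // With no tables and no servers left, the group can be dropped.
    rsGroupAdmin.removeRSGroup("bar");
  }
}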
2023-07-12 19:17:21,130 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 19:17:21,131 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 19:17:21,131 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-12 19:17:21,133 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-12 19:17:21,136 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-12 19:17:21,136 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 19:17:21,139 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-12 19:17:21,139 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 19:17:21,141 INFO [Listener at localhost.localdomain/34239] client.HBaseAdmin$15(890): Started disable of Group_testFailRemoveGroup 2023-07-12 19:17:21,141 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.HMaster$11(2418): Client=jenkins//148.251.75.209 disable Group_testFailRemoveGroup 2023-07-12 19:17:21,142 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] procedure2.ProcedureExecutor(1029): Stored pid=87, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=Group_testFailRemoveGroup 2023-07-12 19:17:21,145 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.MasterRpcServices(1230): Checking to see if procedure is done pid=87 2023-07-12 19:17:21,146 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testFailRemoveGroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689189441146"}]},"ts":"1689189441146"} 2023-07-12 19:17:21,147 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=Group_testFailRemoveGroup, state=DISABLING in hbase:meta 2023-07-12 19:17:21,149 INFO [PEWorker-5] procedure.DisableTableProcedure(293): Set Group_testFailRemoveGroup to state=DISABLING 2023-07-12 19:17:21,149 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=88, ppid=87, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=5d7c987e33e5a2c9ffcf3edd5c64d208, UNASSIGN}] 2023-07-12 19:17:21,151 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=88, ppid=87, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=5d7c987e33e5a2c9ffcf3edd5c64d208, UNASSIGN 2023-07-12 19:17:21,152 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=88 updating hbase:meta row=5d7c987e33e5a2c9ffcf3edd5c64d208, regionState=CLOSING, 
regionLocation=jenkins-hbase20.apache.org,43021,1689189426641 2023-07-12 19:17:21,152 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testFailRemoveGroup,,1689189438323.5d7c987e33e5a2c9ffcf3edd5c64d208.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689189441152"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689189441152"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689189441152"}]},"ts":"1689189441152"} 2023-07-12 19:17:21,155 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=89, ppid=88, state=RUNNABLE; CloseRegionProcedure 5d7c987e33e5a2c9ffcf3edd5c64d208, server=jenkins-hbase20.apache.org,43021,1689189426641}] 2023-07-12 19:17:21,246 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.MasterRpcServices(1230): Checking to see if procedure is done pid=87 2023-07-12 19:17:21,307 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.UnassignRegionHandler(111): Close 5d7c987e33e5a2c9ffcf3edd5c64d208 2023-07-12 19:17:21,310 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing 5d7c987e33e5a2c9ffcf3edd5c64d208, disabling compactions & flushes 2023-07-12 19:17:21,310 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region Group_testFailRemoveGroup,,1689189438323.5d7c987e33e5a2c9ffcf3edd5c64d208. 2023-07-12 19:17:21,310 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testFailRemoveGroup,,1689189438323.5d7c987e33e5a2c9ffcf3edd5c64d208. 2023-07-12 19:17:21,310 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testFailRemoveGroup,,1689189438323.5d7c987e33e5a2c9ffcf3edd5c64d208. after waiting 0 ms 2023-07-12 19:17:21,310 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testFailRemoveGroup,,1689189438323.5d7c987e33e5a2c9ffcf3edd5c64d208. 2023-07-12 19:17:21,314 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/data/default/Group_testFailRemoveGroup/5d7c987e33e5a2c9ffcf3edd5c64d208/recovered.edits/10.seqid, newMaxSeqId=10, maxSeqId=7 2023-07-12 19:17:21,315 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1838): Closed Group_testFailRemoveGroup,,1689189438323.5d7c987e33e5a2c9ffcf3edd5c64d208. 
2023-07-12 19:17:21,315 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for 5d7c987e33e5a2c9ffcf3edd5c64d208: 2023-07-12 19:17:21,316 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.UnassignRegionHandler(149): Closed 5d7c987e33e5a2c9ffcf3edd5c64d208 2023-07-12 19:17:21,316 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=88 updating hbase:meta row=5d7c987e33e5a2c9ffcf3edd5c64d208, regionState=CLOSED 2023-07-12 19:17:21,317 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testFailRemoveGroup,,1689189438323.5d7c987e33e5a2c9ffcf3edd5c64d208.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689189441316"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689189441316"}]},"ts":"1689189441316"} 2023-07-12 19:17:21,320 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=89, resume processing ppid=88 2023-07-12 19:17:21,320 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=89, ppid=88, state=SUCCESS; CloseRegionProcedure 5d7c987e33e5a2c9ffcf3edd5c64d208, server=jenkins-hbase20.apache.org,43021,1689189426641 in 165 msec 2023-07-12 19:17:21,321 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=88, resume processing ppid=87 2023-07-12 19:17:21,321 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=88, ppid=87, state=SUCCESS; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=5d7c987e33e5a2c9ffcf3edd5c64d208, UNASSIGN in 171 msec 2023-07-12 19:17:21,322 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testFailRemoveGroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689189441322"}]},"ts":"1689189441322"} 2023-07-12 19:17:21,323 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=Group_testFailRemoveGroup, state=DISABLED in hbase:meta 2023-07-12 19:17:21,325 INFO [PEWorker-5] procedure.DisableTableProcedure(305): Set Group_testFailRemoveGroup to state=DISABLED 2023-07-12 19:17:21,327 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=87, state=SUCCESS; DisableTableProcedure table=Group_testFailRemoveGroup in 185 msec 2023-07-12 19:17:21,448 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.MasterRpcServices(1230): Checking to see if procedure is done pid=87 2023-07-12 19:17:21,448 INFO [Listener at localhost.localdomain/34239] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:Group_testFailRemoveGroup, procId: 87 completed 2023-07-12 19:17:21,449 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.HMaster$5(2228): Client=jenkins//148.251.75.209 delete Group_testFailRemoveGroup 2023-07-12 19:17:21,450 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] procedure2.ProcedureExecutor(1029): Stored pid=90, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=Group_testFailRemoveGroup 2023-07-12 19:17:21,452 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=90, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=Group_testFailRemoveGroup 2023-07-12 19:17:21,452 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'Group_testFailRemoveGroup' from rsgroup 'default' 2023-07-12 19:17:21,452 DEBUG [PEWorker-1] 
procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=90, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=Group_testFailRemoveGroup 2023-07-12 19:17:21,454 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 19:17:21,454 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 19:17:21,455 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-12 19:17:21,456 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/.tmp/data/default/Group_testFailRemoveGroup/5d7c987e33e5a2c9ffcf3edd5c64d208 2023-07-12 19:17:21,456 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.MasterRpcServices(1230): Checking to see if procedure is done pid=90 2023-07-12 19:17:21,458 DEBUG [HFileArchiver-5] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/.tmp/data/default/Group_testFailRemoveGroup/5d7c987e33e5a2c9ffcf3edd5c64d208/f, FileablePath, hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/.tmp/data/default/Group_testFailRemoveGroup/5d7c987e33e5a2c9ffcf3edd5c64d208/recovered.edits] 2023-07-12 19:17:21,463 DEBUG [HFileArchiver-5] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/.tmp/data/default/Group_testFailRemoveGroup/5d7c987e33e5a2c9ffcf3edd5c64d208/recovered.edits/10.seqid to hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/archive/data/default/Group_testFailRemoveGroup/5d7c987e33e5a2c9ffcf3edd5c64d208/recovered.edits/10.seqid 2023-07-12 19:17:21,464 DEBUG [HFileArchiver-5] backup.HFileArchiver(596): Deleted hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/.tmp/data/default/Group_testFailRemoveGroup/5d7c987e33e5a2c9ffcf3edd5c64d208 2023-07-12 19:17:21,464 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(328): Archived Group_testFailRemoveGroup regions 2023-07-12 19:17:21,466 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=90, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=Group_testFailRemoveGroup 2023-07-12 19:17:21,469 WARN [PEWorker-1] procedure.DeleteTableProcedure(384): Deleting some vestigial 1 rows of Group_testFailRemoveGroup from hbase:meta 2023-07-12 19:17:21,471 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(421): Removing 'Group_testFailRemoveGroup' descriptor. 2023-07-12 19:17:21,473 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=90, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=Group_testFailRemoveGroup 2023-07-12 19:17:21,473 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(411): Removing 'Group_testFailRemoveGroup' from region states. 
2023-07-12 19:17:21,473 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testFailRemoveGroup,,1689189438323.5d7c987e33e5a2c9ffcf3edd5c64d208.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689189441473"}]},"ts":"9223372036854775807"} 2023-07-12 19:17:21,475 INFO [PEWorker-1] hbase.MetaTableAccessor(1788): Deleted 1 regions from META 2023-07-12 19:17:21,475 DEBUG [PEWorker-1] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => 5d7c987e33e5a2c9ffcf3edd5c64d208, NAME => 'Group_testFailRemoveGroup,,1689189438323.5d7c987e33e5a2c9ffcf3edd5c64d208.', STARTKEY => '', ENDKEY => ''}] 2023-07-12 19:17:21,475 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(415): Marking 'Group_testFailRemoveGroup' as deleted. 2023-07-12 19:17:21,475 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testFailRemoveGroup","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689189441475"}]},"ts":"9223372036854775807"} 2023-07-12 19:17:21,477 INFO [PEWorker-1] hbase.MetaTableAccessor(1658): Deleted table Group_testFailRemoveGroup state from META 2023-07-12 19:17:21,478 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(130): Finished pid=90, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=Group_testFailRemoveGroup 2023-07-12 19:17:21,479 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=90, state=SUCCESS; DeleteTableProcedure table=Group_testFailRemoveGroup in 29 msec 2023-07-12 19:17:21,558 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.MasterRpcServices(1230): Checking to see if procedure is done pid=90 2023-07-12 19:17:21,558 INFO [Listener at localhost.localdomain/34239] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: default:Group_testFailRemoveGroup, procId: 90 completed 2023-07-12 19:17:21,561 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-12 19:17:21,562 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 19:17:21,562 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//148.251.75.209 move tables [] to rsgroup default 2023-07-12 19:17:21,562 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-12 19:17:21,563 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveTables 2023-07-12 19:17:21,563 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//148.251.75.209 move servers [] to rsgroup default 2023-07-12 19:17:21,563 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveServers 2023-07-12 19:17:21,564 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//148.251.75.209 remove rsgroup master 2023-07-12 19:17:21,568 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 19:17:21,568 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-12 19:17:21,570 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-12 19:17:21,574 INFO [Listener at localhost.localdomain/34239] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-12 19:17:21,575 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//148.251.75.209 add rsgroup master 2023-07-12 19:17:21,578 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 19:17:21,578 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 19:17:21,584 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-12 19:17:21,590 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.AddRSGroup 2023-07-12 19:17:21,595 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-12 19:17:21,595 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 19:17:21,597 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//148.251.75.209 move servers [jenkins-hbase20.apache.org:33033] to rsgroup master 2023-07-12 19:17:21,597 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:33033 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-12 19:17:21,597 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] ipc.CallRunner(144): callId: 343 service: MasterService methodName: ExecMasterService size: 120 connection: 148.251.75.209:37696 deadline: 1689190641597, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:33033 is either offline or it does not exist. 2023-07-12 19:17:21,598 WARN [Listener at localhost.localdomain/34239] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:33033 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:33033 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-12 19:17:21,600 INFO [Listener at localhost.localdomain/34239] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 19:17:21,600 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-12 19:17:21,601 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 19:17:21,601 INFO [Listener at localhost.localdomain/34239] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase20.apache.org:36311, jenkins-hbase20.apache.org:36571, jenkins-hbase20.apache.org:39963, jenkins-hbase20.apache.org:43021], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-12 19:17:21,602 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//148.251.75.209 initiates rsgroup info retrieval, group=default 2023-07-12 19:17:21,602 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 19:17:21,621 INFO [Listener at localhost.localdomain/34239] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testFailRemoveGroup Thread=505 (was 500) Potentially hanging thread: hconnection-0x25a58e9d-shared-pool-11 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x2eb50d9d-shared-pool-16 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Async disk worker #0 for volume /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/29ea73fb-101e-b512-aded-a1ff34bb26e9/cluster_71dbf4f1-3f31-d11c-63a5-d05d19764ad1/dfs/data/data3/current sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-534539601_17 at /127.0.0.1:43752 [Waiting for operation #5] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x2eb50d9d-shared-pool-18 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x2eb50d9d-shared-pool-15 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Async disk worker #0 for volume /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/29ea73fb-101e-b512-aded-a1ff34bb26e9/cluster_71dbf4f1-3f31-d11c-63a5-d05d19764ad1/dfs/data/data1/current sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x7e8a142d-shared-pool-2 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1134464687_17 at /127.0.0.1:34932 [Waiting for operation #3] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-534539601_17 at /127.0.0.1:53020 [Waiting for operation #6] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) 
java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x2eb50d9d-shared-pool-13 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x25a58e9d-shared-pool-9 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Async disk worker #0 for volume /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/29ea73fb-101e-b512-aded-a1ff34bb26e9/cluster_71dbf4f1-3f31-d11c-63a5-d05d19764ad1/dfs/data/data2/current sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x25a58e9d-shared-pool-10 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x2eb50d9d-shared-pool-14 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x25a58e9d-shared-pool-12 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x2eb50d9d-shared-pool-17 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Async disk worker #0 for volume /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/29ea73fb-101e-b512-aded-a1ff34bb26e9/cluster_71dbf4f1-3f31-d11c-63a5-d05d19764ad1/dfs/data/data4/current sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1134464687_17 at /127.0.0.1:57918 [Waiting for operation #2] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) - Thread LEAK? 
-, OpenFileDescriptor=782 (was 782), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=462 (was 467), ProcessCount=169 (was 169), AvailableMemoryMB=3791 (was 3944) 2023-07-12 19:17:21,621 WARN [Listener at localhost.localdomain/34239] hbase.ResourceChecker(130): Thread=505 is superior to 500 2023-07-12 19:17:21,640 INFO [Listener at localhost.localdomain/34239] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testMultiTableMove Thread=505, OpenFileDescriptor=782, MaxFileDescriptor=60000, SystemLoadAverage=462, ProcessCount=169, AvailableMemoryMB=3790 2023-07-12 19:17:21,640 WARN [Listener at localhost.localdomain/34239] hbase.ResourceChecker(130): Thread=505 is superior to 500 2023-07-12 19:17:21,640 INFO [Listener at localhost.localdomain/34239] rsgroup.TestRSGroupsBase(132): testMultiTableMove 2023-07-12 19:17:21,644 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-12 19:17:21,644 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 19:17:21,645 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//148.251.75.209 move tables [] to rsgroup default 2023-07-12 19:17:21,645 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-12 19:17:21,645 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveTables 2023-07-12 19:17:21,646 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//148.251.75.209 move servers [] to rsgroup default 2023-07-12 19:17:21,646 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveServers 2023-07-12 19:17:21,647 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//148.251.75.209 remove rsgroup master 2023-07-12 19:17:21,651 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 19:17:21,652 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-12 19:17:21,653 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-12 19:17:21,656 INFO [Listener at localhost.localdomain/34239] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-12 19:17:21,657 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//148.251.75.209 add rsgroup master 2023-07-12 19:17:21,659 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: 
/hbase/rsgroup/default 2023-07-12 19:17:21,660 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 19:17:21,676 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-12 19:17:21,677 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.AddRSGroup 2023-07-12 19:17:21,682 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-12 19:17:21,682 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 19:17:21,686 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//148.251.75.209 move servers [jenkins-hbase20.apache.org:33033] to rsgroup master 2023-07-12 19:17:21,686 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:33033 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-12 19:17:21,686 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] ipc.CallRunner(144): callId: 371 service: MasterService methodName: ExecMasterService size: 120 connection: 148.251.75.209:37696 deadline: 1689190641686, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:33033 is either offline or it does not exist. 2023-07-12 19:17:21,687 WARN [Listener at localhost.localdomain/34239] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:33033 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at 
org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:33033 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-12 19:17:21,692 INFO [Listener at localhost.localdomain/34239] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 19:17:21,693 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-12 19:17:21,693 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 19:17:21,694 INFO [Listener at localhost.localdomain/34239] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase20.apache.org:36311, jenkins-hbase20.apache.org:36571, jenkins-hbase20.apache.org:39963, jenkins-hbase20.apache.org:43021], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-12 19:17:21,694 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//148.251.75.209 initiates rsgroup info retrieval, group=default 2023-07-12 19:17:21,695 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 19:17:21,696 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//148.251.75.209 initiates rsgroup info retrieval, group=default 2023-07-12 19:17:21,696 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 19:17:21,698 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//148.251.75.209 add rsgroup Group_testMultiTableMove_140681636 2023-07-12 19:17:21,700 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_140681636 2023-07-12 19:17:21,702 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 19:17:21,702 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 19:17:21,703 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-12 19:17:21,704 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.AddRSGroup 2023-07-12 19:17:21,707 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-12 19:17:21,708 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 19:17:21,710 INFO 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//148.251.75.209 move servers [jenkins-hbase20.apache.org:36311] to rsgroup Group_testMultiTableMove_140681636 2023-07-12 19:17:21,713 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 19:17:21,713 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_140681636 2023-07-12 19:17:21,714 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 19:17:21,714 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-12 19:17:21,722 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group default, current retry=0 2023-07-12 19:17:21,722 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase20.apache.org,36311,1689189430768] are moved back to default 2023-07-12 19:17:21,722 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminServer(438): Move servers done: default => Group_testMultiTableMove_140681636 2023-07-12 19:17:21,722 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveServers 2023-07-12 19:17:21,726 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-12 19:17:21,726 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 19:17:21,728 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//148.251.75.209 initiates rsgroup info retrieval, group=Group_testMultiTableMove_140681636 2023-07-12 19:17:21,728 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 19:17:21,730 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.HMaster$4(2112): Client=jenkins//148.251.75.209 create 'GrouptestMultiTableMoveA', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-12 19:17:21,731 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] procedure2.ProcedureExecutor(1029): Stored pid=91, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=GrouptestMultiTableMoveA 2023-07-12 19:17:21,733 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=91, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; 
CreateTableProcedure table=GrouptestMultiTableMoveA execute state=CREATE_TABLE_PRE_OPERATION 2023-07-12 19:17:21,733 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.MasterRpcServices(700): Client=jenkins//148.251.75.209 procedure request for creating table: namespace: "default" qualifier: "GrouptestMultiTableMoveA" procId is: 91 2023-07-12 19:17:21,734 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.MasterRpcServices(1230): Checking to see if procedure is done pid=91 2023-07-12 19:17:21,735 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 19:17:21,736 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_140681636 2023-07-12 19:17:21,736 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 19:17:21,736 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-12 19:17:21,738 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=91, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveA execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-12 19:17:21,740 DEBUG [HFileArchiver-7] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/.tmp/data/default/GrouptestMultiTableMoveA/61fb0b57110ece6c2acd9d38f7a4a27d 2023-07-12 19:17:21,741 DEBUG [HFileArchiver-7] backup.HFileArchiver(153): Directory hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/.tmp/data/default/GrouptestMultiTableMoveA/61fb0b57110ece6c2acd9d38f7a4a27d empty. 2023-07-12 19:17:21,741 DEBUG [HFileArchiver-7] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/.tmp/data/default/GrouptestMultiTableMoveA/61fb0b57110ece6c2acd9d38f7a4a27d 2023-07-12 19:17:21,741 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived GrouptestMultiTableMoveA regions 2023-07-12 19:17:21,758 DEBUG [PEWorker-2] util.FSTableDescriptors(570): Wrote into hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/.tmp/data/default/GrouptestMultiTableMoveA/.tabledesc/.tableinfo.0000000001 2023-07-12 19:17:21,759 INFO [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(7675): creating {ENCODED => 61fb0b57110ece6c2acd9d38f7a4a27d, NAME => 'GrouptestMultiTableMoveA,,1689189441730.61fb0b57110ece6c2acd9d38f7a4a27d.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='GrouptestMultiTableMoveA', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/.tmp 2023-07-12 19:17:21,770 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(866): Instantiated GrouptestMultiTableMoveA,,1689189441730.61fb0b57110ece6c2acd9d38f7a4a27d.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 19:17:21,770 DEBUG 
[RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1604): Closing 61fb0b57110ece6c2acd9d38f7a4a27d, disabling compactions & flushes 2023-07-12 19:17:21,770 INFO [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1626): Closing region GrouptestMultiTableMoveA,,1689189441730.61fb0b57110ece6c2acd9d38f7a4a27d. 2023-07-12 19:17:21,770 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on GrouptestMultiTableMoveA,,1689189441730.61fb0b57110ece6c2acd9d38f7a4a27d. 2023-07-12 19:17:21,770 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1714): Acquired close lock on GrouptestMultiTableMoveA,,1689189441730.61fb0b57110ece6c2acd9d38f7a4a27d. after waiting 0 ms 2023-07-12 19:17:21,770 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1724): Updates disabled for region GrouptestMultiTableMoveA,,1689189441730.61fb0b57110ece6c2acd9d38f7a4a27d. 2023-07-12 19:17:21,770 INFO [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1838): Closed GrouptestMultiTableMoveA,,1689189441730.61fb0b57110ece6c2acd9d38f7a4a27d. 2023-07-12 19:17:21,770 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1558): Region close journal for 61fb0b57110ece6c2acd9d38f7a4a27d: 2023-07-12 19:17:21,772 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=91, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveA execute state=CREATE_TABLE_ADD_TO_META 2023-07-12 19:17:21,773 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"GrouptestMultiTableMoveA,,1689189441730.61fb0b57110ece6c2acd9d38f7a4a27d.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689189441773"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689189441773"}]},"ts":"1689189441773"} 2023-07-12 19:17:21,774 INFO [PEWorker-2] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 
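The AddRSGroup, MoveServers and GetRSGroupInfo requests logged above are issued by the test's rsgroup admin client rather than generated by the master itself. As a rough illustration only (the test's own source is not part of this log), the client-side calls that produce RPCs like these look roughly as follows; `conn` is an assumed open Connection to the mini cluster, GroupSetupSketch is a hypothetical wrapper class, and RSGroupAdminClient, Address and RSGroupInfo are the hbase-rsgroup client types shipped with branch-2.4. The group name and the server host:port are copied from the log lines above.

    import java.io.IOException;
    import java.util.Collections;

    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.net.Address;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdmin;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
    import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

    class GroupSetupSketch {
      static RSGroupAdmin setUpGroup(Connection conn) throws IOException {
        RSGroupAdmin rsGroupAdmin = new RSGroupAdminClient(conn);

        // AddRSGroup, as logged by RSGroupAdminEndpoint above.
        rsGroupAdmin.addRSGroup("Group_testMultiTableMove_140681636");

        // MoveServers: take one region server out of 'default' and put it into
        // the new group; host and port are the ones shown in the log.
        Address server = Address.fromParts("jenkins-hbase20.apache.org", 36311);
        rsGroupAdmin.moveServers(Collections.singleton(server),
            "Group_testMultiTableMove_140681636");

        // GetRSGroupInfo returns the group's current servers and tables.
        RSGroupInfo info = rsGroupAdmin.getRSGroupInfo("Group_testMultiTableMove_140681636");
        System.out.println("servers now in group: " + info.getServers());
        return rsGroupAdmin;
      }
    }
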
2023-07-12 19:17:21,775 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=91, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveA execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-12 19:17:21,776 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveA","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689189441775"}]},"ts":"1689189441775"} 2023-07-12 19:17:21,777 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveA, state=ENABLING in hbase:meta 2023-07-12 19:17:21,781 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase20.apache.org=0} racks are {/default-rack=0} 2023-07-12 19:17:21,781 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-12 19:17:21,781 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-12 19:17:21,781 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-12 19:17:21,781 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-12 19:17:21,781 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=92, ppid=91, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=61fb0b57110ece6c2acd9d38f7a4a27d, ASSIGN}] 2023-07-12 19:17:21,784 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=92, ppid=91, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=61fb0b57110ece6c2acd9d38f7a4a27d, ASSIGN 2023-07-12 19:17:21,786 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=92, ppid=91, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=61fb0b57110ece6c2acd9d38f7a4a27d, ASSIGN; state=OFFLINE, location=jenkins-hbase20.apache.org,43021,1689189426641; forceNewPlan=false, retain=false 2023-07-12 19:17:21,836 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.MasterRpcServices(1230): Checking to see if procedure is done pid=91 2023-07-12 19:17:21,936 INFO [jenkins-hbase20:33033] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
2023-07-12 19:17:21,939 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=92 updating hbase:meta row=61fb0b57110ece6c2acd9d38f7a4a27d, regionState=OPENING, regionLocation=jenkins-hbase20.apache.org,43021,1689189426641 2023-07-12 19:17:21,939 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveA,,1689189441730.61fb0b57110ece6c2acd9d38f7a4a27d.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689189441939"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689189441939"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689189441939"}]},"ts":"1689189441939"} 2023-07-12 19:17:21,942 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=93, ppid=92, state=RUNNABLE; OpenRegionProcedure 61fb0b57110ece6c2acd9d38f7a4a27d, server=jenkins-hbase20.apache.org,43021,1689189426641}] 2023-07-12 19:17:22,039 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.MasterRpcServices(1230): Checking to see if procedure is done pid=91 2023-07-12 19:17:22,101 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(130): Open GrouptestMultiTableMoveA,,1689189441730.61fb0b57110ece6c2acd9d38f7a4a27d. 2023-07-12 19:17:22,101 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 61fb0b57110ece6c2acd9d38f7a4a27d, NAME => 'GrouptestMultiTableMoveA,,1689189441730.61fb0b57110ece6c2acd9d38f7a4a27d.', STARTKEY => '', ENDKEY => ''} 2023-07-12 19:17:22,102 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table GrouptestMultiTableMoveA 61fb0b57110ece6c2acd9d38f7a4a27d 2023-07-12 19:17:22,102 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(866): Instantiated GrouptestMultiTableMoveA,,1689189441730.61fb0b57110ece6c2acd9d38f7a4a27d.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 19:17:22,102 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7894): checking encryption for 61fb0b57110ece6c2acd9d38f7a4a27d 2023-07-12 19:17:22,102 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7897): checking classloading for 61fb0b57110ece6c2acd9d38f7a4a27d 2023-07-12 19:17:22,104 INFO [StoreOpener-61fb0b57110ece6c2acd9d38f7a4a27d-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 61fb0b57110ece6c2acd9d38f7a4a27d 2023-07-12 19:17:22,106 DEBUG [StoreOpener-61fb0b57110ece6c2acd9d38f7a4a27d-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/data/default/GrouptestMultiTableMoveA/61fb0b57110ece6c2acd9d38f7a4a27d/f 2023-07-12 19:17:22,106 DEBUG [StoreOpener-61fb0b57110ece6c2acd9d38f7a4a27d-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/data/default/GrouptestMultiTableMoveA/61fb0b57110ece6c2acd9d38f7a4a27d/f 2023-07-12 19:17:22,107 INFO [StoreOpener-61fb0b57110ece6c2acd9d38f7a4a27d-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, 
maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 61fb0b57110ece6c2acd9d38f7a4a27d columnFamilyName f 2023-07-12 19:17:22,108 INFO [StoreOpener-61fb0b57110ece6c2acd9d38f7a4a27d-1] regionserver.HStore(310): Store=61fb0b57110ece6c2acd9d38f7a4a27d/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 19:17:22,110 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/data/default/GrouptestMultiTableMoveA/61fb0b57110ece6c2acd9d38f7a4a27d 2023-07-12 19:17:22,111 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/data/default/GrouptestMultiTableMoveA/61fb0b57110ece6c2acd9d38f7a4a27d 2023-07-12 19:17:22,116 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1055): writing seq id for 61fb0b57110ece6c2acd9d38f7a4a27d 2023-07-12 19:17:22,119 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/data/default/GrouptestMultiTableMoveA/61fb0b57110ece6c2acd9d38f7a4a27d/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-12 19:17:22,119 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1072): Opened 61fb0b57110ece6c2acd9d38f7a4a27d; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9651742240, jitterRate=-0.10111145675182343}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-12 19:17:22,119 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(965): Region open journal for 61fb0b57110ece6c2acd9d38f7a4a27d: 2023-07-12 19:17:22,120 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for GrouptestMultiTableMoveA,,1689189441730.61fb0b57110ece6c2acd9d38f7a4a27d., pid=93, masterSystemTime=1689189442096 2023-07-12 19:17:22,121 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for GrouptestMultiTableMoveA,,1689189441730.61fb0b57110ece6c2acd9d38f7a4a27d. 2023-07-12 19:17:22,122 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(158): Opened GrouptestMultiTableMoveA,,1689189441730.61fb0b57110ece6c2acd9d38f7a4a27d. 
2023-07-12 19:17:22,122 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=92 updating hbase:meta row=61fb0b57110ece6c2acd9d38f7a4a27d, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase20.apache.org,43021,1689189426641 2023-07-12 19:17:22,122 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"GrouptestMultiTableMoveA,,1689189441730.61fb0b57110ece6c2acd9d38f7a4a27d.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689189442122"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689189442122"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689189442122"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689189442122"}]},"ts":"1689189442122"} 2023-07-12 19:17:22,125 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=93, resume processing ppid=92 2023-07-12 19:17:22,125 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=93, ppid=92, state=SUCCESS; OpenRegionProcedure 61fb0b57110ece6c2acd9d38f7a4a27d, server=jenkins-hbase20.apache.org,43021,1689189426641 in 182 msec 2023-07-12 19:17:22,127 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=92, resume processing ppid=91 2023-07-12 19:17:22,127 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=92, ppid=91, state=SUCCESS; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=61fb0b57110ece6c2acd9d38f7a4a27d, ASSIGN in 344 msec 2023-07-12 19:17:22,127 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=91, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveA execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-12 19:17:22,128 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveA","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689189442128"}]},"ts":"1689189442128"} 2023-07-12 19:17:22,129 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveA, state=ENABLED in hbase:meta 2023-07-12 19:17:22,131 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=91, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveA execute state=CREATE_TABLE_POST_OPERATION 2023-07-12 19:17:22,132 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=91, state=SUCCESS; CreateTableProcedure table=GrouptestMultiTableMoveA in 401 msec 2023-07-12 19:17:22,341 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.MasterRpcServices(1230): Checking to see if procedure is done pid=91 2023-07-12 19:17:22,341 INFO [Listener at localhost.localdomain/34239] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:GrouptestMultiTableMoveA, procId: 91 completed 2023-07-12 19:17:22,341 DEBUG [Listener at localhost.localdomain/34239] hbase.HBaseTestingUtility(3430): Waiting until all regions of table GrouptestMultiTableMoveA get assigned. Timeout = 60000ms 2023-07-12 19:17:22,341 INFO [Listener at localhost.localdomain/34239] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 19:17:22,345 INFO [Listener at localhost.localdomain/34239] hbase.HBaseTestingUtility(3484): All regions for table GrouptestMultiTableMoveA assigned to meta. Checking AM states. 
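The create 'GrouptestMultiTableMoveA', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', ...} line above is the master's rendering of an ordinary Admin.createTable request. A minimal client-side sketch of that call, again assuming the open Connection `conn`; CreateTableSketch is a hypothetical class name, and only the non-default attributes from the logged descriptor are set explicitly, everything else falling back to the defaults the master prints.

    import java.io.IOException;

    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
    import org.apache.hadoop.hbase.util.Bytes;

    class CreateTableSketch {
      static void createTableA(Connection conn) throws IOException {
        TableName tableA = TableName.valueOf("GrouptestMultiTableMoveA");

        // Family 'f' with a single version and region replication 1 -- the
        // non-default parts of the descriptor printed by the master above.
        TableDescriptorBuilder builder = TableDescriptorBuilder.newBuilder(tableA)
            .setRegionReplication(1)
            .setColumnFamily(ColumnFamilyDescriptorBuilder.newBuilder(Bytes.toBytes("f"))
                .setMaxVersions(1)
                .build());

        try (Admin admin = conn.getAdmin()) {
          // Returns once CreateTableProcedure (pid=91 above) completes; the
          // client's polling is what produces the repeated "Checking to see if
          // procedure is done pid=91" lines in the log.
          admin.createTable(builder.build());
        }
      }
    }
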
2023-07-12 19:17:22,345 INFO [Listener at localhost.localdomain/34239] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 19:17:22,345 INFO [Listener at localhost.localdomain/34239] hbase.HBaseTestingUtility(3504): All regions for table GrouptestMultiTableMoveA assigned. 2023-07-12 19:17:22,347 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.HMaster$4(2112): Client=jenkins//148.251.75.209 create 'GrouptestMultiTableMoveB', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-12 19:17:22,348 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] procedure2.ProcedureExecutor(1029): Stored pid=94, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=GrouptestMultiTableMoveB 2023-07-12 19:17:22,350 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=94, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveB execute state=CREATE_TABLE_PRE_OPERATION 2023-07-12 19:17:22,350 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.MasterRpcServices(700): Client=jenkins//148.251.75.209 procedure request for creating table: namespace: "default" qualifier: "GrouptestMultiTableMoveB" procId is: 94 2023-07-12 19:17:22,351 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.MasterRpcServices(1230): Checking to see if procedure is done pid=94 2023-07-12 19:17:22,353 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 19:17:22,354 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_140681636 2023-07-12 19:17:22,354 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 19:17:22,355 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-12 19:17:22,357 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=94, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveB execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-12 19:17:22,358 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/.tmp/data/default/GrouptestMultiTableMoveB/8481ed207217456bd2f0345c097edd8e 2023-07-12 19:17:22,359 DEBUG [HFileArchiver-2] backup.HFileArchiver(153): Directory hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/.tmp/data/default/GrouptestMultiTableMoveB/8481ed207217456bd2f0345c097edd8e empty. 
2023-07-12 19:17:22,359 DEBUG [HFileArchiver-2] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/.tmp/data/default/GrouptestMultiTableMoveB/8481ed207217456bd2f0345c097edd8e 2023-07-12 19:17:22,359 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived GrouptestMultiTableMoveB regions 2023-07-12 19:17:22,376 DEBUG [PEWorker-4] util.FSTableDescriptors(570): Wrote into hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/.tmp/data/default/GrouptestMultiTableMoveB/.tabledesc/.tableinfo.0000000001 2023-07-12 19:17:22,377 INFO [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(7675): creating {ENCODED => 8481ed207217456bd2f0345c097edd8e, NAME => 'GrouptestMultiTableMoveB,,1689189442347.8481ed207217456bd2f0345c097edd8e.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='GrouptestMultiTableMoveB', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/.tmp 2023-07-12 19:17:22,395 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(866): Instantiated GrouptestMultiTableMoveB,,1689189442347.8481ed207217456bd2f0345c097edd8e.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 19:17:22,395 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1604): Closing 8481ed207217456bd2f0345c097edd8e, disabling compactions & flushes 2023-07-12 19:17:22,395 INFO [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1626): Closing region GrouptestMultiTableMoveB,,1689189442347.8481ed207217456bd2f0345c097edd8e. 2023-07-12 19:17:22,395 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on GrouptestMultiTableMoveB,,1689189442347.8481ed207217456bd2f0345c097edd8e. 2023-07-12 19:17:22,395 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1714): Acquired close lock on GrouptestMultiTableMoveB,,1689189442347.8481ed207217456bd2f0345c097edd8e. after waiting 0 ms 2023-07-12 19:17:22,395 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1724): Updates disabled for region GrouptestMultiTableMoveB,,1689189442347.8481ed207217456bd2f0345c097edd8e. 2023-07-12 19:17:22,395 INFO [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1838): Closed GrouptestMultiTableMoveB,,1689189442347.8481ed207217456bd2f0345c097edd8e. 
2023-07-12 19:17:22,395 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1558): Region close journal for 8481ed207217456bd2f0345c097edd8e: 2023-07-12 19:17:22,397 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=94, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveB execute state=CREATE_TABLE_ADD_TO_META 2023-07-12 19:17:22,398 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"GrouptestMultiTableMoveB,,1689189442347.8481ed207217456bd2f0345c097edd8e.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689189442398"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689189442398"}]},"ts":"1689189442398"} 2023-07-12 19:17:22,400 INFO [PEWorker-4] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-12 19:17:22,400 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=94, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveB execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-12 19:17:22,401 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveB","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689189442400"}]},"ts":"1689189442400"} 2023-07-12 19:17:22,402 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveB, state=ENABLING in hbase:meta 2023-07-12 19:17:22,404 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase20.apache.org=0} racks are {/default-rack=0} 2023-07-12 19:17:22,404 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-12 19:17:22,404 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-12 19:17:22,404 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-12 19:17:22,404 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-12 19:17:22,404 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=95, ppid=94, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=8481ed207217456bd2f0345c097edd8e, ASSIGN}] 2023-07-12 19:17:22,406 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=95, ppid=94, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=8481ed207217456bd2f0345c097edd8e, ASSIGN 2023-07-12 19:17:22,407 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=95, ppid=94, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=8481ed207217456bd2f0345c097edd8e, ASSIGN; state=OFFLINE, location=jenkins-hbase20.apache.org,36571,1689189426727; forceNewPlan=false, retain=false 2023-07-12 19:17:22,452 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.MasterRpcServices(1230): Checking to see if procedure is done pid=94 2023-07-12 19:17:22,558 INFO [jenkins-hbase20:33033] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
2023-07-12 19:17:22,560 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=95 updating hbase:meta row=8481ed207217456bd2f0345c097edd8e, regionState=OPENING, regionLocation=jenkins-hbase20.apache.org,36571,1689189426727 2023-07-12 19:17:22,561 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveB,,1689189442347.8481ed207217456bd2f0345c097edd8e.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689189442560"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689189442560"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689189442560"}]},"ts":"1689189442560"} 2023-07-12 19:17:22,564 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=96, ppid=95, state=RUNNABLE; OpenRegionProcedure 8481ed207217456bd2f0345c097edd8e, server=jenkins-hbase20.apache.org,36571,1689189426727}] 2023-07-12 19:17:22,653 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.MasterRpcServices(1230): Checking to see if procedure is done pid=94 2023-07-12 19:17:22,721 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(130): Open GrouptestMultiTableMoveB,,1689189442347.8481ed207217456bd2f0345c097edd8e. 2023-07-12 19:17:22,722 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 8481ed207217456bd2f0345c097edd8e, NAME => 'GrouptestMultiTableMoveB,,1689189442347.8481ed207217456bd2f0345c097edd8e.', STARTKEY => '', ENDKEY => ''} 2023-07-12 19:17:22,722 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table GrouptestMultiTableMoveB 8481ed207217456bd2f0345c097edd8e 2023-07-12 19:17:22,722 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(866): Instantiated GrouptestMultiTableMoveB,,1689189442347.8481ed207217456bd2f0345c097edd8e.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 19:17:22,722 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7894): checking encryption for 8481ed207217456bd2f0345c097edd8e 2023-07-12 19:17:22,722 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7897): checking classloading for 8481ed207217456bd2f0345c097edd8e 2023-07-12 19:17:22,725 INFO [StoreOpener-8481ed207217456bd2f0345c097edd8e-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 8481ed207217456bd2f0345c097edd8e 2023-07-12 19:17:22,727 DEBUG [StoreOpener-8481ed207217456bd2f0345c097edd8e-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/data/default/GrouptestMultiTableMoveB/8481ed207217456bd2f0345c097edd8e/f 2023-07-12 19:17:22,727 DEBUG [StoreOpener-8481ed207217456bd2f0345c097edd8e-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/data/default/GrouptestMultiTableMoveB/8481ed207217456bd2f0345c097edd8e/f 2023-07-12 19:17:22,728 INFO [StoreOpener-8481ed207217456bd2f0345c097edd8e-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, 
maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 8481ed207217456bd2f0345c097edd8e columnFamilyName f 2023-07-12 19:17:22,730 INFO [StoreOpener-8481ed207217456bd2f0345c097edd8e-1] regionserver.HStore(310): Store=8481ed207217456bd2f0345c097edd8e/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 19:17:22,732 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/data/default/GrouptestMultiTableMoveB/8481ed207217456bd2f0345c097edd8e 2023-07-12 19:17:22,733 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/data/default/GrouptestMultiTableMoveB/8481ed207217456bd2f0345c097edd8e 2023-07-12 19:17:22,737 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1055): writing seq id for 8481ed207217456bd2f0345c097edd8e 2023-07-12 19:17:22,742 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/data/default/GrouptestMultiTableMoveB/8481ed207217456bd2f0345c097edd8e/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-12 19:17:22,745 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1072): Opened 8481ed207217456bd2f0345c097edd8e; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10399516160, jitterRate=-0.03146958351135254}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-12 19:17:22,745 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(965): Region open journal for 8481ed207217456bd2f0345c097edd8e: 2023-07-12 19:17:22,746 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for GrouptestMultiTableMoveB,,1689189442347.8481ed207217456bd2f0345c097edd8e., pid=96, masterSystemTime=1689189442716 2023-07-12 19:17:22,748 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for GrouptestMultiTableMoveB,,1689189442347.8481ed207217456bd2f0345c097edd8e. 2023-07-12 19:17:22,748 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(158): Opened GrouptestMultiTableMoveB,,1689189442347.8481ed207217456bd2f0345c097edd8e. 
2023-07-12 19:17:22,750 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=95 updating hbase:meta row=8481ed207217456bd2f0345c097edd8e, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase20.apache.org,36571,1689189426727 2023-07-12 19:17:22,750 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"GrouptestMultiTableMoveB,,1689189442347.8481ed207217456bd2f0345c097edd8e.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689189442750"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689189442750"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689189442750"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689189442750"}]},"ts":"1689189442750"} 2023-07-12 19:17:22,754 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=96, resume processing ppid=95 2023-07-12 19:17:22,754 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=96, ppid=95, state=SUCCESS; OpenRegionProcedure 8481ed207217456bd2f0345c097edd8e, server=jenkins-hbase20.apache.org,36571,1689189426727 in 188 msec 2023-07-12 19:17:22,757 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=95, resume processing ppid=94 2023-07-12 19:17:22,757 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=95, ppid=94, state=SUCCESS; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=8481ed207217456bd2f0345c097edd8e, ASSIGN in 350 msec 2023-07-12 19:17:22,758 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=94, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveB execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-12 19:17:22,758 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveB","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689189442758"}]},"ts":"1689189442758"} 2023-07-12 19:17:22,760 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveB, state=ENABLED in hbase:meta 2023-07-12 19:17:22,766 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=94, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveB execute state=CREATE_TABLE_POST_OPERATION 2023-07-12 19:17:22,770 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=94, state=SUCCESS; CreateTableProcedure table=GrouptestMultiTableMoveB in 419 msec 2023-07-12 19:17:22,955 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.MasterRpcServices(1230): Checking to see if procedure is done pid=94 2023-07-12 19:17:22,955 INFO [Listener at localhost.localdomain/34239] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:GrouptestMultiTableMoveB, procId: 94 completed 2023-07-12 19:17:22,955 DEBUG [Listener at localhost.localdomain/34239] hbase.HBaseTestingUtility(3430): Waiting until all regions of table GrouptestMultiTableMoveB get assigned. Timeout = 60000ms 2023-07-12 19:17:22,955 INFO [Listener at localhost.localdomain/34239] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 19:17:22,959 INFO [Listener at localhost.localdomain/34239] hbase.HBaseTestingUtility(3484): All regions for table GrouptestMultiTableMoveB assigned to meta. Checking AM states. 
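Both tables now exist, and the GetRSGroupInfoOfTable requests that follow in the log are how the client checks which group each table belongs to before the move (still the default group at this point). A small illustrative sketch of that check, reusing the assumed `rsGroupAdmin` handle from the earlier snippet; GroupOfTableSketch is a hypothetical name.

    import java.io.IOException;

    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdmin;
    import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

    class GroupOfTableSketch {
      static void printCurrentGroups(RSGroupAdmin rsGroupAdmin) throws IOException {
        RSGroupInfo groupOfA =
            rsGroupAdmin.getRSGroupInfoOfTable(TableName.valueOf("GrouptestMultiTableMoveA"));
        RSGroupInfo groupOfB =
            rsGroupAdmin.getRSGroupInfoOfTable(TableName.valueOf("GrouptestMultiTableMoveB"));

        // At this point in the log both tables are still in the default group.
        System.out.println("GrouptestMultiTableMoveA is in " + groupOfA.getName());
        System.out.println("GrouptestMultiTableMoveB is in " + groupOfB.getName());
      }
    }
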
2023-07-12 19:17:22,959 INFO [Listener at localhost.localdomain/34239] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 19:17:22,959 INFO [Listener at localhost.localdomain/34239] hbase.HBaseTestingUtility(3504): All regions for table GrouptestMultiTableMoveB assigned. 2023-07-12 19:17:22,959 INFO [Listener at localhost.localdomain/34239] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 19:17:22,971 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//148.251.75.209 initiates rsgroup info retrieval, table=GrouptestMultiTableMoveA 2023-07-12 19:17:22,971 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-12 19:17:22,972 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//148.251.75.209 initiates rsgroup info retrieval, table=GrouptestMultiTableMoveB 2023-07-12 19:17:22,972 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-12 19:17:22,972 INFO [Listener at localhost.localdomain/34239] rsgroup.TestRSGroupsAdmin1(262): Moving table [GrouptestMultiTableMoveA,GrouptestMultiTableMoveB] to Group_testMultiTableMove_140681636 2023-07-12 19:17:22,975 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//148.251.75.209 move tables [GrouptestMultiTableMoveB, GrouptestMultiTableMoveA] to rsgroup Group_testMultiTableMove_140681636 2023-07-12 19:17:22,978 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 19:17:22,978 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_140681636 2023-07-12 19:17:22,979 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 19:17:22,979 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-12 19:17:22,980 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminServer(339): Moving region(s) for table GrouptestMultiTableMoveB to RSGroup Group_testMultiTableMove_140681636 2023-07-12 19:17:22,981 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminServer(345): Moving region 8481ed207217456bd2f0345c097edd8e to RSGroup Group_testMultiTableMove_140681636 2023-07-12 19:17:22,982 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] procedure2.ProcedureExecutor(1029): Stored pid=97, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=8481ed207217456bd2f0345c097edd8e, REOPEN/MOVE 2023-07-12 19:17:22,982 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminServer(339): Moving region(s) for table GrouptestMultiTableMoveA to RSGroup 
Group_testMultiTableMove_140681636 2023-07-12 19:17:22,983 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=97, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=8481ed207217456bd2f0345c097edd8e, REOPEN/MOVE 2023-07-12 19:17:22,983 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminServer(345): Moving region 61fb0b57110ece6c2acd9d38f7a4a27d to RSGroup Group_testMultiTableMove_140681636 2023-07-12 19:17:22,984 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=97 updating hbase:meta row=8481ed207217456bd2f0345c097edd8e, regionState=CLOSING, regionLocation=jenkins-hbase20.apache.org,36571,1689189426727 2023-07-12 19:17:22,984 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] procedure2.ProcedureExecutor(1029): Stored pid=98, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=61fb0b57110ece6c2acd9d38f7a4a27d, REOPEN/MOVE 2023-07-12 19:17:22,984 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveB,,1689189442347.8481ed207217456bd2f0345c097edd8e.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689189442984"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689189442984"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689189442984"}]},"ts":"1689189442984"} 2023-07-12 19:17:22,984 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminServer(286): Moving 2 region(s) to group Group_testMultiTableMove_140681636, current retry=0 2023-07-12 19:17:22,985 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=98, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=61fb0b57110ece6c2acd9d38f7a4a27d, REOPEN/MOVE 2023-07-12 19:17:22,986 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=98 updating hbase:meta row=61fb0b57110ece6c2acd9d38f7a4a27d, regionState=CLOSING, regionLocation=jenkins-hbase20.apache.org,43021,1689189426641 2023-07-12 19:17:22,986 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveA,,1689189441730.61fb0b57110ece6c2acd9d38f7a4a27d.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689189442986"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689189442986"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689189442986"}]},"ts":"1689189442986"} 2023-07-12 19:17:22,986 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=99, ppid=97, state=RUNNABLE; CloseRegionProcedure 8481ed207217456bd2f0345c097edd8e, server=jenkins-hbase20.apache.org,36571,1689189426727}] 2023-07-12 19:17:22,988 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=100, ppid=98, state=RUNNABLE; CloseRegionProcedure 61fb0b57110ece6c2acd9d38f7a4a27d, server=jenkins-hbase20.apache.org,43021,1689189426641}] 2023-07-12 19:17:23,140 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.UnassignRegionHandler(111): Close 8481ed207217456bd2f0345c097edd8e 2023-07-12 19:17:23,142 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing 8481ed207217456bd2f0345c097edd8e, disabling compactions & flushes 2023-07-12 19:17:23,142 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region 
GrouptestMultiTableMoveB,,1689189442347.8481ed207217456bd2f0345c097edd8e. 2023-07-12 19:17:23,142 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on GrouptestMultiTableMoveB,,1689189442347.8481ed207217456bd2f0345c097edd8e. 2023-07-12 19:17:23,143 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on GrouptestMultiTableMoveB,,1689189442347.8481ed207217456bd2f0345c097edd8e. after waiting 0 ms 2023-07-12 19:17:23,143 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region GrouptestMultiTableMoveB,,1689189442347.8481ed207217456bd2f0345c097edd8e. 2023-07-12 19:17:23,144 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.UnassignRegionHandler(111): Close 61fb0b57110ece6c2acd9d38f7a4a27d 2023-07-12 19:17:23,146 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing 61fb0b57110ece6c2acd9d38f7a4a27d, disabling compactions & flushes 2023-07-12 19:17:23,146 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region GrouptestMultiTableMoveA,,1689189441730.61fb0b57110ece6c2acd9d38f7a4a27d. 2023-07-12 19:17:23,146 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on GrouptestMultiTableMoveA,,1689189441730.61fb0b57110ece6c2acd9d38f7a4a27d. 2023-07-12 19:17:23,146 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on GrouptestMultiTableMoveA,,1689189441730.61fb0b57110ece6c2acd9d38f7a4a27d. after waiting 0 ms 2023-07-12 19:17:23,146 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region GrouptestMultiTableMoveA,,1689189441730.61fb0b57110ece6c2acd9d38f7a4a27d. 2023-07-12 19:17:23,150 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/data/default/GrouptestMultiTableMoveB/8481ed207217456bd2f0345c097edd8e/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-12 19:17:23,151 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/data/default/GrouptestMultiTableMoveA/61fb0b57110ece6c2acd9d38f7a4a27d/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-12 19:17:23,151 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1838): Closed GrouptestMultiTableMoveB,,1689189442347.8481ed207217456bd2f0345c097edd8e. 2023-07-12 19:17:23,151 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for 8481ed207217456bd2f0345c097edd8e: 2023-07-12 19:17:23,151 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(3513): Adding 8481ed207217456bd2f0345c097edd8e move to jenkins-hbase20.apache.org,36311,1689189430768 record at close sequenceid=2 2023-07-12 19:17:23,152 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1838): Closed GrouptestMultiTableMoveA,,1689189441730.61fb0b57110ece6c2acd9d38f7a4a27d. 
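The MoveTables request above, together with the two REOPEN/MOVE TransitRegionStateProcedures it spawns (pid=97 for GrouptestMultiTableMoveB, pid=98 for GrouptestMultiTableMoveA), comes from a single client call that moves both tables into the group; the master then closes each region on its current server and re-opens it on a server of the target group, jenkins-hbase20.apache.org:36311 in this run. A sketch of that call under the same assumptions as the earlier snippets; MoveTablesSketch is a hypothetical wrapper.

    import java.io.IOException;
    import java.util.HashSet;
    import java.util.Set;

    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdmin;

    class MoveTablesSketch {
      static void moveBothTables(RSGroupAdmin rsGroupAdmin) throws IOException {
        Set<TableName> tables = new HashSet<>();
        tables.add(TableName.valueOf("GrouptestMultiTableMoveA"));
        tables.add(TableName.valueOf("GrouptestMultiTableMoveB"));

        // A single MoveTables call covers both tables; the master logs it as
        // "move tables [GrouptestMultiTableMoveB, GrouptestMultiTableMoveA] to rsgroup ..."
        // and then drives one REOPEN/MOVE procedure per region.
        rsGroupAdmin.moveTables(tables, "Group_testMultiTableMove_140681636");
      }
    }
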
2023-07-12 19:17:23,152 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for 61fb0b57110ece6c2acd9d38f7a4a27d: 2023-07-12 19:17:23,152 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(3513): Adding 61fb0b57110ece6c2acd9d38f7a4a27d move to jenkins-hbase20.apache.org,36311,1689189430768 record at close sequenceid=2 2023-07-12 19:17:23,153 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.UnassignRegionHandler(149): Closed 8481ed207217456bd2f0345c097edd8e 2023-07-12 19:17:23,154 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=97 updating hbase:meta row=8481ed207217456bd2f0345c097edd8e, regionState=CLOSED 2023-07-12 19:17:23,154 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"GrouptestMultiTableMoveB,,1689189442347.8481ed207217456bd2f0345c097edd8e.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689189443154"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689189443154"}]},"ts":"1689189443154"} 2023-07-12 19:17:23,154 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.UnassignRegionHandler(149): Closed 61fb0b57110ece6c2acd9d38f7a4a27d 2023-07-12 19:17:23,155 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=98 updating hbase:meta row=61fb0b57110ece6c2acd9d38f7a4a27d, regionState=CLOSED 2023-07-12 19:17:23,155 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"GrouptestMultiTableMoveA,,1689189441730.61fb0b57110ece6c2acd9d38f7a4a27d.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689189443154"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689189443154"}]},"ts":"1689189443154"} 2023-07-12 19:17:23,158 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=99, resume processing ppid=97 2023-07-12 19:17:23,158 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=99, ppid=97, state=SUCCESS; CloseRegionProcedure 8481ed207217456bd2f0345c097edd8e, server=jenkins-hbase20.apache.org,36571,1689189426727 in 170 msec 2023-07-12 19:17:23,159 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=97, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=8481ed207217456bd2f0345c097edd8e, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase20.apache.org,36311,1689189430768; forceNewPlan=false, retain=false 2023-07-12 19:17:23,159 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=100, resume processing ppid=98 2023-07-12 19:17:23,159 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=100, ppid=98, state=SUCCESS; CloseRegionProcedure 61fb0b57110ece6c2acd9d38f7a4a27d, server=jenkins-hbase20.apache.org,43021,1689189426641 in 169 msec 2023-07-12 19:17:23,159 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=98, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=61fb0b57110ece6c2acd9d38f7a4a27d, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase20.apache.org,36311,1689189430768; forceNewPlan=false, retain=false 2023-07-12 19:17:23,309 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=98 updating hbase:meta row=61fb0b57110ece6c2acd9d38f7a4a27d, regionState=OPENING, regionLocation=jenkins-hbase20.apache.org,36311,1689189430768 2023-07-12 
19:17:23,309 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=97 updating hbase:meta row=8481ed207217456bd2f0345c097edd8e, regionState=OPENING, regionLocation=jenkins-hbase20.apache.org,36311,1689189430768 2023-07-12 19:17:23,310 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveA,,1689189441730.61fb0b57110ece6c2acd9d38f7a4a27d.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689189443309"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689189443309"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689189443309"}]},"ts":"1689189443309"} 2023-07-12 19:17:23,310 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveB,,1689189442347.8481ed207217456bd2f0345c097edd8e.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689189443309"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689189443309"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689189443309"}]},"ts":"1689189443309"} 2023-07-12 19:17:23,312 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=101, ppid=97, state=RUNNABLE; OpenRegionProcedure 8481ed207217456bd2f0345c097edd8e, server=jenkins-hbase20.apache.org,36311,1689189430768}] 2023-07-12 19:17:23,313 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=102, ppid=98, state=RUNNABLE; OpenRegionProcedure 61fb0b57110ece6c2acd9d38f7a4a27d, server=jenkins-hbase20.apache.org,36311,1689189430768}] 2023-07-12 19:17:23,468 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(130): Open GrouptestMultiTableMoveB,,1689189442347.8481ed207217456bd2f0345c097edd8e. 2023-07-12 19:17:23,468 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 8481ed207217456bd2f0345c097edd8e, NAME => 'GrouptestMultiTableMoveB,,1689189442347.8481ed207217456bd2f0345c097edd8e.', STARTKEY => '', ENDKEY => ''} 2023-07-12 19:17:23,468 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table GrouptestMultiTableMoveB 8481ed207217456bd2f0345c097edd8e 2023-07-12 19:17:23,468 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(866): Instantiated GrouptestMultiTableMoveB,,1689189442347.8481ed207217456bd2f0345c097edd8e.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 19:17:23,468 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7894): checking encryption for 8481ed207217456bd2f0345c097edd8e 2023-07-12 19:17:23,468 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7897): checking classloading for 8481ed207217456bd2f0345c097edd8e 2023-07-12 19:17:23,470 INFO [StoreOpener-8481ed207217456bd2f0345c097edd8e-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 8481ed207217456bd2f0345c097edd8e 2023-07-12 19:17:23,471 DEBUG [StoreOpener-8481ed207217456bd2f0345c097edd8e-1] util.CommonFSUtils(522): Set storagePolicy=HOT for 
path=hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/data/default/GrouptestMultiTableMoveB/8481ed207217456bd2f0345c097edd8e/f 2023-07-12 19:17:23,471 DEBUG [StoreOpener-8481ed207217456bd2f0345c097edd8e-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/data/default/GrouptestMultiTableMoveB/8481ed207217456bd2f0345c097edd8e/f 2023-07-12 19:17:23,472 INFO [StoreOpener-8481ed207217456bd2f0345c097edd8e-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 8481ed207217456bd2f0345c097edd8e columnFamilyName f 2023-07-12 19:17:23,473 INFO [StoreOpener-8481ed207217456bd2f0345c097edd8e-1] regionserver.HStore(310): Store=8481ed207217456bd2f0345c097edd8e/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 19:17:23,474 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/data/default/GrouptestMultiTableMoveB/8481ed207217456bd2f0345c097edd8e 2023-07-12 19:17:23,475 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/data/default/GrouptestMultiTableMoveB/8481ed207217456bd2f0345c097edd8e 2023-07-12 19:17:23,479 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1055): writing seq id for 8481ed207217456bd2f0345c097edd8e 2023-07-12 19:17:23,480 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1072): Opened 8481ed207217456bd2f0345c097edd8e; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=12048932800, jitterRate=0.12214431166648865}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-12 19:17:23,480 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(965): Region open journal for 8481ed207217456bd2f0345c097edd8e: 2023-07-12 19:17:23,481 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for GrouptestMultiTableMoveB,,1689189442347.8481ed207217456bd2f0345c097edd8e., pid=101, masterSystemTime=1689189443464 2023-07-12 19:17:23,482 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for GrouptestMultiTableMoveB,,1689189442347.8481ed207217456bd2f0345c097edd8e. 
2023-07-12 19:17:23,482 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(158): Opened GrouptestMultiTableMoveB,,1689189442347.8481ed207217456bd2f0345c097edd8e. 2023-07-12 19:17:23,482 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(130): Open GrouptestMultiTableMoveA,,1689189441730.61fb0b57110ece6c2acd9d38f7a4a27d. 2023-07-12 19:17:23,483 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 61fb0b57110ece6c2acd9d38f7a4a27d, NAME => 'GrouptestMultiTableMoveA,,1689189441730.61fb0b57110ece6c2acd9d38f7a4a27d.', STARTKEY => '', ENDKEY => ''} 2023-07-12 19:17:23,483 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=97 updating hbase:meta row=8481ed207217456bd2f0345c097edd8e, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase20.apache.org,36311,1689189430768 2023-07-12 19:17:23,483 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"GrouptestMultiTableMoveB,,1689189442347.8481ed207217456bd2f0345c097edd8e.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689189443483"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689189443483"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689189443483"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689189443483"}]},"ts":"1689189443483"} 2023-07-12 19:17:23,483 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table GrouptestMultiTableMoveA 61fb0b57110ece6c2acd9d38f7a4a27d 2023-07-12 19:17:23,483 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(866): Instantiated GrouptestMultiTableMoveA,,1689189441730.61fb0b57110ece6c2acd9d38f7a4a27d.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 19:17:23,483 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7894): checking encryption for 61fb0b57110ece6c2acd9d38f7a4a27d 2023-07-12 19:17:23,483 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7897): checking classloading for 61fb0b57110ece6c2acd9d38f7a4a27d 2023-07-12 19:17:23,485 INFO [StoreOpener-61fb0b57110ece6c2acd9d38f7a4a27d-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 61fb0b57110ece6c2acd9d38f7a4a27d 2023-07-12 19:17:23,486 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=101, resume processing ppid=97 2023-07-12 19:17:23,486 DEBUG [StoreOpener-61fb0b57110ece6c2acd9d38f7a4a27d-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/data/default/GrouptestMultiTableMoveA/61fb0b57110ece6c2acd9d38f7a4a27d/f 2023-07-12 19:17:23,486 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=101, ppid=97, state=SUCCESS; OpenRegionProcedure 8481ed207217456bd2f0345c097edd8e, server=jenkins-hbase20.apache.org,36311,1689189430768 in 173 msec 2023-07-12 19:17:23,487 DEBUG [StoreOpener-61fb0b57110ece6c2acd9d38f7a4a27d-1] util.CommonFSUtils(522): Set storagePolicy=HOT for 
path=hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/data/default/GrouptestMultiTableMoveA/61fb0b57110ece6c2acd9d38f7a4a27d/f 2023-07-12 19:17:23,487 INFO [StoreOpener-61fb0b57110ece6c2acd9d38f7a4a27d-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 61fb0b57110ece6c2acd9d38f7a4a27d columnFamilyName f 2023-07-12 19:17:23,487 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=97, state=SUCCESS; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=8481ed207217456bd2f0345c097edd8e, REOPEN/MOVE in 505 msec 2023-07-12 19:17:23,488 INFO [StoreOpener-61fb0b57110ece6c2acd9d38f7a4a27d-1] regionserver.HStore(310): Store=61fb0b57110ece6c2acd9d38f7a4a27d/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 19:17:23,489 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/data/default/GrouptestMultiTableMoveA/61fb0b57110ece6c2acd9d38f7a4a27d 2023-07-12 19:17:23,490 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/data/default/GrouptestMultiTableMoveA/61fb0b57110ece6c2acd9d38f7a4a27d 2023-07-12 19:17:23,492 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1055): writing seq id for 61fb0b57110ece6c2acd9d38f7a4a27d 2023-07-12 19:17:23,493 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1072): Opened 61fb0b57110ece6c2acd9d38f7a4a27d; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10910134080, jitterRate=0.016085416078567505}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-12 19:17:23,493 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(965): Region open journal for 61fb0b57110ece6c2acd9d38f7a4a27d: 2023-07-12 19:17:23,494 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for GrouptestMultiTableMoveA,,1689189441730.61fb0b57110ece6c2acd9d38f7a4a27d., pid=102, masterSystemTime=1689189443464 2023-07-12 19:17:23,495 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for GrouptestMultiTableMoveA,,1689189441730.61fb0b57110ece6c2acd9d38f7a4a27d. 2023-07-12 19:17:23,496 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(158): Opened GrouptestMultiTableMoveA,,1689189441730.61fb0b57110ece6c2acd9d38f7a4a27d. 
2023-07-12 19:17:23,496 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=98 updating hbase:meta row=61fb0b57110ece6c2acd9d38f7a4a27d, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase20.apache.org,36311,1689189430768 2023-07-12 19:17:23,496 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"GrouptestMultiTableMoveA,,1689189441730.61fb0b57110ece6c2acd9d38f7a4a27d.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689189443496"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689189443496"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689189443496"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689189443496"}]},"ts":"1689189443496"} 2023-07-12 19:17:23,500 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=102, resume processing ppid=98 2023-07-12 19:17:23,500 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=102, ppid=98, state=SUCCESS; OpenRegionProcedure 61fb0b57110ece6c2acd9d38f7a4a27d, server=jenkins-hbase20.apache.org,36311,1689189430768 in 185 msec 2023-07-12 19:17:23,501 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=98, state=SUCCESS; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=61fb0b57110ece6c2acd9d38f7a4a27d, REOPEN/MOVE in 517 msec 2023-07-12 19:17:23,985 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] procedure.ProcedureSyncWait(216): waitFor pid=97 2023-07-12 19:17:23,985 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminServer(369): All regions from table(s) [GrouptestMultiTableMoveB, GrouptestMultiTableMoveA] moved to target group Group_testMultiTableMove_140681636. 2023-07-12 19:17:23,985 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveTables 2023-07-12 19:17:23,988 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-12 19:17:23,988 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 19:17:23,991 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//148.251.75.209 initiates rsgroup info retrieval, table=GrouptestMultiTableMoveA 2023-07-12 19:17:23,991 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-12 19:17:23,992 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//148.251.75.209 initiates rsgroup info retrieval, table=GrouptestMultiTableMoveB 2023-07-12 19:17:23,992 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-12 19:17:23,993 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] 
rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//148.251.75.209 initiates rsgroup info retrieval, group=default 2023-07-12 19:17:23,993 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 19:17:23,994 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//148.251.75.209 initiates rsgroup info retrieval, group=Group_testMultiTableMove_140681636 2023-07-12 19:17:23,994 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 19:17:23,995 INFO [Listener at localhost.localdomain/34239] client.HBaseAdmin$15(890): Started disable of GrouptestMultiTableMoveA 2023-07-12 19:17:23,996 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.HMaster$11(2418): Client=jenkins//148.251.75.209 disable GrouptestMultiTableMoveA 2023-07-12 19:17:23,996 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] procedure2.ProcedureExecutor(1029): Stored pid=103, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=GrouptestMultiTableMoveA 2023-07-12 19:17:23,999 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.MasterRpcServices(1230): Checking to see if procedure is done pid=103 2023-07-12 19:17:23,999 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveA","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689189443999"}]},"ts":"1689189443999"} 2023-07-12 19:17:24,000 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveA, state=DISABLING in hbase:meta 2023-07-12 19:17:24,001 INFO [PEWorker-4] procedure.DisableTableProcedure(293): Set GrouptestMultiTableMoveA to state=DISABLING 2023-07-12 19:17:24,002 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=104, ppid=103, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=61fb0b57110ece6c2acd9d38f7a4a27d, UNASSIGN}] 2023-07-12 19:17:24,004 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=104, ppid=103, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=61fb0b57110ece6c2acd9d38f7a4a27d, UNASSIGN 2023-07-12 19:17:24,005 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=104 updating hbase:meta row=61fb0b57110ece6c2acd9d38f7a4a27d, regionState=CLOSING, regionLocation=jenkins-hbase20.apache.org,36311,1689189430768 2023-07-12 19:17:24,005 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveA,,1689189441730.61fb0b57110ece6c2acd9d38f7a4a27d.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689189444005"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689189444005"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689189444005"}]},"ts":"1689189444005"} 2023-07-12 19:17:24,006 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=105, ppid=104, state=RUNNABLE; CloseRegionProcedure 61fb0b57110ece6c2acd9d38f7a4a27d, 
server=jenkins-hbase20.apache.org,36311,1689189430768}] 2023-07-12 19:17:24,100 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.MasterRpcServices(1230): Checking to see if procedure is done pid=103 2023-07-12 19:17:24,159 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.UnassignRegionHandler(111): Close 61fb0b57110ece6c2acd9d38f7a4a27d 2023-07-12 19:17:24,160 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing 61fb0b57110ece6c2acd9d38f7a4a27d, disabling compactions & flushes 2023-07-12 19:17:24,160 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region GrouptestMultiTableMoveA,,1689189441730.61fb0b57110ece6c2acd9d38f7a4a27d. 2023-07-12 19:17:24,160 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on GrouptestMultiTableMoveA,,1689189441730.61fb0b57110ece6c2acd9d38f7a4a27d. 2023-07-12 19:17:24,160 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on GrouptestMultiTableMoveA,,1689189441730.61fb0b57110ece6c2acd9d38f7a4a27d. after waiting 0 ms 2023-07-12 19:17:24,160 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region GrouptestMultiTableMoveA,,1689189441730.61fb0b57110ece6c2acd9d38f7a4a27d. 2023-07-12 19:17:24,166 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/data/default/GrouptestMultiTableMoveA/61fb0b57110ece6c2acd9d38f7a4a27d/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-12 19:17:24,167 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1838): Closed GrouptestMultiTableMoveA,,1689189441730.61fb0b57110ece6c2acd9d38f7a4a27d. 
2023-07-12 19:17:24,167 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for 61fb0b57110ece6c2acd9d38f7a4a27d: 2023-07-12 19:17:24,169 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.UnassignRegionHandler(149): Closed 61fb0b57110ece6c2acd9d38f7a4a27d 2023-07-12 19:17:24,170 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=104 updating hbase:meta row=61fb0b57110ece6c2acd9d38f7a4a27d, regionState=CLOSED 2023-07-12 19:17:24,170 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"GrouptestMultiTableMoveA,,1689189441730.61fb0b57110ece6c2acd9d38f7a4a27d.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689189444170"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689189444170"}]},"ts":"1689189444170"} 2023-07-12 19:17:24,172 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=105, resume processing ppid=104 2023-07-12 19:17:24,173 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=105, ppid=104, state=SUCCESS; CloseRegionProcedure 61fb0b57110ece6c2acd9d38f7a4a27d, server=jenkins-hbase20.apache.org,36311,1689189430768 in 165 msec 2023-07-12 19:17:24,174 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=104, resume processing ppid=103 2023-07-12 19:17:24,174 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=104, ppid=103, state=SUCCESS; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=61fb0b57110ece6c2acd9d38f7a4a27d, UNASSIGN in 171 msec 2023-07-12 19:17:24,175 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveA","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689189444174"}]},"ts":"1689189444174"} 2023-07-12 19:17:24,183 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveA, state=DISABLED in hbase:meta 2023-07-12 19:17:24,184 INFO [PEWorker-4] procedure.DisableTableProcedure(305): Set GrouptestMultiTableMoveA to state=DISABLED 2023-07-12 19:17:24,187 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=103, state=SUCCESS; DisableTableProcedure table=GrouptestMultiTableMoveA in 189 msec 2023-07-12 19:17:24,302 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.MasterRpcServices(1230): Checking to see if procedure is done pid=103 2023-07-12 19:17:24,303 INFO [Listener at localhost.localdomain/34239] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:GrouptestMultiTableMoveA, procId: 103 completed 2023-07-12 19:17:24,305 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.HMaster$5(2228): Client=jenkins//148.251.75.209 delete GrouptestMultiTableMoveA 2023-07-12 19:17:24,306 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] procedure2.ProcedureExecutor(1029): Stored pid=106, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=GrouptestMultiTableMoveA 2023-07-12 19:17:24,309 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=106, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveA 2023-07-12 19:17:24,309 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'GrouptestMultiTableMoveA' from rsgroup 'Group_testMultiTableMove_140681636' 2023-07-12 19:17:24,310 DEBUG 
[PEWorker-3] procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=106, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveA 2023-07-12 19:17:24,313 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 19:17:24,313 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_140681636 2023-07-12 19:17:24,314 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 19:17:24,314 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-12 19:17:24,317 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.MasterRpcServices(1230): Checking to see if procedure is done pid=106 2023-07-12 19:17:24,318 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/.tmp/data/default/GrouptestMultiTableMoveA/61fb0b57110ece6c2acd9d38f7a4a27d 2023-07-12 19:17:24,320 DEBUG [HFileArchiver-1] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/.tmp/data/default/GrouptestMultiTableMoveA/61fb0b57110ece6c2acd9d38f7a4a27d/f, FileablePath, hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/.tmp/data/default/GrouptestMultiTableMoveA/61fb0b57110ece6c2acd9d38f7a4a27d/recovered.edits] 2023-07-12 19:17:24,325 DEBUG [HFileArchiver-1] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/.tmp/data/default/GrouptestMultiTableMoveA/61fb0b57110ece6c2acd9d38f7a4a27d/recovered.edits/7.seqid to hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/archive/data/default/GrouptestMultiTableMoveA/61fb0b57110ece6c2acd9d38f7a4a27d/recovered.edits/7.seqid 2023-07-12 19:17:24,326 DEBUG [HFileArchiver-1] backup.HFileArchiver(596): Deleted hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/.tmp/data/default/GrouptestMultiTableMoveA/61fb0b57110ece6c2acd9d38f7a4a27d 2023-07-12 19:17:24,326 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived GrouptestMultiTableMoveA regions 2023-07-12 19:17:24,328 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=106, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveA 2023-07-12 19:17:24,330 WARN [PEWorker-3] procedure.DeleteTableProcedure(384): Deleting some vestigial 1 rows of GrouptestMultiTableMoveA from hbase:meta 2023-07-12 19:17:24,332 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(421): Removing 'GrouptestMultiTableMoveA' descriptor. 2023-07-12 19:17:24,334 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=106, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveA 2023-07-12 19:17:24,334 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(411): Removing 'GrouptestMultiTableMoveA' from region states. 
2023-07-12 19:17:24,334 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"GrouptestMultiTableMoveA,,1689189441730.61fb0b57110ece6c2acd9d38f7a4a27d.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689189444334"}]},"ts":"9223372036854775807"} 2023-07-12 19:17:24,336 INFO [PEWorker-3] hbase.MetaTableAccessor(1788): Deleted 1 regions from META 2023-07-12 19:17:24,336 DEBUG [PEWorker-3] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => 61fb0b57110ece6c2acd9d38f7a4a27d, NAME => 'GrouptestMultiTableMoveA,,1689189441730.61fb0b57110ece6c2acd9d38f7a4a27d.', STARTKEY => '', ENDKEY => ''}] 2023-07-12 19:17:24,336 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(415): Marking 'GrouptestMultiTableMoveA' as deleted. 2023-07-12 19:17:24,337 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"GrouptestMultiTableMoveA","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689189444337"}]},"ts":"9223372036854775807"} 2023-07-12 19:17:24,338 INFO [PEWorker-3] hbase.MetaTableAccessor(1658): Deleted table GrouptestMultiTableMoveA state from META 2023-07-12 19:17:24,340 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(130): Finished pid=106, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveA 2023-07-12 19:17:24,341 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=106, state=SUCCESS; DeleteTableProcedure table=GrouptestMultiTableMoveA in 35 msec 2023-07-12 19:17:24,418 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.MasterRpcServices(1230): Checking to see if procedure is done pid=106 2023-07-12 19:17:24,419 INFO [Listener at localhost.localdomain/34239] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: default:GrouptestMultiTableMoveA, procId: 106 completed 2023-07-12 19:17:24,419 INFO [Listener at localhost.localdomain/34239] client.HBaseAdmin$15(890): Started disable of GrouptestMultiTableMoveB 2023-07-12 19:17:24,420 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.HMaster$11(2418): Client=jenkins//148.251.75.209 disable GrouptestMultiTableMoveB 2023-07-12 19:17:24,422 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] procedure2.ProcedureExecutor(1029): Stored pid=107, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=GrouptestMultiTableMoveB 2023-07-12 19:17:24,425 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.MasterRpcServices(1230): Checking to see if procedure is done pid=107 2023-07-12 19:17:24,426 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveB","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689189444426"}]},"ts":"1689189444426"} 2023-07-12 19:17:24,428 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveB, state=DISABLING in hbase:meta 2023-07-12 19:17:24,429 INFO [PEWorker-5] procedure.DisableTableProcedure(293): Set GrouptestMultiTableMoveB to state=DISABLING 2023-07-12 19:17:24,432 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=108, ppid=107, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=8481ed207217456bd2f0345c097edd8e, UNASSIGN}] 2023-07-12 19:17:24,435 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=108, ppid=107, 
state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=8481ed207217456bd2f0345c097edd8e, UNASSIGN 2023-07-12 19:17:24,436 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=108 updating hbase:meta row=8481ed207217456bd2f0345c097edd8e, regionState=CLOSING, regionLocation=jenkins-hbase20.apache.org,36311,1689189430768 2023-07-12 19:17:24,436 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveB,,1689189442347.8481ed207217456bd2f0345c097edd8e.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689189444436"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689189444436"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689189444436"}]},"ts":"1689189444436"} 2023-07-12 19:17:24,438 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=109, ppid=108, state=RUNNABLE; CloseRegionProcedure 8481ed207217456bd2f0345c097edd8e, server=jenkins-hbase20.apache.org,36311,1689189430768}] 2023-07-12 19:17:24,527 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.MasterRpcServices(1230): Checking to see if procedure is done pid=107 2023-07-12 19:17:24,589 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.UnassignRegionHandler(111): Close 8481ed207217456bd2f0345c097edd8e 2023-07-12 19:17:24,591 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing 8481ed207217456bd2f0345c097edd8e, disabling compactions & flushes 2023-07-12 19:17:24,591 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region GrouptestMultiTableMoveB,,1689189442347.8481ed207217456bd2f0345c097edd8e. 2023-07-12 19:17:24,591 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on GrouptestMultiTableMoveB,,1689189442347.8481ed207217456bd2f0345c097edd8e. 2023-07-12 19:17:24,591 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on GrouptestMultiTableMoveB,,1689189442347.8481ed207217456bd2f0345c097edd8e. after waiting 0 ms 2023-07-12 19:17:24,591 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region GrouptestMultiTableMoveB,,1689189442347.8481ed207217456bd2f0345c097edd8e. 2023-07-12 19:17:24,596 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/data/default/GrouptestMultiTableMoveB/8481ed207217456bd2f0345c097edd8e/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-12 19:17:24,597 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1838): Closed GrouptestMultiTableMoveB,,1689189442347.8481ed207217456bd2f0345c097edd8e. 
2023-07-12 19:17:24,597 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for 8481ed207217456bd2f0345c097edd8e: 2023-07-12 19:17:24,599 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.UnassignRegionHandler(149): Closed 8481ed207217456bd2f0345c097edd8e 2023-07-12 19:17:24,599 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=108 updating hbase:meta row=8481ed207217456bd2f0345c097edd8e, regionState=CLOSED 2023-07-12 19:17:24,600 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"GrouptestMultiTableMoveB,,1689189442347.8481ed207217456bd2f0345c097edd8e.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689189444599"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689189444599"}]},"ts":"1689189444599"} 2023-07-12 19:17:24,603 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=109, resume processing ppid=108 2023-07-12 19:17:24,603 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=109, ppid=108, state=SUCCESS; CloseRegionProcedure 8481ed207217456bd2f0345c097edd8e, server=jenkins-hbase20.apache.org,36311,1689189430768 in 163 msec 2023-07-12 19:17:24,605 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=108, resume processing ppid=107 2023-07-12 19:17:24,605 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=108, ppid=107, state=SUCCESS; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=8481ed207217456bd2f0345c097edd8e, UNASSIGN in 173 msec 2023-07-12 19:17:24,606 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveB","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689189444606"}]},"ts":"1689189444606"} 2023-07-12 19:17:24,608 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveB, state=DISABLED in hbase:meta 2023-07-12 19:17:24,609 INFO [PEWorker-5] procedure.DisableTableProcedure(305): Set GrouptestMultiTableMoveB to state=DISABLED 2023-07-12 19:17:24,612 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=107, state=SUCCESS; DisableTableProcedure table=GrouptestMultiTableMoveB in 190 msec 2023-07-12 19:17:24,644 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-hbase.properties,hadoop-metrics2.properties 2023-07-12 19:17:24,728 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.MasterRpcServices(1230): Checking to see if procedure is done pid=107 2023-07-12 19:17:24,728 INFO [Listener at localhost.localdomain/34239] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:GrouptestMultiTableMoveB, procId: 107 completed 2023-07-12 19:17:24,730 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.HMaster$5(2228): Client=jenkins//148.251.75.209 delete GrouptestMultiTableMoveB 2023-07-12 19:17:24,731 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] procedure2.ProcedureExecutor(1029): Stored pid=110, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=GrouptestMultiTableMoveB 2023-07-12 19:17:24,733 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=110, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveB 2023-07-12 19:17:24,734 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] 
rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'GrouptestMultiTableMoveB' from rsgroup 'Group_testMultiTableMove_140681636' 2023-07-12 19:17:24,734 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=110, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveB 2023-07-12 19:17:24,737 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 19:17:24,739 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_140681636 2023-07-12 19:17:24,740 DEBUG [HFileArchiver-6] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/.tmp/data/default/GrouptestMultiTableMoveB/8481ed207217456bd2f0345c097edd8e 2023-07-12 19:17:24,740 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 19:17:24,741 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-12 19:17:24,743 DEBUG [HFileArchiver-6] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/.tmp/data/default/GrouptestMultiTableMoveB/8481ed207217456bd2f0345c097edd8e/f, FileablePath, hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/.tmp/data/default/GrouptestMultiTableMoveB/8481ed207217456bd2f0345c097edd8e/recovered.edits] 2023-07-12 19:17:24,748 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.MasterRpcServices(1230): Checking to see if procedure is done pid=110 2023-07-12 19:17:24,754 DEBUG [HFileArchiver-6] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/.tmp/data/default/GrouptestMultiTableMoveB/8481ed207217456bd2f0345c097edd8e/recovered.edits/7.seqid to hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/archive/data/default/GrouptestMultiTableMoveB/8481ed207217456bd2f0345c097edd8e/recovered.edits/7.seqid 2023-07-12 19:17:24,754 DEBUG [HFileArchiver-6] backup.HFileArchiver(596): Deleted hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/.tmp/data/default/GrouptestMultiTableMoveB/8481ed207217456bd2f0345c097edd8e 2023-07-12 19:17:24,755 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(328): Archived GrouptestMultiTableMoveB regions 2023-07-12 19:17:24,758 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=110, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveB 2023-07-12 19:17:24,771 WARN [PEWorker-1] procedure.DeleteTableProcedure(384): Deleting some vestigial 1 rows of GrouptestMultiTableMoveB from hbase:meta 2023-07-12 19:17:24,775 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(421): Removing 'GrouptestMultiTableMoveB' descriptor. 
2023-07-12 19:17:24,776 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=110, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveB 2023-07-12 19:17:24,776 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(411): Removing 'GrouptestMultiTableMoveB' from region states. 2023-07-12 19:17:24,777 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"GrouptestMultiTableMoveB,,1689189442347.8481ed207217456bd2f0345c097edd8e.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689189444776"}]},"ts":"9223372036854775807"} 2023-07-12 19:17:24,778 INFO [PEWorker-1] hbase.MetaTableAccessor(1788): Deleted 1 regions from META 2023-07-12 19:17:24,778 DEBUG [PEWorker-1] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => 8481ed207217456bd2f0345c097edd8e, NAME => 'GrouptestMultiTableMoveB,,1689189442347.8481ed207217456bd2f0345c097edd8e.', STARTKEY => '', ENDKEY => ''}] 2023-07-12 19:17:24,778 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(415): Marking 'GrouptestMultiTableMoveB' as deleted. 2023-07-12 19:17:24,779 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"GrouptestMultiTableMoveB","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689189444778"}]},"ts":"9223372036854775807"} 2023-07-12 19:17:24,781 INFO [PEWorker-1] hbase.MetaTableAccessor(1658): Deleted table GrouptestMultiTableMoveB state from META 2023-07-12 19:17:24,783 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(130): Finished pid=110, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveB 2023-07-12 19:17:24,784 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=110, state=SUCCESS; DeleteTableProcedure table=GrouptestMultiTableMoveB in 53 msec 2023-07-12 19:17:24,850 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.MasterRpcServices(1230): Checking to see if procedure is done pid=110 2023-07-12 19:17:24,850 INFO [Listener at localhost.localdomain/34239] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: default:GrouptestMultiTableMoveB, procId: 110 completed 2023-07-12 19:17:24,854 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-12 19:17:24,854 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 19:17:24,855 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//148.251.75.209 move tables [] to rsgroup default 2023-07-12 19:17:24,855 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-12 19:17:24,855 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveTables 2023-07-12 19:17:24,858 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//148.251.75.209 move servers [jenkins-hbase20.apache.org:36311] to rsgroup default 2023-07-12 19:17:24,861 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 19:17:24,861 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_140681636 2023-07-12 19:17:24,862 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 19:17:24,862 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-12 19:17:24,863 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group Group_testMultiTableMove_140681636, current retry=0 2023-07-12 19:17:24,863 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase20.apache.org,36311,1689189430768] are moved back to Group_testMultiTableMove_140681636 2023-07-12 19:17:24,863 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminServer(438): Move servers done: Group_testMultiTableMove_140681636 => default 2023-07-12 19:17:24,864 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveServers 2023-07-12 19:17:24,865 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//148.251.75.209 remove rsgroup Group_testMultiTableMove_140681636 2023-07-12 19:17:24,869 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 19:17:24,870 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 19:17:24,870 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-12 19:17:24,876 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-12 19:17:24,877 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//148.251.75.209 move tables [] to rsgroup default 2023-07-12 19:17:24,878 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-12 19:17:24,878 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveTables 2023-07-12 19:17:24,879 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//148.251.75.209 move servers [] to rsgroup default 2023-07-12 19:17:24,879 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveServers 2023-07-12 19:17:24,880 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//148.251.75.209 remove rsgroup master 2023-07-12 19:17:24,894 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 19:17:24,895 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-12 19:17:24,896 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-12 19:17:24,898 INFO [Listener at localhost.localdomain/34239] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-12 19:17:24,899 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//148.251.75.209 add rsgroup master 2023-07-12 19:17:24,902 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 19:17:24,902 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 19:17:24,909 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-12 19:17:24,914 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.AddRSGroup 2023-07-12 19:17:24,918 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-12 19:17:24,919 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 19:17:24,922 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//148.251.75.209 move servers [jenkins-hbase20.apache.org:33033] to rsgroup master 2023-07-12 19:17:24,922 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:33033 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-12 19:17:24,922 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] ipc.CallRunner(144): callId: 509 service: MasterService methodName: ExecMasterService size: 120 connection: 148.251.75.209:37696 deadline: 1689190644921, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:33033 is either offline or it does not exist. 2023-07-12 19:17:24,923 WARN [Listener at localhost.localdomain/34239] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:33033 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:33033 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-12 19:17:24,925 INFO [Listener at localhost.localdomain/34239] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 19:17:24,926 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-12 19:17:24,926 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 19:17:24,926 INFO [Listener at localhost.localdomain/34239] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase20.apache.org:36311, jenkins-hbase20.apache.org:36571, jenkins-hbase20.apache.org:39963, jenkins-hbase20.apache.org:43021], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-12 19:17:24,933 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//148.251.75.209 initiates rsgroup info retrieval, group=default 2023-07-12 19:17:24,933 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 19:17:24,957 INFO [Listener at localhost.localdomain/34239] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testMultiTableMove Thread=500 (was 505), OpenFileDescriptor=766 (was 782), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=529 (was 462) - SystemLoadAverage LEAK? -, ProcessCount=169 (was 169), AvailableMemoryMB=4673 (was 3790) - AvailableMemoryMB LEAK? 
- 2023-07-12 19:17:24,979 INFO [Listener at localhost.localdomain/34239] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testRenameRSGroupConstraints Thread=500, OpenFileDescriptor=766, MaxFileDescriptor=60000, SystemLoadAverage=529, ProcessCount=169, AvailableMemoryMB=4666 2023-07-12 19:17:24,979 INFO [Listener at localhost.localdomain/34239] rsgroup.TestRSGroupsBase(132): testRenameRSGroupConstraints 2023-07-12 19:17:24,984 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-12 19:17:24,984 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 19:17:24,985 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//148.251.75.209 move tables [] to rsgroup default 2023-07-12 19:17:24,985 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33033] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-12 19:17:24,986 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveTables 2023-07-12 19:17:24,987 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//148.251.75.209 move servers [] to rsgroup default 2023-07-12 19:17:24,987 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveServers 2023-07-12 19:17:24,988 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//148.251.75.209 remove rsgroup master 2023-07-12 19:17:24,992 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33033] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 19:17:24,993 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33033] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-12 19:17:24,994 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-12 19:17:24,997 INFO [Listener at localhost.localdomain/34239] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-12 19:17:24,998 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//148.251.75.209 add rsgroup master 2023-07-12 19:17:25,000 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33033] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 19:17:25,001 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33033] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 19:17:25,009 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33033] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-12 19:17:25,010 INFO 
[RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.AddRSGroup 2023-07-12 19:17:25,014 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-12 19:17:25,014 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 19:17:25,016 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//148.251.75.209 move servers [jenkins-hbase20.apache.org:33033] to rsgroup master 2023-07-12 19:17:25,017 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33033] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:33033 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-12 19:17:25,017 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33033] ipc.CallRunner(144): callId: 537 service: MasterService methodName: ExecMasterService size: 120 connection: 148.251.75.209:37696 deadline: 1689190645016, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:33033 is either offline or it does not exist. 2023-07-12 19:17:25,017 WARN [Listener at localhost.localdomain/34239] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:33033 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at 
org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:33033 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-12 19:17:25,019 INFO [Listener at localhost.localdomain/34239] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 19:17:25,020 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-12 19:17:25,020 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 19:17:25,020 INFO [Listener at localhost.localdomain/34239] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase20.apache.org:36311, jenkins-hbase20.apache.org:36571, jenkins-hbase20.apache.org:39963, jenkins-hbase20.apache.org:43021], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-12 19:17:25,021 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//148.251.75.209 initiates rsgroup info retrieval, group=default 2023-07-12 19:17:25,021 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 19:17:25,022 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//148.251.75.209 initiates rsgroup info retrieval, group=default 2023-07-12 19:17:25,022 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 19:17:25,023 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//148.251.75.209 add rsgroup oldGroup 2023-07-12 19:17:25,025 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33033] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 19:17:25,025 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33033] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-12 19:17:25,026 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33033] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 19:17:25,028 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33033] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-12 19:17:25,030 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.AddRSGroup 2023-07-12 19:17:25,037 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-12 19:17:25,037 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 19:17:25,042 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33033] 
rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//148.251.75.209 move servers [jenkins-hbase20.apache.org:36311, jenkins-hbase20.apache.org:36571] to rsgroup oldGroup 2023-07-12 19:17:25,045 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33033] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 19:17:25,045 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33033] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-12 19:17:25,045 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33033] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 19:17:25,046 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33033] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-12 19:17:25,047 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33033] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group default, current retry=0 2023-07-12 19:17:25,047 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33033] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase20.apache.org,36311,1689189430768, jenkins-hbase20.apache.org,36571,1689189426727] are moved back to default 2023-07-12 19:17:25,047 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33033] rsgroup.RSGroupAdminServer(438): Move servers done: default => oldGroup 2023-07-12 19:17:25,047 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveServers 2023-07-12 19:17:25,069 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-12 19:17:25,069 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 19:17:25,073 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//148.251.75.209 initiates rsgroup info retrieval, group=oldGroup 2023-07-12 19:17:25,073 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 19:17:25,074 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//148.251.75.209 initiates rsgroup info retrieval, group=oldGroup 2023-07-12 19:17:25,074 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 19:17:25,075 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//148.251.75.209 initiates rsgroup info retrieval, group=default 2023-07-12 19:17:25,075 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 19:17:25,077 INFO 
[RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//148.251.75.209 add rsgroup anotherRSGroup 2023-07-12 19:17:25,080 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33033] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 19:17:25,081 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33033] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/anotherRSGroup 2023-07-12 19:17:25,358 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33033] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-12 19:17:25,358 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33033] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 19:17:25,358 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33033] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-12 19:17:25,360 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.AddRSGroup 2023-07-12 19:17:25,366 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-12 19:17:25,366 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 19:17:25,369 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//148.251.75.209 move servers [jenkins-hbase20.apache.org:39963] to rsgroup anotherRSGroup 2023-07-12 19:17:25,372 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33033] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 19:17:25,373 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33033] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/anotherRSGroup 2023-07-12 19:17:25,373 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33033] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-12 19:17:25,374 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33033] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 19:17:25,374 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33033] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-12 19:17:25,375 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33033] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group default, current retry=0 2023-07-12 19:17:25,376 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33033] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase20.apache.org,39963,1689189426501] are moved back to default 2023-07-12 19:17:25,376 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33033] rsgroup.RSGroupAdminServer(438): Move servers done: default => anotherRSGroup 2023-07-12 19:17:25,377 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveServers 2023-07-12 
19:17:25,381 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-12 19:17:25,381 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 19:17:25,385 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//148.251.75.209 initiates rsgroup info retrieval, group=anotherRSGroup 2023-07-12 19:17:25,385 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 19:17:25,387 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//148.251.75.209 initiates rsgroup info retrieval, group=anotherRSGroup 2023-07-12 19:17:25,387 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 19:17:25,396 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(408): Client=jenkins//148.251.75.209 rename rsgroup from nonExistingRSGroup to newRSGroup1 2023-07-12 19:17:25,397 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33033] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup nonExistingRSGroup does not exist at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.renameRSGroup(RSGroupInfoManagerImpl.java:407) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.renameRSGroup(RSGroupAdminServer.java:617) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.renameRSGroup(RSGroupAdminEndpoint.java:417) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16233) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-12 19:17:25,397 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33033] ipc.CallRunner(144): callId: 571 service: MasterService methodName: ExecMasterService size: 113 connection: 148.251.75.209:37696 deadline: 1689190645395, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup nonExistingRSGroup does not exist 2023-07-12 19:17:25,399 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(408): Client=jenkins//148.251.75.209 rename rsgroup from oldGroup to anotherRSGroup 2023-07-12 19:17:25,399 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33033] ipc.MetricsHBaseServer(134): Unknown exception type 
org.apache.hadoop.hbase.constraint.ConstraintException: Group already exists: anotherRSGroup at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.renameRSGroup(RSGroupInfoManagerImpl.java:410) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.renameRSGroup(RSGroupAdminServer.java:617) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.renameRSGroup(RSGroupAdminEndpoint.java:417) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16233) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-12 19:17:25,399 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33033] ipc.CallRunner(144): callId: 573 service: MasterService methodName: ExecMasterService size: 106 connection: 148.251.75.209:37696 deadline: 1689190645399, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Group already exists: anotherRSGroup 2023-07-12 19:17:25,401 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(408): Client=jenkins//148.251.75.209 rename rsgroup from default to newRSGroup2 2023-07-12 19:17:25,401 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33033] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Can't rename default rsgroup at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.renameRSGroup(RSGroupInfoManagerImpl.java:403) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.renameRSGroup(RSGroupAdminServer.java:617) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.renameRSGroup(RSGroupAdminEndpoint.java:417) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16233) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-12 19:17:25,402 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33033] ipc.CallRunner(144): callId: 575 service: MasterService methodName: ExecMasterService size: 102 connection: 148.251.75.209:37696 deadline: 1689190645401, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Can't rename default rsgroup 2023-07-12 19:17:25,403 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(408): Client=jenkins//148.251.75.209 rename rsgroup from oldGroup to default 2023-07-12 19:17:25,404 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33033] ipc.MetricsHBaseServer(134): Unknown exception type 
org.apache.hadoop.hbase.constraint.ConstraintException: Group already exists: default at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.renameRSGroup(RSGroupInfoManagerImpl.java:410) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.renameRSGroup(RSGroupAdminServer.java:617) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.renameRSGroup(RSGroupAdminEndpoint.java:417) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16233) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-12 19:17:25,404 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33033] ipc.CallRunner(144): callId: 577 service: MasterService methodName: ExecMasterService size: 99 connection: 148.251.75.209:37696 deadline: 1689190645403, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Group already exists: default 2023-07-12 19:17:25,409 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-12 19:17:25,410 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 19:17:25,411 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//148.251.75.209 move tables [] to rsgroup default 2023-07-12 19:17:25,411 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
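The three ConstraintExceptions above all come from the rename preconditions inside RSGroupInfoManagerImpl.renameRSGroup (lines 403, 407 and 410 in the traces): the default group may not be renamed, the source group must exist, and the target name must be free. A minimal sketch of those checks, assuming a simplified map-backed group store and a plain Exception in place of HBase's ConstraintException, could look like this (illustrative only, not the actual HBase source):

  import java.util.HashMap;
  import java.util.Map;

  class RenameConstraintSketch {
    static final String DEFAULT_GROUP = "default";
    private final Map<String, Object> groups = new HashMap<>();

    void renameRSGroup(String oldName, String newName) throws Exception {
      if (DEFAULT_GROUP.equals(oldName)) {
        throw new Exception("Can't rename default rsgroup");           // pattern seen at RSGroupInfoManagerImpl.java:403
      }
      if (!groups.containsKey(oldName)) {
        throw new Exception("RSGroup " + oldName + " does not exist"); // pattern seen at :407
      }
      if (groups.containsKey(newName)) {
        throw new Exception("Group already exists: " + newName);       // pattern seen at :410
      }
      groups.put(newName, groups.remove(oldName));                     // the rename is applied only after all checks pass
    }
  }

This ordering matches the log: renaming nonExistingRSGroup fails on the existence check, oldGroup -> anotherRSGroup and oldGroup -> default fail on the already-exists check, and default -> newRSGroup2 is rejected before any lookup.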
2023-07-12 19:17:25,411 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveTables 2023-07-12 19:17:25,412 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//148.251.75.209 move servers [jenkins-hbase20.apache.org:39963] to rsgroup default 2023-07-12 19:17:25,415 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 19:17:25,415 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/anotherRSGroup 2023-07-12 19:17:25,415 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-12 19:17:25,416 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 19:17:25,416 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-12 19:17:25,417 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group anotherRSGroup, current retry=0 2023-07-12 19:17:25,417 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase20.apache.org,39963,1689189426501] are moved back to anotherRSGroup 2023-07-12 19:17:25,418 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminServer(438): Move servers done: anotherRSGroup => default 2023-07-12 19:17:25,418 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveServers 2023-07-12 19:17:25,419 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//148.251.75.209 remove rsgroup anotherRSGroup 2023-07-12 19:17:25,423 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 19:17:25,423 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-12 19:17:25,423 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 19:17:25,424 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 7 2023-07-12 19:17:25,431 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-12 19:17:25,433 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//148.251.75.209 move tables [] to rsgroup default 2023-07-12 19:17:25,433 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminServer(448): 
moveTables() passed an empty set. Ignoring. 2023-07-12 19:17:25,433 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveTables 2023-07-12 19:17:25,434 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//148.251.75.209 move servers [jenkins-hbase20.apache.org:36311, jenkins-hbase20.apache.org:36571] to rsgroup default 2023-07-12 19:17:25,437 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 19:17:25,437 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-12 19:17:25,438 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 19:17:25,438 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-12 19:17:25,439 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group oldGroup, current retry=0 2023-07-12 19:17:25,439 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase20.apache.org,36311,1689189430768, jenkins-hbase20.apache.org,36571,1689189426727] are moved back to oldGroup 2023-07-12 19:17:25,439 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminServer(438): Move servers done: oldGroup => default 2023-07-12 19:17:25,440 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveServers 2023-07-12 19:17:25,441 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//148.251.75.209 remove rsgroup oldGroup 2023-07-12 19:17:25,445 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 19:17:25,445 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 19:17:25,446 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-12 19:17:25,447 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-12 19:17:25,448 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//148.251.75.209 move tables [] to rsgroup default 2023-07-12 19:17:25,448 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
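Each rsgroup mutation above is followed by a batch of "Updating znode" DEBUG lines: every group is persisted as its own child under /hbase/rsgroup, the whole set is rewritten together, and a GroupInfo count is logged afterwards. The following small sketch only illustrates that per-group path layout; the group names are taken from the log, the actual ZooKeeper write is omitted, and the class itself is a hypothetical stand-in for the real manager code:

  import java.util.List;
  import java.util.stream.Collectors;

  class GroupZnodeLayoutSketch {
    static final String RSGROUP_BASE_ZNODE = "/hbase/rsgroup";  // base path seen in the DEBUG lines

    // Builds the per-group child znode paths that the log shows being refreshed as a batch.
    static List<String> groupZnodes(List<String> groupNames) {
      return groupNames.stream()
          .map(name -> RSGROUP_BASE_ZNODE + "/" + name)
          .collect(Collectors.toList());
    }

    public static void main(String[] args) {
      groupZnodes(List.of("default", "oldGroup", "master"))
          .forEach(znode -> System.out.println("Updating znode: " + znode));
    }
  }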
2023-07-12 19:17:25,448 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveTables 2023-07-12 19:17:25,449 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//148.251.75.209 move servers [] to rsgroup default 2023-07-12 19:17:25,449 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveServers 2023-07-12 19:17:25,450 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//148.251.75.209 remove rsgroup master 2023-07-12 19:17:25,455 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 19:17:25,455 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-12 19:17:25,457 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-12 19:17:25,463 INFO [Listener at localhost.localdomain/34239] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-12 19:17:25,467 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//148.251.75.209 add rsgroup master 2023-07-12 19:17:25,471 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 19:17:25,471 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 19:17:25,480 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-12 19:17:25,482 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.AddRSGroup 2023-07-12 19:17:25,486 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-12 19:17:25,486 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 19:17:25,491 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//148.251.75.209 move servers [jenkins-hbase20.apache.org:33033] to rsgroup master 2023-07-12 19:17:25,491 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:33033 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-12 19:17:25,492 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] ipc.CallRunner(144): callId: 613 service: MasterService methodName: ExecMasterService size: 120 connection: 148.251.75.209:37696 deadline: 1689190645491, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:33033 is either offline or it does not exist. 2023-07-12 19:17:25,492 WARN [Listener at localhost.localdomain/34239] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:33033 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor56.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:33033 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-12 19:17:25,494 INFO [Listener at localhost.localdomain/34239] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 19:17:25,495 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-12 19:17:25,495 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 19:17:25,495 INFO [Listener at localhost.localdomain/34239] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase20.apache.org:36311, jenkins-hbase20.apache.org:36571, jenkins-hbase20.apache.org:39963, jenkins-hbase20.apache.org:43021], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-12 19:17:25,496 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//148.251.75.209 initiates rsgroup info retrieval, group=default 2023-07-12 19:17:25,497 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 19:17:25,515 INFO [Listener at localhost.localdomain/34239] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testRenameRSGroupConstraints Thread=503 (was 500) Potentially hanging thread: hconnection-0x25a58e9d-shared-pool-16 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x25a58e9d-shared-pool-15 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x25a58e9d-shared-pool-17 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x25a58e9d-shared-pool-18 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=766 (was 766), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=529 (was 529), ProcessCount=169 (was 169), AvailableMemoryMB=4480 (was 4666) 2023-07-12 19:17:25,515 WARN [Listener at localhost.localdomain/34239] hbase.ResourceChecker(130): Thread=503 is superior to 500 2023-07-12 19:17:25,536 INFO [Listener at localhost.localdomain/34239] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testRenameRSGroup Thread=503, OpenFileDescriptor=766, MaxFileDescriptor=60000, SystemLoadAverage=529, ProcessCount=169, AvailableMemoryMB=4470 2023-07-12 19:17:25,536 WARN [Listener at localhost.localdomain/34239] hbase.ResourceChecker(130): Thread=503 is superior to 500 2023-07-12 19:17:25,537 INFO [Listener at localhost.localdomain/34239] rsgroup.TestRSGroupsBase(132): testRenameRSGroup 2023-07-12 19:17:25,542 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-12 19:17:25,542 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 19:17:25,543 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//148.251.75.209 move tables [] to rsgroup default 2023-07-12 19:17:25,543 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
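The entries that follow show the suite's per-test cleanup trying to move the active master's address (jenkins-hbase20.apache.org:33033) into the rsgroup named master; the endpoint rejects it with a ConstraintException because the master is not an online regionserver, and TestRSGroupsBase merely logs the failure and continues. A minimal sketch of that call pattern follows, with the client class and method taken from the stack trace below; the exact signatures, the connection setup, and the wrapper class name are assumptions, not code copied from the test.

    import java.util.Collections;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.constraint.ConstraintException;
    import org.apache.hadoop.hbase.net.Address;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

    public final class MoveMasterToGroupSketch {
      public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create())) {
          RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
          // Address of the active master as printed in this log.
          Address master = Address.fromParts("jenkins-hbase20.apache.org", 33033);
          try {
            // Rejected server-side: the master is not an online regionserver, so the
            // rsgroup endpoint answers with a ConstraintException.
            rsGroupAdmin.moveServers(Collections.singleton(master), "master");
          } catch (ConstraintException e) {
            // TestRSGroupsBase records this as "Got this on setup, FYI" and keeps going.
          }
        }
      }
    }
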
2023-07-12 19:17:25,543 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveTables 2023-07-12 19:17:25,544 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//148.251.75.209 move servers [] to rsgroup default 2023-07-12 19:17:25,544 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveServers 2023-07-12 19:17:25,545 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//148.251.75.209 remove rsgroup master 2023-07-12 19:17:25,550 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 19:17:25,550 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-12 19:17:25,551 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-12 19:17:25,555 INFO [Listener at localhost.localdomain/34239] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-12 19:17:25,557 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//148.251.75.209 add rsgroup master 2023-07-12 19:17:25,560 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 19:17:25,561 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 19:17:25,564 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-12 19:17:25,565 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.AddRSGroup 2023-07-12 19:17:25,569 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-12 19:17:25,569 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 19:17:25,578 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//148.251.75.209 move servers [jenkins-hbase20.apache.org:33033] to rsgroup master 2023-07-12 19:17:25,578 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:33033 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-12 19:17:25,578 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] ipc.CallRunner(144): callId: 641 service: MasterService methodName: ExecMasterService size: 120 connection: 148.251.75.209:37696 deadline: 1689190645578, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:33033 is either offline or it does not exist. 2023-07-12 19:17:25,579 WARN [Listener at localhost.localdomain/34239] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:33033 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor56.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:33033 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-12 19:17:25,581 INFO [Listener at localhost.localdomain/34239] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 19:17:25,582 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-12 19:17:25,582 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 19:17:25,583 INFO [Listener at localhost.localdomain/34239] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase20.apache.org:36311, jenkins-hbase20.apache.org:36571, jenkins-hbase20.apache.org:39963, jenkins-hbase20.apache.org:43021], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-12 19:17:25,584 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//148.251.75.209 initiates rsgroup info retrieval, group=default 2023-07-12 19:17:25,584 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 19:17:25,586 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//148.251.75.209 initiates rsgroup info retrieval, group=default 2023-07-12 19:17:25,587 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 19:17:25,588 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//148.251.75.209 add rsgroup oldgroup 2023-07-12 19:17:25,597 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-12 19:17:25,598 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 19:17:25,599 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 19:17:25,599 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-12 19:17:25,601 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.AddRSGroup 2023-07-12 19:17:25,610 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-12 19:17:25,610 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 19:17:25,614 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//148.251.75.209 move servers [jenkins-hbase20.apache.org:36311, jenkins-hbase20.apache.org:36571] to rsgroup oldgroup 2023-07-12 19:17:25,616 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-12 19:17:25,617 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 19:17:25,619 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 19:17:25,619 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-12 19:17:25,620 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group default, current retry=0 2023-07-12 19:17:25,620 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase20.apache.org,36311,1689189430768, jenkins-hbase20.apache.org,36571,1689189426727] are moved back to default 2023-07-12 19:17:25,620 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminServer(438): Move servers done: default => oldgroup 2023-07-12 19:17:25,620 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveServers 2023-07-12 19:17:25,623 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-12 19:17:25,624 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 19:17:25,625 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//148.251.75.209 initiates rsgroup info retrieval, group=oldgroup 2023-07-12 19:17:25,626 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: 
/148.251.75.209) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 19:17:25,627 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.HMaster$4(2112): Client=jenkins//148.251.75.209 create 'testRename', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'tr', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-12 19:17:25,628 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] procedure2.ProcedureExecutor(1029): Stored pid=111, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=testRename 2023-07-12 19:17:25,631 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=111, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=testRename execute state=CREATE_TABLE_PRE_OPERATION 2023-07-12 19:17:25,631 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.MasterRpcServices(700): Client=jenkins//148.251.75.209 procedure request for creating table: namespace: "default" qualifier: "testRename" procId is: 111 2023-07-12 19:17:25,633 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.MasterRpcServices(1230): Checking to see if procedure is done pid=111 2023-07-12 19:17:25,638 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-12 19:17:25,639 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 19:17:25,640 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 19:17:25,640 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-12 19:17:25,657 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=111, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=testRename execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-12 19:17:25,659 DEBUG [HFileArchiver-8] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/.tmp/data/default/testRename/aeeb3efc5a8573e6eca018aeb06a2077 2023-07-12 19:17:25,660 DEBUG [HFileArchiver-8] backup.HFileArchiver(153): Directory hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/.tmp/data/default/testRename/aeeb3efc5a8573e6eca018aeb06a2077 empty. 
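The group bootstrap recorded a few entries above (add rsgroup oldgroup, move servers jenkins-hbase20.apache.org:36311 and :36571 into it, then a GetRSGroupInfo check) corresponds to a client sequence roughly like the sketch below. The RSGroupAdminClient methods are the ones named in this log; the signatures, connection handling, and class name are assumptions.

    import java.util.HashSet;
    import java.util.Set;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.net.Address;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

    public final class OldGroupSetupSketch {
      public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create())) {
          RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
          rsGroupAdmin.addRSGroup("oldgroup");                        // "add rsgroup oldgroup"
          Set<Address> servers = new HashSet<>();
          servers.add(Address.fromParts("jenkins-hbase20.apache.org", 36311));
          servers.add(Address.fromParts("jenkins-hbase20.apache.org", 36571));
          rsGroupAdmin.moveServers(servers, "oldgroup");              // "move servers [...] to rsgroup oldgroup"
          // Read the group back, as the GetRSGroupInfo request in the log does.
          System.out.println(rsGroupAdmin.getRSGroupInfo("oldgroup"));
        }
      }
    }
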
2023-07-12 19:17:25,661 DEBUG [HFileArchiver-8] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/.tmp/data/default/testRename/aeeb3efc5a8573e6eca018aeb06a2077 2023-07-12 19:17:25,661 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived testRename regions 2023-07-12 19:17:25,689 DEBUG [PEWorker-2] util.FSTableDescriptors(570): Wrote into hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/.tmp/data/default/testRename/.tabledesc/.tableinfo.0000000001 2023-07-12 19:17:25,690 INFO [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(7675): creating {ENCODED => aeeb3efc5a8573e6eca018aeb06a2077, NAME => 'testRename,,1689189445627.aeeb3efc5a8573e6eca018aeb06a2077.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='testRename', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'tr', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/.tmp 2023-07-12 19:17:25,708 DEBUG [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(866): Instantiated testRename,,1689189445627.aeeb3efc5a8573e6eca018aeb06a2077.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 19:17:25,709 DEBUG [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1604): Closing aeeb3efc5a8573e6eca018aeb06a2077, disabling compactions & flushes 2023-07-12 19:17:25,709 INFO [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1626): Closing region testRename,,1689189445627.aeeb3efc5a8573e6eca018aeb06a2077. 2023-07-12 19:17:25,709 DEBUG [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on testRename,,1689189445627.aeeb3efc5a8573e6eca018aeb06a2077. 2023-07-12 19:17:25,709 DEBUG [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1714): Acquired close lock on testRename,,1689189445627.aeeb3efc5a8573e6eca018aeb06a2077. after waiting 0 ms 2023-07-12 19:17:25,709 DEBUG [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1724): Updates disabled for region testRename,,1689189445627.aeeb3efc5a8573e6eca018aeb06a2077. 2023-07-12 19:17:25,709 INFO [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1838): Closed testRename,,1689189445627.aeeb3efc5a8573e6eca018aeb06a2077. 2023-07-12 19:17:25,709 DEBUG [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1558): Region close journal for aeeb3efc5a8573e6eca018aeb06a2077: 2023-07-12 19:17:25,712 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=111, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=testRename execute state=CREATE_TABLE_ADD_TO_META 2023-07-12 19:17:25,713 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"testRename,,1689189445627.aeeb3efc5a8573e6eca018aeb06a2077.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689189445713"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689189445713"}]},"ts":"1689189445713"} 2023-07-12 19:17:25,714 INFO [PEWorker-2] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 
2023-07-12 19:17:25,715 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=111, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=testRename execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-12 19:17:25,715 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"testRename","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689189445715"}]},"ts":"1689189445715"} 2023-07-12 19:17:25,717 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=testRename, state=ENABLING in hbase:meta 2023-07-12 19:17:25,730 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase20.apache.org=0} racks are {/default-rack=0} 2023-07-12 19:17:25,731 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-12 19:17:25,731 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-12 19:17:25,731 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-12 19:17:25,731 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=112, ppid=111, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=testRename, region=aeeb3efc5a8573e6eca018aeb06a2077, ASSIGN}] 2023-07-12 19:17:25,734 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=112, ppid=111, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=testRename, region=aeeb3efc5a8573e6eca018aeb06a2077, ASSIGN 2023-07-12 19:17:25,734 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.MasterRpcServices(1230): Checking to see if procedure is done pid=111 2023-07-12 19:17:25,735 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=112, ppid=111, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=testRename, region=aeeb3efc5a8573e6eca018aeb06a2077, ASSIGN; state=OFFLINE, location=jenkins-hbase20.apache.org,39963,1689189426501; forceNewPlan=false, retain=false 2023-07-12 19:17:25,885 INFO [jenkins-hbase20:33033] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
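The create request logged above ('testRename' with REGION_REPLICATION => '1' and a single column family 'tr', everything else at defaults) maps onto the HBase 2.x client API roughly as sketched below; the builder calls are standard 2.x client classes, while the wrapper class and connection setup are assumptions.

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.TableDescriptor;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;

    public final class CreateTestRenameSketch {
      public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             Admin admin = conn.getAdmin()) {
          TableDescriptor desc = TableDescriptorBuilder
              .newBuilder(TableName.valueOf("testRename"))
              .setRegionReplication(1)                                  // REGION_REPLICATION => '1'
              .setColumnFamily(ColumnFamilyDescriptorBuilder.of("tr"))  // family 'tr', defaults otherwise
              .build();
          // Blocks until the CreateTableProcedure (pid=111 in this run) completes.
          admin.createTable(desc);
        }
      }
    }

The blocking behaviour of admin.createTable is what produces the repeated "Checking to see if procedure is done pid=111" probes in this log before the CREATE operation is reported as completed.
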
2023-07-12 19:17:25,886 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=112 updating hbase:meta row=aeeb3efc5a8573e6eca018aeb06a2077, regionState=OPENING, regionLocation=jenkins-hbase20.apache.org,39963,1689189426501 2023-07-12 19:17:25,887 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"testRename,,1689189445627.aeeb3efc5a8573e6eca018aeb06a2077.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689189445886"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689189445886"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689189445886"}]},"ts":"1689189445886"} 2023-07-12 19:17:25,888 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=113, ppid=112, state=RUNNABLE; OpenRegionProcedure aeeb3efc5a8573e6eca018aeb06a2077, server=jenkins-hbase20.apache.org,39963,1689189426501}] 2023-07-12 19:17:25,936 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.MasterRpcServices(1230): Checking to see if procedure is done pid=111 2023-07-12 19:17:26,045 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(130): Open testRename,,1689189445627.aeeb3efc5a8573e6eca018aeb06a2077. 2023-07-12 19:17:26,045 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => aeeb3efc5a8573e6eca018aeb06a2077, NAME => 'testRename,,1689189445627.aeeb3efc5a8573e6eca018aeb06a2077.', STARTKEY => '', ENDKEY => ''} 2023-07-12 19:17:26,046 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table testRename aeeb3efc5a8573e6eca018aeb06a2077 2023-07-12 19:17:26,046 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(866): Instantiated testRename,,1689189445627.aeeb3efc5a8573e6eca018aeb06a2077.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 19:17:26,046 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7894): checking encryption for aeeb3efc5a8573e6eca018aeb06a2077 2023-07-12 19:17:26,046 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7897): checking classloading for aeeb3efc5a8573e6eca018aeb06a2077 2023-07-12 19:17:26,048 INFO [StoreOpener-aeeb3efc5a8573e6eca018aeb06a2077-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family tr of region aeeb3efc5a8573e6eca018aeb06a2077 2023-07-12 19:17:26,050 DEBUG [StoreOpener-aeeb3efc5a8573e6eca018aeb06a2077-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/data/default/testRename/aeeb3efc5a8573e6eca018aeb06a2077/tr 2023-07-12 19:17:26,050 DEBUG [StoreOpener-aeeb3efc5a8573e6eca018aeb06a2077-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/data/default/testRename/aeeb3efc5a8573e6eca018aeb06a2077/tr 2023-07-12 19:17:26,051 INFO [StoreOpener-aeeb3efc5a8573e6eca018aeb06a2077-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, 
maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region aeeb3efc5a8573e6eca018aeb06a2077 columnFamilyName tr 2023-07-12 19:17:26,051 INFO [StoreOpener-aeeb3efc5a8573e6eca018aeb06a2077-1] regionserver.HStore(310): Store=aeeb3efc5a8573e6eca018aeb06a2077/tr, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 19:17:26,053 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/data/default/testRename/aeeb3efc5a8573e6eca018aeb06a2077 2023-07-12 19:17:26,053 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/data/default/testRename/aeeb3efc5a8573e6eca018aeb06a2077 2023-07-12 19:17:26,056 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1055): writing seq id for aeeb3efc5a8573e6eca018aeb06a2077 2023-07-12 19:17:26,059 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/data/default/testRename/aeeb3efc5a8573e6eca018aeb06a2077/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-12 19:17:26,060 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1072): Opened aeeb3efc5a8573e6eca018aeb06a2077; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11631950720, jitterRate=0.0833098292350769}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-12 19:17:26,060 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(965): Region open journal for aeeb3efc5a8573e6eca018aeb06a2077: 2023-07-12 19:17:26,061 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for testRename,,1689189445627.aeeb3efc5a8573e6eca018aeb06a2077., pid=113, masterSystemTime=1689189446041 2023-07-12 19:17:26,066 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for testRename,,1689189445627.aeeb3efc5a8573e6eca018aeb06a2077. 2023-07-12 19:17:26,067 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(158): Opened testRename,,1689189445627.aeeb3efc5a8573e6eca018aeb06a2077. 
2023-07-12 19:17:26,066 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=112 updating hbase:meta row=aeeb3efc5a8573e6eca018aeb06a2077, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase20.apache.org,39963,1689189426501 2023-07-12 19:17:26,067 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"testRename,,1689189445627.aeeb3efc5a8573e6eca018aeb06a2077.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689189446066"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689189446066"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689189446066"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689189446066"}]},"ts":"1689189446066"} 2023-07-12 19:17:26,072 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=113, resume processing ppid=112 2023-07-12 19:17:26,072 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=113, ppid=112, state=SUCCESS; OpenRegionProcedure aeeb3efc5a8573e6eca018aeb06a2077, server=jenkins-hbase20.apache.org,39963,1689189426501 in 181 msec 2023-07-12 19:17:26,074 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=112, resume processing ppid=111 2023-07-12 19:17:26,074 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=112, ppid=111, state=SUCCESS; TransitRegionStateProcedure table=testRename, region=aeeb3efc5a8573e6eca018aeb06a2077, ASSIGN in 341 msec 2023-07-12 19:17:26,075 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=111, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=testRename execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-12 19:17:26,075 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"testRename","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689189446075"}]},"ts":"1689189446075"} 2023-07-12 19:17:26,078 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=testRename, state=ENABLED in hbase:meta 2023-07-12 19:17:26,081 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=111, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=testRename execute state=CREATE_TABLE_POST_OPERATION 2023-07-12 19:17:26,083 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=111, state=SUCCESS; CreateTableProcedure table=testRename in 454 msec 2023-07-12 19:17:26,237 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.MasterRpcServices(1230): Checking to see if procedure is done pid=111 2023-07-12 19:17:26,237 INFO [Listener at localhost.localdomain/34239] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:testRename, procId: 111 completed 2023-07-12 19:17:26,238 DEBUG [Listener at localhost.localdomain/34239] hbase.HBaseTestingUtility(3430): Waiting until all regions of table testRename get assigned. Timeout = 60000ms 2023-07-12 19:17:26,238 INFO [Listener at localhost.localdomain/34239] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 19:17:26,249 INFO [Listener at localhost.localdomain/34239] hbase.HBaseTestingUtility(3484): All regions for table testRename assigned to meta. Checking AM states. 
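The "Waiting until all regions of table testRename get assigned. Timeout = 60000ms" entries above come from the mini-cluster test utility; in test code this is typically a single call like the sketch below. The method name comes from the HBaseTestingUtility class referenced in the log, and the surrounding helper is an assumption.

    import org.apache.hadoop.hbase.HBaseTestingUtility;
    import org.apache.hadoop.hbase.TableName;

    public final class AssignmentWaitSketch {
      // util is assumed to wrap the mini cluster that this suite already started.
      static void waitForTestRename(HBaseTestingUtility util) throws Exception {
        // Polls hbase:meta and the assignment manager states, as the surrounding entries show.
        util.waitUntilAllRegionsAssigned(TableName.valueOf("testRename"));
      }
    }
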
2023-07-12 19:17:26,250 INFO [Listener at localhost.localdomain/34239] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 19:17:26,250 INFO [Listener at localhost.localdomain/34239] hbase.HBaseTestingUtility(3504): All regions for table testRename assigned. 2023-07-12 19:17:26,253 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//148.251.75.209 move tables [testRename] to rsgroup oldgroup 2023-07-12 19:17:26,256 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-12 19:17:26,257 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 19:17:26,257 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 19:17:26,258 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-12 19:17:26,260 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminServer(339): Moving region(s) for table testRename to RSGroup oldgroup 2023-07-12 19:17:26,261 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminServer(345): Moving region aeeb3efc5a8573e6eca018aeb06a2077 to RSGroup oldgroup 2023-07-12 19:17:26,261 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase20.apache.org=0} racks are {/default-rack=0} 2023-07-12 19:17:26,261 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-12 19:17:26,261 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-12 19:17:26,261 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-12 19:17:26,261 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-12 19:17:26,262 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] procedure2.ProcedureExecutor(1029): Stored pid=114, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=testRename, region=aeeb3efc5a8573e6eca018aeb06a2077, REOPEN/MOVE 2023-07-12 19:17:26,262 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group oldgroup, current retry=0 2023-07-12 19:17:26,262 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=114, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=testRename, region=aeeb3efc5a8573e6eca018aeb06a2077, REOPEN/MOVE 2023-07-12 19:17:26,264 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=114 updating hbase:meta row=aeeb3efc5a8573e6eca018aeb06a2077, regionState=CLOSING, regionLocation=jenkins-hbase20.apache.org,39963,1689189426501 2023-07-12 19:17:26,264 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"testRename,,1689189445627.aeeb3efc5a8573e6eca018aeb06a2077.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689189446264"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689189446264"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689189446264"}]},"ts":"1689189446264"} 2023-07-12 19:17:26,266 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=115, ppid=114, state=RUNNABLE; CloseRegionProcedure aeeb3efc5a8573e6eca018aeb06a2077, server=jenkins-hbase20.apache.org,39963,1689189426501}] 2023-07-12 19:17:26,421 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.UnassignRegionHandler(111): Close aeeb3efc5a8573e6eca018aeb06a2077 2023-07-12 19:17:26,422 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing aeeb3efc5a8573e6eca018aeb06a2077, disabling compactions & flushes 2023-07-12 19:17:26,422 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region testRename,,1689189445627.aeeb3efc5a8573e6eca018aeb06a2077. 2023-07-12 19:17:26,422 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on testRename,,1689189445627.aeeb3efc5a8573e6eca018aeb06a2077. 2023-07-12 19:17:26,422 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on testRename,,1689189445627.aeeb3efc5a8573e6eca018aeb06a2077. after waiting 0 ms 2023-07-12 19:17:26,422 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region testRename,,1689189445627.aeeb3efc5a8573e6eca018aeb06a2077. 2023-07-12 19:17:26,437 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/data/default/testRename/aeeb3efc5a8573e6eca018aeb06a2077/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-12 19:17:26,438 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1838): Closed testRename,,1689189445627.aeeb3efc5a8573e6eca018aeb06a2077. 
2023-07-12 19:17:26,438 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for aeeb3efc5a8573e6eca018aeb06a2077: 2023-07-12 19:17:26,438 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(3513): Adding aeeb3efc5a8573e6eca018aeb06a2077 move to jenkins-hbase20.apache.org,36571,1689189426727 record at close sequenceid=2 2023-07-12 19:17:26,440 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.UnassignRegionHandler(149): Closed aeeb3efc5a8573e6eca018aeb06a2077 2023-07-12 19:17:26,442 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=114 updating hbase:meta row=aeeb3efc5a8573e6eca018aeb06a2077, regionState=CLOSED 2023-07-12 19:17:26,442 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"testRename,,1689189445627.aeeb3efc5a8573e6eca018aeb06a2077.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689189446442"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689189446442"}]},"ts":"1689189446442"} 2023-07-12 19:17:26,445 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=115, resume processing ppid=114 2023-07-12 19:17:26,445 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=115, ppid=114, state=SUCCESS; CloseRegionProcedure aeeb3efc5a8573e6eca018aeb06a2077, server=jenkins-hbase20.apache.org,39963,1689189426501 in 178 msec 2023-07-12 19:17:26,446 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=114, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=testRename, region=aeeb3efc5a8573e6eca018aeb06a2077, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase20.apache.org,36571,1689189426727; forceNewPlan=false, retain=false 2023-07-12 19:17:26,596 INFO [jenkins-hbase20:33033] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 2023-07-12 19:17:26,597 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=114 updating hbase:meta row=aeeb3efc5a8573e6eca018aeb06a2077, regionState=OPENING, regionLocation=jenkins-hbase20.apache.org,36571,1689189426727 2023-07-12 19:17:26,597 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"testRename,,1689189445627.aeeb3efc5a8573e6eca018aeb06a2077.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689189446597"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689189446597"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689189446597"}]},"ts":"1689189446597"} 2023-07-12 19:17:26,599 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=116, ppid=114, state=RUNNABLE; OpenRegionProcedure aeeb3efc5a8573e6eca018aeb06a2077, server=jenkins-hbase20.apache.org,36571,1689189426727}] 2023-07-12 19:17:26,758 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(130): Open testRename,,1689189445627.aeeb3efc5a8573e6eca018aeb06a2077. 
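The "move tables [testRename] to rsgroup oldgroup" request above is what drives this close of region aeeb3efc5a8573e6eca018aeb06a2077 on jenkins-hbase20.apache.org:39963 and the reopen on :36571 that follows. On the client side it is a single call, sketched here against the 2.4 rsgroup admin client named in this log; the signature and connection setup are assumptions.

    import java.util.Collections;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

    public final class MoveTablesSketch {
      public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create())) {
          RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
          // Triggers a REOPEN/MOVE TransitRegionStateProcedure for each region of the table.
          rsGroupAdmin.moveTables(Collections.singleton(TableName.valueOf("testRename")), "oldgroup");
        }
      }
    }

The call does not return until the move finishes, which matches the "waitFor pid=114" and "All regions from table(s) [testRename] moved to target group oldgroup" entries further down.
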
2023-07-12 19:17:26,758 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => aeeb3efc5a8573e6eca018aeb06a2077, NAME => 'testRename,,1689189445627.aeeb3efc5a8573e6eca018aeb06a2077.', STARTKEY => '', ENDKEY => ''} 2023-07-12 19:17:26,759 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table testRename aeeb3efc5a8573e6eca018aeb06a2077 2023-07-12 19:17:26,759 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(866): Instantiated testRename,,1689189445627.aeeb3efc5a8573e6eca018aeb06a2077.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 19:17:26,759 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7894): checking encryption for aeeb3efc5a8573e6eca018aeb06a2077 2023-07-12 19:17:26,759 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7897): checking classloading for aeeb3efc5a8573e6eca018aeb06a2077 2023-07-12 19:17:26,769 INFO [StoreOpener-aeeb3efc5a8573e6eca018aeb06a2077-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family tr of region aeeb3efc5a8573e6eca018aeb06a2077 2023-07-12 19:17:26,777 DEBUG [StoreOpener-aeeb3efc5a8573e6eca018aeb06a2077-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/data/default/testRename/aeeb3efc5a8573e6eca018aeb06a2077/tr 2023-07-12 19:17:26,777 DEBUG [StoreOpener-aeeb3efc5a8573e6eca018aeb06a2077-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/data/default/testRename/aeeb3efc5a8573e6eca018aeb06a2077/tr 2023-07-12 19:17:26,779 INFO [StoreOpener-aeeb3efc5a8573e6eca018aeb06a2077-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region aeeb3efc5a8573e6eca018aeb06a2077 columnFamilyName tr 2023-07-12 19:17:26,780 INFO [StoreOpener-aeeb3efc5a8573e6eca018aeb06a2077-1] regionserver.HStore(310): Store=aeeb3efc5a8573e6eca018aeb06a2077/tr, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 19:17:26,782 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/data/default/testRename/aeeb3efc5a8573e6eca018aeb06a2077 2023-07-12 19:17:26,784 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 
recovered edits file(s) under hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/data/default/testRename/aeeb3efc5a8573e6eca018aeb06a2077 2023-07-12 19:17:26,792 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1055): writing seq id for aeeb3efc5a8573e6eca018aeb06a2077 2023-07-12 19:17:26,793 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1072): Opened aeeb3efc5a8573e6eca018aeb06a2077; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11386909760, jitterRate=0.060488611459732056}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-12 19:17:26,794 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(965): Region open journal for aeeb3efc5a8573e6eca018aeb06a2077: 2023-07-12 19:17:26,795 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for testRename,,1689189445627.aeeb3efc5a8573e6eca018aeb06a2077., pid=116, masterSystemTime=1689189446751 2023-07-12 19:17:26,797 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for testRename,,1689189445627.aeeb3efc5a8573e6eca018aeb06a2077. 2023-07-12 19:17:26,797 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(158): Opened testRename,,1689189445627.aeeb3efc5a8573e6eca018aeb06a2077. 2023-07-12 19:17:26,798 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=114 updating hbase:meta row=aeeb3efc5a8573e6eca018aeb06a2077, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase20.apache.org,36571,1689189426727 2023-07-12 19:17:26,798 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"testRename,,1689189445627.aeeb3efc5a8573e6eca018aeb06a2077.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689189446798"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689189446798"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689189446798"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689189446798"}]},"ts":"1689189446798"} 2023-07-12 19:17:26,806 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=116, resume processing ppid=114 2023-07-12 19:17:26,806 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=116, ppid=114, state=SUCCESS; OpenRegionProcedure aeeb3efc5a8573e6eca018aeb06a2077, server=jenkins-hbase20.apache.org,36571,1689189426727 in 204 msec 2023-07-12 19:17:26,809 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=114, state=SUCCESS; TransitRegionStateProcedure table=testRename, region=aeeb3efc5a8573e6eca018aeb06a2077, REOPEN/MOVE in 545 msec 2023-07-12 19:17:26,987 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'testRename' 2023-07-12 19:17:27,263 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] procedure.ProcedureSyncWait(216): waitFor pid=114 2023-07-12 19:17:27,263 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminServer(369): All regions from table(s) [testRename] moved to target group oldgroup. 
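After the move completes, one way to double-check the placement these entries report is to compare each region's hosting server against the group membership. This is only an illustrative sketch, not the suite's own verification code, and the getter names are assumed from the 2.x client API.

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.HRegionLocation;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.RegionLocator;
    import org.apache.hadoop.hbase.net.Address;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
    import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

    public final class RegionPlacementCheckSketch {
      public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             RegionLocator locator = conn.getRegionLocator(TableName.valueOf("testRename"))) {
          RSGroupInfo oldgroup = new RSGroupAdminClient(conn).getRSGroupInfo("oldgroup");
          for (HRegionLocation loc : locator.getAllRegionLocations()) {
            Address hostedOn = loc.getServerName().getAddress();
            // Every region of testRename should now sit on one of oldgroup's servers.
            if (!oldgroup.getServers().contains(hostedOn)) {
              throw new IllegalStateException("Region " + loc.getRegion().getEncodedName()
                  + " is on " + hostedOn + ", outside group oldgroup");
            }
          }
        }
      }
    }

The entries that follow repeat the earlier pattern for a second group: add rsgroup normal, then move jenkins-hbase20.apache.org:39963 into it.
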
2023-07-12 19:17:27,263 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveTables 2023-07-12 19:17:27,266 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-12 19:17:27,266 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 19:17:27,268 INFO [Listener at localhost.localdomain/34239] hbase.Waiter(180): Waiting up to [1,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 19:17:27,269 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//148.251.75.209 initiates rsgroup info retrieval, table=testRename 2023-07-12 19:17:27,269 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-12 19:17:27,270 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//148.251.75.209 initiates rsgroup info retrieval, group=oldgroup 2023-07-12 19:17:27,270 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 19:17:27,271 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//148.251.75.209 initiates rsgroup info retrieval, table=testRename 2023-07-12 19:17:27,271 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-12 19:17:27,272 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//148.251.75.209 initiates rsgroup info retrieval, group=default 2023-07-12 19:17:27,272 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 19:17:27,273 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//148.251.75.209 add rsgroup normal 2023-07-12 19:17:27,275 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-12 19:17:27,276 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-12 19:17:27,284 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 19:17:27,284 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: 
/hbase/rsgroup/master 2023-07-12 19:17:27,285 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-12 19:17:27,286 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.AddRSGroup 2023-07-12 19:17:27,289 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-12 19:17:27,289 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 19:17:27,292 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//148.251.75.209 move servers [jenkins-hbase20.apache.org:39963] to rsgroup normal 2023-07-12 19:17:27,295 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-12 19:17:27,295 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-12 19:17:27,296 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 19:17:27,296 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 19:17:27,296 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-12 19:17:27,310 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group default, current retry=0 2023-07-12 19:17:27,310 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase20.apache.org,39963,1689189426501] are moved back to default 2023-07-12 19:17:27,310 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminServer(438): Move servers done: default => normal 2023-07-12 19:17:27,311 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveServers 2023-07-12 19:17:27,314 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-12 19:17:27,314 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 19:17:27,317 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//148.251.75.209 initiates rsgroup info retrieval, group=normal 2023-07-12 19:17:27,317 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service 
request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 19:17:27,319 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.HMaster$4(2112): Client=jenkins//148.251.75.209 create 'unmovedTable', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'ut', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-12 19:17:27,320 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] procedure2.ProcedureExecutor(1029): Stored pid=117, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=unmovedTable 2023-07-12 19:17:27,326 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=117, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=unmovedTable execute state=CREATE_TABLE_PRE_OPERATION 2023-07-12 19:17:27,327 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.MasterRpcServices(700): Client=jenkins//148.251.75.209 procedure request for creating table: namespace: "default" qualifier: "unmovedTable" procId is: 117 2023-07-12 19:17:27,328 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.MasterRpcServices(1230): Checking to see if procedure is done pid=117 2023-07-12 19:17:27,329 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-12 19:17:27,330 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-12 19:17:27,330 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 19:17:27,330 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 19:17:27,331 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-12 19:17:27,333 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=117, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=unmovedTable execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-12 19:17:27,335 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/.tmp/data/default/unmovedTable/845df8e2a52065b03d70b26a7a732653 2023-07-12 19:17:27,335 DEBUG [HFileArchiver-4] backup.HFileArchiver(153): Directory hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/.tmp/data/default/unmovedTable/845df8e2a52065b03d70b26a7a732653 empty. 
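Just before the unmovedTable create begins above, the endpoint logs an AddRSGroup for normal followed by a MoveServers of jenkins-hbase20.apache.org:39963 into it ("Move servers done: default => normal"). A hedged sketch of the equivalent client calls, under the same RSGroupAdminClient assumption as the earlier sketch; the host:port is the one shown in the log:

    import java.util.Collections;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.net.Address;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

    public class AddGroupAndMoveServer {
      public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create())) {
          RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
          rsGroupAdmin.addRSGroup("normal");  // AddRSGroup in the log
          Address rs = Address.fromParts("jenkins-hbase20.apache.org", 39963);
          // MoveServers: default => normal; any regions on that server would be
          // drained back to the source group first (0 regions in this run).
          rsGroupAdmin.moveServers(Collections.singleton(rs), "normal");
        }
      }
    }
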
2023-07-12 19:17:27,336 DEBUG [HFileArchiver-4] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/.tmp/data/default/unmovedTable/845df8e2a52065b03d70b26a7a732653 2023-07-12 19:17:27,336 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived unmovedTable regions 2023-07-12 19:17:27,358 DEBUG [PEWorker-2] util.FSTableDescriptors(570): Wrote into hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/.tmp/data/default/unmovedTable/.tabledesc/.tableinfo.0000000001 2023-07-12 19:17:27,361 INFO [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(7675): creating {ENCODED => 845df8e2a52065b03d70b26a7a732653, NAME => 'unmovedTable,,1689189447319.845df8e2a52065b03d70b26a7a732653.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='unmovedTable', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'ut', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/.tmp 2023-07-12 19:17:27,429 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.MasterRpcServices(1230): Checking to see if procedure is done pid=117 2023-07-12 19:17:27,631 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.MasterRpcServices(1230): Checking to see if procedure is done pid=117 2023-07-12 19:17:27,785 DEBUG [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(866): Instantiated unmovedTable,,1689189447319.845df8e2a52065b03d70b26a7a732653.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 19:17:27,785 DEBUG [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1604): Closing 845df8e2a52065b03d70b26a7a732653, disabling compactions & flushes 2023-07-12 19:17:27,786 INFO [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1626): Closing region unmovedTable,,1689189447319.845df8e2a52065b03d70b26a7a732653. 2023-07-12 19:17:27,786 DEBUG [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on unmovedTable,,1689189447319.845df8e2a52065b03d70b26a7a732653. 2023-07-12 19:17:27,786 DEBUG [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1714): Acquired close lock on unmovedTable,,1689189447319.845df8e2a52065b03d70b26a7a732653. after waiting 0 ms 2023-07-12 19:17:27,786 DEBUG [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1724): Updates disabled for region unmovedTable,,1689189447319.845df8e2a52065b03d70b26a7a732653. 2023-07-12 19:17:27,786 INFO [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1838): Closed unmovedTable,,1689189447319.845df8e2a52065b03d70b26a7a732653. 
2023-07-12 19:17:27,786 DEBUG [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1558): Region close journal for 845df8e2a52065b03d70b26a7a732653: 2023-07-12 19:17:27,789 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=117, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=unmovedTable execute state=CREATE_TABLE_ADD_TO_META 2023-07-12 19:17:27,790 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"unmovedTable,,1689189447319.845df8e2a52065b03d70b26a7a732653.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689189447790"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689189447790"}]},"ts":"1689189447790"} 2023-07-12 19:17:27,792 INFO [PEWorker-2] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-12 19:17:27,793 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=117, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=unmovedTable execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-12 19:17:27,793 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"unmovedTable","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689189447793"}]},"ts":"1689189447793"} 2023-07-12 19:17:27,795 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=unmovedTable, state=ENABLING in hbase:meta 2023-07-12 19:17:27,799 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=118, ppid=117, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=unmovedTable, region=845df8e2a52065b03d70b26a7a732653, ASSIGN}] 2023-07-12 19:17:27,806 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=118, ppid=117, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=unmovedTable, region=845df8e2a52065b03d70b26a7a732653, ASSIGN 2023-07-12 19:17:27,808 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=118, ppid=117, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=unmovedTable, region=845df8e2a52065b03d70b26a7a732653, ASSIGN; state=OFFLINE, location=jenkins-hbase20.apache.org,43021,1689189426641; forceNewPlan=false, retain=false 2023-07-12 19:17:27,932 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.MasterRpcServices(1230): Checking to see if procedure is done pid=117 2023-07-12 19:17:27,960 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=118 updating hbase:meta row=845df8e2a52065b03d70b26a7a732653, regionState=OPENING, regionLocation=jenkins-hbase20.apache.org,43021,1689189426641 2023-07-12 19:17:27,961 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"unmovedTable,,1689189447319.845df8e2a52065b03d70b26a7a732653.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689189447960"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689189447960"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689189447960"}]},"ts":"1689189447960"} 2023-07-12 19:17:27,964 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=119, ppid=118, state=RUNNABLE; OpenRegionProcedure 845df8e2a52065b03d70b26a7a732653, server=jenkins-hbase20.apache.org,43021,1689189426641}] 2023-07-12 19:17:28,119 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(130): Open 
unmovedTable,,1689189447319.845df8e2a52065b03d70b26a7a732653. 2023-07-12 19:17:28,119 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 845df8e2a52065b03d70b26a7a732653, NAME => 'unmovedTable,,1689189447319.845df8e2a52065b03d70b26a7a732653.', STARTKEY => '', ENDKEY => ''} 2023-07-12 19:17:28,120 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table unmovedTable 845df8e2a52065b03d70b26a7a732653 2023-07-12 19:17:28,120 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(866): Instantiated unmovedTable,,1689189447319.845df8e2a52065b03d70b26a7a732653.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 19:17:28,120 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7894): checking encryption for 845df8e2a52065b03d70b26a7a732653 2023-07-12 19:17:28,120 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7897): checking classloading for 845df8e2a52065b03d70b26a7a732653 2023-07-12 19:17:28,121 INFO [StoreOpener-845df8e2a52065b03d70b26a7a732653-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family ut of region 845df8e2a52065b03d70b26a7a732653 2023-07-12 19:17:28,123 DEBUG [StoreOpener-845df8e2a52065b03d70b26a7a732653-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/data/default/unmovedTable/845df8e2a52065b03d70b26a7a732653/ut 2023-07-12 19:17:28,123 DEBUG [StoreOpener-845df8e2a52065b03d70b26a7a732653-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/data/default/unmovedTable/845df8e2a52065b03d70b26a7a732653/ut 2023-07-12 19:17:28,123 INFO [StoreOpener-845df8e2a52065b03d70b26a7a732653-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 845df8e2a52065b03d70b26a7a732653 columnFamilyName ut 2023-07-12 19:17:28,124 INFO [StoreOpener-845df8e2a52065b03d70b26a7a732653-1] regionserver.HStore(310): Store=845df8e2a52065b03d70b26a7a732653/ut, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 19:17:28,125 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/data/default/unmovedTable/845df8e2a52065b03d70b26a7a732653 2023-07-12 19:17:28,125 DEBUG 
[RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/data/default/unmovedTable/845df8e2a52065b03d70b26a7a732653 2023-07-12 19:17:28,128 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1055): writing seq id for 845df8e2a52065b03d70b26a7a732653 2023-07-12 19:17:28,131 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/data/default/unmovedTable/845df8e2a52065b03d70b26a7a732653/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-12 19:17:28,131 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1072): Opened 845df8e2a52065b03d70b26a7a732653; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10635414880, jitterRate=-0.009499803185462952}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-12 19:17:28,131 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(965): Region open journal for 845df8e2a52065b03d70b26a7a732653: 2023-07-12 19:17:28,132 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for unmovedTable,,1689189447319.845df8e2a52065b03d70b26a7a732653., pid=119, masterSystemTime=1689189448116 2023-07-12 19:17:28,133 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for unmovedTable,,1689189447319.845df8e2a52065b03d70b26a7a732653. 2023-07-12 19:17:28,134 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(158): Opened unmovedTable,,1689189447319.845df8e2a52065b03d70b26a7a732653. 
2023-07-12 19:17:28,134 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=118 updating hbase:meta row=845df8e2a52065b03d70b26a7a732653, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase20.apache.org,43021,1689189426641 2023-07-12 19:17:28,134 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"unmovedTable,,1689189447319.845df8e2a52065b03d70b26a7a732653.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689189448134"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689189448134"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689189448134"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689189448134"}]},"ts":"1689189448134"} 2023-07-12 19:17:28,137 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=119, resume processing ppid=118 2023-07-12 19:17:28,137 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=119, ppid=118, state=SUCCESS; OpenRegionProcedure 845df8e2a52065b03d70b26a7a732653, server=jenkins-hbase20.apache.org,43021,1689189426641 in 172 msec 2023-07-12 19:17:28,139 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=118, resume processing ppid=117 2023-07-12 19:17:28,139 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=118, ppid=117, state=SUCCESS; TransitRegionStateProcedure table=unmovedTable, region=845df8e2a52065b03d70b26a7a732653, ASSIGN in 339 msec 2023-07-12 19:17:28,139 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=117, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=unmovedTable execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-12 19:17:28,139 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"unmovedTable","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689189448139"}]},"ts":"1689189448139"} 2023-07-12 19:17:28,141 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=unmovedTable, state=ENABLED in hbase:meta 2023-07-12 19:17:28,143 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=117, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=unmovedTable execute state=CREATE_TABLE_POST_OPERATION 2023-07-12 19:17:28,144 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=117, state=SUCCESS; CreateTableProcedure table=unmovedTable in 824 msec 2023-07-12 19:17:28,434 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.MasterRpcServices(1230): Checking to see if procedure is done pid=117 2023-07-12 19:17:28,434 INFO [Listener at localhost.localdomain/34239] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:unmovedTable, procId: 117 completed 2023-07-12 19:17:28,434 DEBUG [Listener at localhost.localdomain/34239] hbase.HBaseTestingUtility(3430): Waiting until all regions of table unmovedTable get assigned. Timeout = 60000ms 2023-07-12 19:17:28,435 INFO [Listener at localhost.localdomain/34239] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 19:17:28,438 INFO [Listener at localhost.localdomain/34239] hbase.HBaseTestingUtility(3484): All regions for table unmovedTable assigned to meta. Checking AM states. 
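CreateTableProcedure pid=117 finishes above and the listener waits for unmovedTable to be assigned. A minimal sketch of a create that mirrors the descriptor printed by HMaster (one family 'ut', VERSIONS => 1, BLOOMFILTER => NONE, other attributes left at defaults); names are taken from the log, the code itself is illustrative:

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
    import org.apache.hadoop.hbase.regionserver.BloomType;
    import org.apache.hadoop.hbase.util.Bytes;

    public class CreateUnmovedTable {
      public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             Admin admin = conn.getAdmin()) {
          TableDescriptorBuilder table = TableDescriptorBuilder.newBuilder(TableName.valueOf("unmovedTable"))
              .setColumnFamily(ColumnFamilyDescriptorBuilder.newBuilder(Bytes.toBytes("ut"))
                  .setMaxVersions(1)               // VERSIONS => '1'
                  .setBloomFilterType(BloomType.NONE)  // BLOOMFILTER => 'NONE'
                  .build());
          // Drives a CreateTableProcedure like pid=117: write FS layout, add to meta, assign.
          admin.createTable(table.build());
        }
      }
    }
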
2023-07-12 19:17:28,438 INFO [Listener at localhost.localdomain/34239] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 19:17:28,438 INFO [Listener at localhost.localdomain/34239] hbase.HBaseTestingUtility(3504): All regions for table unmovedTable assigned. 2023-07-12 19:17:28,441 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//148.251.75.209 move tables [unmovedTable] to rsgroup normal 2023-07-12 19:17:28,443 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-12 19:17:28,443 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-12 19:17:28,444 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 19:17:28,444 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 19:17:28,444 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-12 19:17:28,445 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminServer(339): Moving region(s) for table unmovedTable to RSGroup normal 2023-07-12 19:17:28,446 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminServer(345): Moving region 845df8e2a52065b03d70b26a7a732653 to RSGroup normal 2023-07-12 19:17:28,446 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] procedure2.ProcedureExecutor(1029): Stored pid=120, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=unmovedTable, region=845df8e2a52065b03d70b26a7a732653, REOPEN/MOVE 2023-07-12 19:17:28,447 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group normal, current retry=0 2023-07-12 19:17:28,447 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=120, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=unmovedTable, region=845df8e2a52065b03d70b26a7a732653, REOPEN/MOVE 2023-07-12 19:17:28,447 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=120 updating hbase:meta row=845df8e2a52065b03d70b26a7a732653, regionState=CLOSING, regionLocation=jenkins-hbase20.apache.org,43021,1689189426641 2023-07-12 19:17:28,448 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"unmovedTable,,1689189447319.845df8e2a52065b03d70b26a7a732653.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689189448447"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689189448447"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689189448447"}]},"ts":"1689189448447"} 2023-07-12 19:17:28,449 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=121, ppid=120, state=RUNNABLE; CloseRegionProcedure 845df8e2a52065b03d70b26a7a732653, server=jenkins-hbase20.apache.org,43021,1689189426641}] 2023-07-12 19:17:28,602 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.UnassignRegionHandler(111): Close 845df8e2a52065b03d70b26a7a732653 2023-07-12 19:17:28,603 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] 
regionserver.HRegion(1604): Closing 845df8e2a52065b03d70b26a7a732653, disabling compactions & flushes 2023-07-12 19:17:28,603 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region unmovedTable,,1689189447319.845df8e2a52065b03d70b26a7a732653. 2023-07-12 19:17:28,603 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on unmovedTable,,1689189447319.845df8e2a52065b03d70b26a7a732653. 2023-07-12 19:17:28,603 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on unmovedTable,,1689189447319.845df8e2a52065b03d70b26a7a732653. after waiting 0 ms 2023-07-12 19:17:28,603 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region unmovedTable,,1689189447319.845df8e2a52065b03d70b26a7a732653. 2023-07-12 19:17:28,607 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/data/default/unmovedTable/845df8e2a52065b03d70b26a7a732653/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-12 19:17:28,608 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1838): Closed unmovedTable,,1689189447319.845df8e2a52065b03d70b26a7a732653. 2023-07-12 19:17:28,608 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for 845df8e2a52065b03d70b26a7a732653: 2023-07-12 19:17:28,608 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(3513): Adding 845df8e2a52065b03d70b26a7a732653 move to jenkins-hbase20.apache.org,39963,1689189426501 record at close sequenceid=2 2023-07-12 19:17:28,610 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.UnassignRegionHandler(149): Closed 845df8e2a52065b03d70b26a7a732653 2023-07-12 19:17:28,610 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=120 updating hbase:meta row=845df8e2a52065b03d70b26a7a732653, regionState=CLOSED 2023-07-12 19:17:28,610 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"unmovedTable,,1689189447319.845df8e2a52065b03d70b26a7a732653.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689189448610"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689189448610"}]},"ts":"1689189448610"} 2023-07-12 19:17:28,618 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=121, resume processing ppid=120 2023-07-12 19:17:28,618 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=121, ppid=120, state=SUCCESS; CloseRegionProcedure 845df8e2a52065b03d70b26a7a732653, server=jenkins-hbase20.apache.org,43021,1689189426641 in 163 msec 2023-07-12 19:17:28,619 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=120, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=unmovedTable, region=845df8e2a52065b03d70b26a7a732653, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase20.apache.org,39963,1689189426501; forceNewPlan=false, retain=false 2023-07-12 19:17:28,770 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=120 updating hbase:meta row=845df8e2a52065b03d70b26a7a732653, regionState=OPENING, regionLocation=jenkins-hbase20.apache.org,39963,1689189426501 2023-07-12 19:17:28,770 DEBUG [PEWorker-4] assignment.RegionStateStore(405): 
Put {"totalColumns":3,"row":"unmovedTable,,1689189447319.845df8e2a52065b03d70b26a7a732653.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689189448770"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689189448770"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689189448770"}]},"ts":"1689189448770"} 2023-07-12 19:17:28,774 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=122, ppid=120, state=RUNNABLE; OpenRegionProcedure 845df8e2a52065b03d70b26a7a732653, server=jenkins-hbase20.apache.org,39963,1689189426501}] 2023-07-12 19:17:28,931 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(130): Open unmovedTable,,1689189447319.845df8e2a52065b03d70b26a7a732653. 2023-07-12 19:17:28,931 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 845df8e2a52065b03d70b26a7a732653, NAME => 'unmovedTable,,1689189447319.845df8e2a52065b03d70b26a7a732653.', STARTKEY => '', ENDKEY => ''} 2023-07-12 19:17:28,931 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table unmovedTable 845df8e2a52065b03d70b26a7a732653 2023-07-12 19:17:28,931 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(866): Instantiated unmovedTable,,1689189447319.845df8e2a52065b03d70b26a7a732653.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 19:17:28,932 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7894): checking encryption for 845df8e2a52065b03d70b26a7a732653 2023-07-12 19:17:28,932 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7897): checking classloading for 845df8e2a52065b03d70b26a7a732653 2023-07-12 19:17:28,933 INFO [StoreOpener-845df8e2a52065b03d70b26a7a732653-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family ut of region 845df8e2a52065b03d70b26a7a732653 2023-07-12 19:17:28,934 DEBUG [StoreOpener-845df8e2a52065b03d70b26a7a732653-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/data/default/unmovedTable/845df8e2a52065b03d70b26a7a732653/ut 2023-07-12 19:17:28,934 DEBUG [StoreOpener-845df8e2a52065b03d70b26a7a732653-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/data/default/unmovedTable/845df8e2a52065b03d70b26a7a732653/ut 2023-07-12 19:17:28,934 INFO [StoreOpener-845df8e2a52065b03d70b26a7a732653-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory 
org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 845df8e2a52065b03d70b26a7a732653 columnFamilyName ut 2023-07-12 19:17:28,935 INFO [StoreOpener-845df8e2a52065b03d70b26a7a732653-1] regionserver.HStore(310): Store=845df8e2a52065b03d70b26a7a732653/ut, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 19:17:28,936 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/data/default/unmovedTable/845df8e2a52065b03d70b26a7a732653 2023-07-12 19:17:28,937 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/data/default/unmovedTable/845df8e2a52065b03d70b26a7a732653 2023-07-12 19:17:28,939 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1055): writing seq id for 845df8e2a52065b03d70b26a7a732653 2023-07-12 19:17:28,940 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1072): Opened 845df8e2a52065b03d70b26a7a732653; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11114276960, jitterRate=0.03509770333766937}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-12 19:17:28,940 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(965): Region open journal for 845df8e2a52065b03d70b26a7a732653: 2023-07-12 19:17:28,941 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for unmovedTable,,1689189447319.845df8e2a52065b03d70b26a7a732653., pid=122, masterSystemTime=1689189448926 2023-07-12 19:17:28,942 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for unmovedTable,,1689189447319.845df8e2a52065b03d70b26a7a732653. 2023-07-12 19:17:28,942 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(158): Opened unmovedTable,,1689189447319.845df8e2a52065b03d70b26a7a732653. 
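The reopen just above is the tail of moving unmovedTable into the normal group; the confirmation ("All regions from table(s) [unmovedTable] moved to target group normal.") follows in the next entries. The GetRSGroupInfo and GetRSGroupInfoOfTable requests sprinkled through this section correspond to lookups like the following hedged sketch (same RSGroupAdminClient assumption as above):

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
    import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

    public class InspectGroups {
      public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create())) {
          RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
          // GetRSGroupInfo: servers and tables currently assigned to the 'normal' group.
          RSGroupInfo normal = rsGroupAdmin.getRSGroupInfo("normal");
          System.out.println(normal.getName() + " servers=" + normal.getServers()
              + " tables=" + normal.getTables());
          // GetRSGroupInfoOfTable: which group a given table belongs to.
          RSGroupInfo ofTable = rsGroupAdmin.getRSGroupInfoOfTable(TableName.valueOf("unmovedTable"));
          System.out.println("unmovedTable -> " + ofTable.getName());
        }
      }
    }
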
2023-07-12 19:17:28,943 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=120 updating hbase:meta row=845df8e2a52065b03d70b26a7a732653, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase20.apache.org,39963,1689189426501 2023-07-12 19:17:28,943 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"unmovedTable,,1689189447319.845df8e2a52065b03d70b26a7a732653.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689189448943"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689189448943"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689189448943"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689189448943"}]},"ts":"1689189448943"} 2023-07-12 19:17:28,946 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=122, resume processing ppid=120 2023-07-12 19:17:28,946 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=122, ppid=120, state=SUCCESS; OpenRegionProcedure 845df8e2a52065b03d70b26a7a732653, server=jenkins-hbase20.apache.org,39963,1689189426501 in 171 msec 2023-07-12 19:17:28,947 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=120, state=SUCCESS; TransitRegionStateProcedure table=unmovedTable, region=845df8e2a52065b03d70b26a7a732653, REOPEN/MOVE in 500 msec 2023-07-12 19:17:29,447 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] procedure.ProcedureSyncWait(216): waitFor pid=120 2023-07-12 19:17:29,447 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminServer(369): All regions from table(s) [unmovedTable] moved to target group normal. 2023-07-12 19:17:29,447 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveTables 2023-07-12 19:17:29,450 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-12 19:17:29,451 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 19:17:29,453 INFO [Listener at localhost.localdomain/34239] hbase.Waiter(180): Waiting up to [1,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 19:17:29,454 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//148.251.75.209 initiates rsgroup info retrieval, table=unmovedTable 2023-07-12 19:17:29,455 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-12 19:17:29,456 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//148.251.75.209 initiates rsgroup info retrieval, group=normal 2023-07-12 19:17:29,456 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 19:17:29,457 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] 
rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//148.251.75.209 initiates rsgroup info retrieval, table=unmovedTable 2023-07-12 19:17:29,457 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-12 19:17:29,458 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(408): Client=jenkins//148.251.75.209 rename rsgroup from oldgroup to newgroup 2023-07-12 19:17:29,460 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-12 19:17:29,461 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 19:17:29,461 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 19:17:29,461 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-12 19:17:29,463 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 9 2023-07-12 19:17:29,465 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.RenameRSGroup 2023-07-12 19:17:29,469 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-12 19:17:29,469 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 19:17:29,472 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//148.251.75.209 initiates rsgroup info retrieval, group=newgroup 2023-07-12 19:17:29,473 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 19:17:29,473 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//148.251.75.209 initiates rsgroup info retrieval, table=testRename 2023-07-12 19:17:29,473 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-12 19:17:29,474 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//148.251.75.209 initiates rsgroup info retrieval, table=unmovedTable 2023-07-12 19:17:29,474 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-12 19:17:29,477 
INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-12 19:17:29,478 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 19:17:29,479 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//148.251.75.209 move tables [unmovedTable] to rsgroup default 2023-07-12 19:17:29,481 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-12 19:17:29,481 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 19:17:29,482 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 19:17:29,482 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-12 19:17:29,482 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-12 19:17:29,484 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminServer(339): Moving region(s) for table unmovedTable to RSGroup default 2023-07-12 19:17:29,484 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminServer(345): Moving region 845df8e2a52065b03d70b26a7a732653 to RSGroup default 2023-07-12 19:17:29,485 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] procedure2.ProcedureExecutor(1029): Stored pid=123, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=unmovedTable, region=845df8e2a52065b03d70b26a7a732653, REOPEN/MOVE 2023-07-12 19:17:29,485 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group default, current retry=0 2023-07-12 19:17:29,485 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=123, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=unmovedTable, region=845df8e2a52065b03d70b26a7a732653, REOPEN/MOVE 2023-07-12 19:17:29,485 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=123 updating hbase:meta row=845df8e2a52065b03d70b26a7a732653, regionState=CLOSING, regionLocation=jenkins-hbase20.apache.org,39963,1689189426501 2023-07-12 19:17:29,485 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"unmovedTable,,1689189447319.845df8e2a52065b03d70b26a7a732653.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689189449485"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689189449485"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689189449485"}]},"ts":"1689189449485"} 2023-07-12 19:17:29,487 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=124, ppid=123, state=RUNNABLE; CloseRegionProcedure 845df8e2a52065b03d70b26a7a732653, server=jenkins-hbase20.apache.org,39963,1689189426501}] 2023-07-12 19:17:29,638 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.UnassignRegionHandler(111): Close 
845df8e2a52065b03d70b26a7a732653 2023-07-12 19:17:29,640 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing 845df8e2a52065b03d70b26a7a732653, disabling compactions & flushes 2023-07-12 19:17:29,640 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region unmovedTable,,1689189447319.845df8e2a52065b03d70b26a7a732653. 2023-07-12 19:17:29,640 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on unmovedTable,,1689189447319.845df8e2a52065b03d70b26a7a732653. 2023-07-12 19:17:29,640 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on unmovedTable,,1689189447319.845df8e2a52065b03d70b26a7a732653. after waiting 0 ms 2023-07-12 19:17:29,640 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region unmovedTable,,1689189447319.845df8e2a52065b03d70b26a7a732653. 2023-07-12 19:17:29,644 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/data/default/unmovedTable/845df8e2a52065b03d70b26a7a732653/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-12 19:17:29,644 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1838): Closed unmovedTable,,1689189447319.845df8e2a52065b03d70b26a7a732653. 2023-07-12 19:17:29,644 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for 845df8e2a52065b03d70b26a7a732653: 2023-07-12 19:17:29,644 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(3513): Adding 845df8e2a52065b03d70b26a7a732653 move to jenkins-hbase20.apache.org,43021,1689189426641 record at close sequenceid=5 2023-07-12 19:17:29,647 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.UnassignRegionHandler(149): Closed 845df8e2a52065b03d70b26a7a732653 2023-07-12 19:17:29,647 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=123 updating hbase:meta row=845df8e2a52065b03d70b26a7a732653, regionState=CLOSED 2023-07-12 19:17:29,647 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"unmovedTable,,1689189447319.845df8e2a52065b03d70b26a7a732653.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689189449647"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689189449647"}]},"ts":"1689189449647"} 2023-07-12 19:17:29,650 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=124, resume processing ppid=123 2023-07-12 19:17:29,650 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=124, ppid=123, state=SUCCESS; CloseRegionProcedure 845df8e2a52065b03d70b26a7a732653, server=jenkins-hbase20.apache.org,39963,1689189426501 in 162 msec 2023-07-12 19:17:29,650 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=123, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=unmovedTable, region=845df8e2a52065b03d70b26a7a732653, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase20.apache.org,43021,1689189426641; forceNewPlan=false, retain=false 2023-07-12 19:17:29,669 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-hbase.properties,hadoop-metrics2.properties 2023-07-12 
19:17:29,801 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=123 updating hbase:meta row=845df8e2a52065b03d70b26a7a732653, regionState=OPENING, regionLocation=jenkins-hbase20.apache.org,43021,1689189426641 2023-07-12 19:17:29,801 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"unmovedTable,,1689189447319.845df8e2a52065b03d70b26a7a732653.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689189449801"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689189449801"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689189449801"}]},"ts":"1689189449801"} 2023-07-12 19:17:29,803 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=125, ppid=123, state=RUNNABLE; OpenRegionProcedure 845df8e2a52065b03d70b26a7a732653, server=jenkins-hbase20.apache.org,43021,1689189426641}] 2023-07-12 19:17:29,959 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(130): Open unmovedTable,,1689189447319.845df8e2a52065b03d70b26a7a732653. 2023-07-12 19:17:29,959 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 845df8e2a52065b03d70b26a7a732653, NAME => 'unmovedTable,,1689189447319.845df8e2a52065b03d70b26a7a732653.', STARTKEY => '', ENDKEY => ''} 2023-07-12 19:17:29,959 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table unmovedTable 845df8e2a52065b03d70b26a7a732653 2023-07-12 19:17:29,959 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(866): Instantiated unmovedTable,,1689189447319.845df8e2a52065b03d70b26a7a732653.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 19:17:29,959 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7894): checking encryption for 845df8e2a52065b03d70b26a7a732653 2023-07-12 19:17:29,959 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7897): checking classloading for 845df8e2a52065b03d70b26a7a732653 2023-07-12 19:17:29,961 INFO [StoreOpener-845df8e2a52065b03d70b26a7a732653-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family ut of region 845df8e2a52065b03d70b26a7a732653 2023-07-12 19:17:29,962 DEBUG [StoreOpener-845df8e2a52065b03d70b26a7a732653-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/data/default/unmovedTable/845df8e2a52065b03d70b26a7a732653/ut 2023-07-12 19:17:29,962 DEBUG [StoreOpener-845df8e2a52065b03d70b26a7a732653-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/data/default/unmovedTable/845df8e2a52065b03d70b26a7a732653/ut 2023-07-12 19:17:29,963 INFO [StoreOpener-845df8e2a52065b03d70b26a7a732653-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered 
compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 845df8e2a52065b03d70b26a7a732653 columnFamilyName ut 2023-07-12 19:17:29,963 INFO [StoreOpener-845df8e2a52065b03d70b26a7a732653-1] regionserver.HStore(310): Store=845df8e2a52065b03d70b26a7a732653/ut, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 19:17:29,964 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/data/default/unmovedTable/845df8e2a52065b03d70b26a7a732653 2023-07-12 19:17:29,965 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/data/default/unmovedTable/845df8e2a52065b03d70b26a7a732653 2023-07-12 19:17:29,968 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1055): writing seq id for 845df8e2a52065b03d70b26a7a732653 2023-07-12 19:17:29,969 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1072): Opened 845df8e2a52065b03d70b26a7a732653; next sequenceid=8; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11371795840, jitterRate=0.05908101797103882}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-12 19:17:29,970 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(965): Region open journal for 845df8e2a52065b03d70b26a7a732653: 2023-07-12 19:17:29,970 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for unmovedTable,,1689189447319.845df8e2a52065b03d70b26a7a732653., pid=125, masterSystemTime=1689189449954 2023-07-12 19:17:29,972 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for unmovedTable,,1689189447319.845df8e2a52065b03d70b26a7a732653. 2023-07-12 19:17:29,972 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(158): Opened unmovedTable,,1689189447319.845df8e2a52065b03d70b26a7a732653. 
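Between the two reopens of 845df8e2a52065b03d70b26a7a732653 above, the endpoint logs a RenameRSGroup (oldgroup to newgroup) and then begins the teardown: unmovedTable and server 39963 move back to default, and the scratch groups are removed. A hedged sketch of those admin calls, assuming renameRSGroup/removeRSGroup are exposed by the same client on this branch (the RenameRSGroup and RemoveRSGroup RPC names in the log suggest they are):

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

    public class RenameAndCleanup {
      public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create())) {
          RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
          // RenameRSGroup in the log: the group's tables and servers follow the new name.
          rsGroupAdmin.renameRSGroup("oldgroup", "newgroup");
          // RemoveRSGroup in the log: only allowed once the group holds no tables or servers,
          // which is why the moves back to 'default' happen first.
          rsGroupAdmin.removeRSGroup("normal");
        }
      }
    }
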
2023-07-12 19:17:29,972 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=123 updating hbase:meta row=845df8e2a52065b03d70b26a7a732653, regionState=OPEN, openSeqNum=8, regionLocation=jenkins-hbase20.apache.org,43021,1689189426641 2023-07-12 19:17:29,972 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"unmovedTable,,1689189447319.845df8e2a52065b03d70b26a7a732653.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689189449972"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689189449972"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689189449972"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689189449972"}]},"ts":"1689189449972"} 2023-07-12 19:17:29,976 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=125, resume processing ppid=123 2023-07-12 19:17:29,976 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=125, ppid=123, state=SUCCESS; OpenRegionProcedure 845df8e2a52065b03d70b26a7a732653, server=jenkins-hbase20.apache.org,43021,1689189426641 in 171 msec 2023-07-12 19:17:29,977 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=123, state=SUCCESS; TransitRegionStateProcedure table=unmovedTable, region=845df8e2a52065b03d70b26a7a732653, REOPEN/MOVE in 492 msec 2023-07-12 19:17:30,485 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] procedure.ProcedureSyncWait(216): waitFor pid=123 2023-07-12 19:17:30,485 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminServer(369): All regions from table(s) [unmovedTable] moved to target group default. 2023-07-12 19:17:30,485 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveTables 2023-07-12 19:17:30,487 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//148.251.75.209 move servers [jenkins-hbase20.apache.org:39963] to rsgroup default 2023-07-12 19:17:30,491 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-12 19:17:30,492 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 19:17:30,492 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 19:17:30,493 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-12 19:17:30,493 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-12 19:17:30,498 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group normal, current retry=0 2023-07-12 19:17:30,498 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase20.apache.org,39963,1689189426501] are moved back to normal 2023-07-12 19:17:30,498 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminServer(438): Move servers done: normal => default 2023-07-12 19:17:30,498 INFO 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveServers 2023-07-12 19:17:30,499 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//148.251.75.209 remove rsgroup normal 2023-07-12 19:17:30,504 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 19:17:30,504 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 19:17:30,505 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-12 19:17:30,505 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 7 2023-07-12 19:17:30,510 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-12 19:17:30,511 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//148.251.75.209 move tables [] to rsgroup default 2023-07-12 19:17:30,511 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-12 19:17:30,511 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveTables 2023-07-12 19:17:30,512 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//148.251.75.209 move servers [] to rsgroup default 2023-07-12 19:17:30,512 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveServers 2023-07-12 19:17:30,513 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//148.251.75.209 remove rsgroup master 2023-07-12 19:17:30,520 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 19:17:30,520 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-12 19:17:30,521 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-12 19:17:30,521 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-12 19:17:30,523 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//148.251.75.209 move tables [testRename] to rsgroup default 2023-07-12 19:17:30,525 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 19:17:30,526 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-12 19:17:30,526 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-12 19:17:30,535 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminServer(339): Moving region(s) for table testRename to RSGroup default 2023-07-12 19:17:30,535 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminServer(345): Moving region aeeb3efc5a8573e6eca018aeb06a2077 to RSGroup default 2023-07-12 19:17:30,536 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] procedure2.ProcedureExecutor(1029): Stored pid=126, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=testRename, region=aeeb3efc5a8573e6eca018aeb06a2077, REOPEN/MOVE 2023-07-12 19:17:30,536 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group default, current retry=0 2023-07-12 19:17:30,536 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=126, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=testRename, region=aeeb3efc5a8573e6eca018aeb06a2077, REOPEN/MOVE 2023-07-12 19:17:30,537 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=126 updating hbase:meta row=aeeb3efc5a8573e6eca018aeb06a2077, regionState=CLOSING, regionLocation=jenkins-hbase20.apache.org,36571,1689189426727 2023-07-12 19:17:30,537 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"testRename,,1689189445627.aeeb3efc5a8573e6eca018aeb06a2077.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689189450537"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689189450537"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689189450537"}]},"ts":"1689189450537"} 2023-07-12 19:17:30,543 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=127, ppid=126, state=RUNNABLE; CloseRegionProcedure aeeb3efc5a8573e6eca018aeb06a2077, server=jenkins-hbase20.apache.org,36571,1689189426727}] 2023-07-12 19:17:30,697 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.UnassignRegionHandler(111): Close aeeb3efc5a8573e6eca018aeb06a2077 2023-07-12 19:17:30,698 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing aeeb3efc5a8573e6eca018aeb06a2077, disabling compactions & flushes 2023-07-12 19:17:30,698 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region testRename,,1689189445627.aeeb3efc5a8573e6eca018aeb06a2077. 2023-07-12 19:17:30,698 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on testRename,,1689189445627.aeeb3efc5a8573e6eca018aeb06a2077. 2023-07-12 19:17:30,699 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on testRename,,1689189445627.aeeb3efc5a8573e6eca018aeb06a2077. 
after waiting 0 ms 2023-07-12 19:17:30,699 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region testRename,,1689189445627.aeeb3efc5a8573e6eca018aeb06a2077. 2023-07-12 19:17:30,707 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/data/default/testRename/aeeb3efc5a8573e6eca018aeb06a2077/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-12 19:17:30,709 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1838): Closed testRename,,1689189445627.aeeb3efc5a8573e6eca018aeb06a2077. 2023-07-12 19:17:30,709 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for aeeb3efc5a8573e6eca018aeb06a2077: 2023-07-12 19:17:30,710 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(3513): Adding aeeb3efc5a8573e6eca018aeb06a2077 move to jenkins-hbase20.apache.org,39963,1689189426501 record at close sequenceid=5 2023-07-12 19:17:30,716 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.UnassignRegionHandler(149): Closed aeeb3efc5a8573e6eca018aeb06a2077 2023-07-12 19:17:30,716 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=126 updating hbase:meta row=aeeb3efc5a8573e6eca018aeb06a2077, regionState=CLOSED 2023-07-12 19:17:30,716 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"testRename,,1689189445627.aeeb3efc5a8573e6eca018aeb06a2077.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689189450716"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689189450716"}]},"ts":"1689189450716"} 2023-07-12 19:17:30,720 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=127, resume processing ppid=126 2023-07-12 19:17:30,720 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=127, ppid=126, state=SUCCESS; CloseRegionProcedure aeeb3efc5a8573e6eca018aeb06a2077, server=jenkins-hbase20.apache.org,36571,1689189426727 in 175 msec 2023-07-12 19:17:30,726 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=126, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=testRename, region=aeeb3efc5a8573e6eca018aeb06a2077, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase20.apache.org,39963,1689189426501; forceNewPlan=false, retain=false 2023-07-12 19:17:30,877 INFO [jenkins-hbase20:33033] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
2023-07-12 19:17:30,877 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=126 updating hbase:meta row=aeeb3efc5a8573e6eca018aeb06a2077, regionState=OPENING, regionLocation=jenkins-hbase20.apache.org,39963,1689189426501 2023-07-12 19:17:30,877 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"testRename,,1689189445627.aeeb3efc5a8573e6eca018aeb06a2077.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689189450877"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689189450877"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689189450877"}]},"ts":"1689189450877"} 2023-07-12 19:17:30,879 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=128, ppid=126, state=RUNNABLE; OpenRegionProcedure aeeb3efc5a8573e6eca018aeb06a2077, server=jenkins-hbase20.apache.org,39963,1689189426501}] 2023-07-12 19:17:31,035 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(130): Open testRename,,1689189445627.aeeb3efc5a8573e6eca018aeb06a2077. 2023-07-12 19:17:31,035 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => aeeb3efc5a8573e6eca018aeb06a2077, NAME => 'testRename,,1689189445627.aeeb3efc5a8573e6eca018aeb06a2077.', STARTKEY => '', ENDKEY => ''} 2023-07-12 19:17:31,035 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table testRename aeeb3efc5a8573e6eca018aeb06a2077 2023-07-12 19:17:31,035 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(866): Instantiated testRename,,1689189445627.aeeb3efc5a8573e6eca018aeb06a2077.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 19:17:31,035 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7894): checking encryption for aeeb3efc5a8573e6eca018aeb06a2077 2023-07-12 19:17:31,035 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7897): checking classloading for aeeb3efc5a8573e6eca018aeb06a2077 2023-07-12 19:17:31,037 INFO [StoreOpener-aeeb3efc5a8573e6eca018aeb06a2077-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family tr of region aeeb3efc5a8573e6eca018aeb06a2077 2023-07-12 19:17:31,038 DEBUG [StoreOpener-aeeb3efc5a8573e6eca018aeb06a2077-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/data/default/testRename/aeeb3efc5a8573e6eca018aeb06a2077/tr 2023-07-12 19:17:31,038 DEBUG [StoreOpener-aeeb3efc5a8573e6eca018aeb06a2077-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/data/default/testRename/aeeb3efc5a8573e6eca018aeb06a2077/tr 2023-07-12 19:17:31,038 INFO [StoreOpener-aeeb3efc5a8573e6eca018aeb06a2077-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered 
compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region aeeb3efc5a8573e6eca018aeb06a2077 columnFamilyName tr 2023-07-12 19:17:31,039 INFO [StoreOpener-aeeb3efc5a8573e6eca018aeb06a2077-1] regionserver.HStore(310): Store=aeeb3efc5a8573e6eca018aeb06a2077/tr, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 19:17:31,039 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/data/default/testRename/aeeb3efc5a8573e6eca018aeb06a2077 2023-07-12 19:17:31,041 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/data/default/testRename/aeeb3efc5a8573e6eca018aeb06a2077 2023-07-12 19:17:31,044 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1055): writing seq id for aeeb3efc5a8573e6eca018aeb06a2077 2023-07-12 19:17:31,044 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1072): Opened aeeb3efc5a8573e6eca018aeb06a2077; next sequenceid=8; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9910582880, jitterRate=-0.0770050436258316}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-12 19:17:31,044 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(965): Region open journal for aeeb3efc5a8573e6eca018aeb06a2077: 2023-07-12 19:17:31,045 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for testRename,,1689189445627.aeeb3efc5a8573e6eca018aeb06a2077., pid=128, masterSystemTime=1689189451031 2023-07-12 19:17:31,047 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for testRename,,1689189445627.aeeb3efc5a8573e6eca018aeb06a2077. 2023-07-12 19:17:31,047 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(158): Opened testRename,,1689189445627.aeeb3efc5a8573e6eca018aeb06a2077. 
2023-07-12 19:17:31,047 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=126 updating hbase:meta row=aeeb3efc5a8573e6eca018aeb06a2077, regionState=OPEN, openSeqNum=8, regionLocation=jenkins-hbase20.apache.org,39963,1689189426501 2023-07-12 19:17:31,047 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"testRename,,1689189445627.aeeb3efc5a8573e6eca018aeb06a2077.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689189451047"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689189451047"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689189451047"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689189451047"}]},"ts":"1689189451047"} 2023-07-12 19:17:31,051 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=128, resume processing ppid=126 2023-07-12 19:17:31,051 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=128, ppid=126, state=SUCCESS; OpenRegionProcedure aeeb3efc5a8573e6eca018aeb06a2077, server=jenkins-hbase20.apache.org,39963,1689189426501 in 170 msec 2023-07-12 19:17:31,053 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=126, state=SUCCESS; TransitRegionStateProcedure table=testRename, region=aeeb3efc5a8573e6eca018aeb06a2077, REOPEN/MOVE in 516 msec 2023-07-12 19:17:31,536 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] procedure.ProcedureSyncWait(216): waitFor pid=126 2023-07-12 19:17:31,536 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminServer(369): All regions from table(s) [testRename] moved to target group default. 2023-07-12 19:17:31,536 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveTables 2023-07-12 19:17:31,538 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//148.251.75.209 move servers [jenkins-hbase20.apache.org:36311, jenkins-hbase20.apache.org:36571] to rsgroup default 2023-07-12 19:17:31,540 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 19:17:31,541 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-12 19:17:31,541 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-12 19:17:31,542 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group newgroup, current retry=0 2023-07-12 19:17:31,542 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase20.apache.org,36311,1689189430768, jenkins-hbase20.apache.org,36571,1689189426727] are moved back to newgroup 2023-07-12 19:17:31,543 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminServer(438): Move servers done: newgroup => default 2023-07-12 19:17:31,543 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveServers 2023-07-12 19:17:31,544 INFO 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//148.251.75.209 remove rsgroup newgroup 2023-07-12 19:17:31,549 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 19:17:31,549 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-12 19:17:31,555 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-12 19:17:31,559 INFO [Listener at localhost.localdomain/34239] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-12 19:17:31,559 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//148.251.75.209 add rsgroup master 2023-07-12 19:17:31,561 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 19:17:31,562 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 19:17:31,563 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-12 19:17:31,565 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.AddRSGroup 2023-07-12 19:17:31,569 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-12 19:17:31,569 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 19:17:31,571 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//148.251.75.209 move servers [jenkins-hbase20.apache.org:33033] to rsgroup master 2023-07-12 19:17:31,571 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:33033 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-12 19:17:31,571 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] ipc.CallRunner(144): callId: 763 service: MasterService methodName: ExecMasterService size: 120 connection: 148.251.75.209:37696 deadline: 1689190651571, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:33033 is either offline or it does not exist. 2023-07-12 19:17:31,572 WARN [Listener at localhost.localdomain/34239] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:33033 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor56.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:33033 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-12 19:17:31,573 INFO [Listener at localhost.localdomain/34239] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 19:17:31,574 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-12 19:17:31,574 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 19:17:31,575 INFO [Listener at localhost.localdomain/34239] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase20.apache.org:36311, jenkins-hbase20.apache.org:36571, jenkins-hbase20.apache.org:39963, jenkins-hbase20.apache.org:43021], Tables:[hbase:meta, hbase:namespace, unmovedTable, hbase:rsgroup, testRename], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-12 19:17:31,576 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//148.251.75.209 initiates rsgroup info retrieval, group=default 2023-07-12 19:17:31,576 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 19:17:31,595 INFO [Listener at localhost.localdomain/34239] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testRenameRSGroup Thread=500 (was 503), OpenFileDescriptor=747 (was 766), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=494 (was 529), ProcessCount=173 (was 169) - ProcessCount LEAK? 
-, AvailableMemoryMB=3691 (was 4470) 2023-07-12 19:17:31,615 INFO [Listener at localhost.localdomain/34239] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testBogusArgs Thread=500, OpenFileDescriptor=747, MaxFileDescriptor=60000, SystemLoadAverage=494, ProcessCount=173, AvailableMemoryMB=3683 2023-07-12 19:17:31,615 INFO [Listener at localhost.localdomain/34239] rsgroup.TestRSGroupsBase(132): testBogusArgs 2023-07-12 19:17:31,622 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-12 19:17:31,622 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 19:17:31,623 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//148.251.75.209 move tables [] to rsgroup default 2023-07-12 19:17:31,623 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-12 19:17:31,623 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveTables 2023-07-12 19:17:31,624 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//148.251.75.209 move servers [] to rsgroup default 2023-07-12 19:17:31,624 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveServers 2023-07-12 19:17:31,627 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//148.251.75.209 remove rsgroup master 2023-07-12 19:17:31,634 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 19:17:31,634 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-12 19:17:31,635 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-12 19:17:31,638 INFO [Listener at localhost.localdomain/34239] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-12 19:17:31,639 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//148.251.75.209 add rsgroup master 2023-07-12 19:17:31,641 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 19:17:31,641 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 19:17:31,659 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-12 19:17:31,664 INFO 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.AddRSGroup 2023-07-12 19:17:31,667 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-12 19:17:31,667 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 19:17:31,669 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//148.251.75.209 move servers [jenkins-hbase20.apache.org:33033] to rsgroup master 2023-07-12 19:17:31,669 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:33033 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-12 19:17:31,669 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] ipc.CallRunner(144): callId: 791 service: MasterService methodName: ExecMasterService size: 120 connection: 148.251.75.209:37696 deadline: 1689190651669, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:33033 is either offline or it does not exist. 2023-07-12 19:17:31,670 WARN [Listener at localhost.localdomain/34239] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:33033 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor56.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:33033 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-12 19:17:31,672 INFO [Listener at localhost.localdomain/34239] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 19:17:31,673 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-12 19:17:31,673 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 19:17:31,673 INFO [Listener at localhost.localdomain/34239] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase20.apache.org:36311, jenkins-hbase20.apache.org:36571, jenkins-hbase20.apache.org:39963, jenkins-hbase20.apache.org:43021], Tables:[hbase:meta, hbase:namespace, unmovedTable, hbase:rsgroup, testRename], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-12 19:17:31,674 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//148.251.75.209 initiates rsgroup info retrieval, group=default 2023-07-12 19:17:31,674 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 19:17:31,675 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//148.251.75.209 initiates rsgroup info retrieval, table=nonexistent 2023-07-12 19:17:31,675 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-12 19:17:31,680 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(334): Client=jenkins//148.251.75.209 initiates rsgroup info retrieval, server=bogus:123 2023-07-12 19:17:31,681 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.GetRSGroupInfoOfServer 2023-07-12 19:17:31,681 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//148.251.75.209 initiates rsgroup info retrieval, group=bogus 2023-07-12 19:17:31,681 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 19:17:31,682 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//148.251.75.209 remove rsgroup bogus 2023-07-12 19:17:31,682 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup bogus does not exist at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.removeRSGroup(RSGroupAdminServer.java:486) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.removeRSGroup(RSGroupAdminEndpoint.java:278) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16208) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-12 19:17:31,682 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] ipc.CallRunner(144): callId: 803 service: MasterService methodName: ExecMasterService size: 87 connection: 148.251.75.209:37696 deadline: 1689190651682, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup bogus does not exist 2023-07-12 19:17:31,684 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//148.251.75.209 move servers [bogus:123] to rsgroup bogus 2023-07-12 19:17:31,685 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup does not exist: bogus at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.getAndCheckRSGroupInfo(RSGroupAdminServer.java:115) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:398) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-12 19:17:31,685 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] ipc.CallRunner(144): callId: 806 service: MasterService methodName: ExecMasterService size: 96 connection: 148.251.75.209:37696 deadline: 1689190651684, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup does not exist: bogus 2023-07-12 19:17:31,686 DEBUG [Listener at localhost.localdomain/34239-EventThread] zookeeper.ZKWatcher(600): master:33033-0x100829d951f0000, quorum=127.0.0.1:52922, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/balancer 2023-07-12 19:17:31,687 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.MasterRpcServices(492): Client=jenkins//148.251.75.209 set balanceSwitch=true 2023-07-12 19:17:31,692 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(292): Client=jenkins//148.251.75.209 balance rsgroup, group=bogus 2023-07-12 19:17:31,692 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup does not exist: bogus at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.balanceRSGroup(RSGroupAdminServer.java:523) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.balanceRSGroup(RSGroupAdminEndpoint.java:299) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16213) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-12 19:17:31,692 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] ipc.CallRunner(144): callId: 810 service: MasterService methodName: ExecMasterService size: 88 connection: 148.251.75.209:37696 deadline: 1689190651691, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup does not exist: bogus 2023-07-12 19:17:31,696 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-12 19:17:31,696 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 19:17:31,697 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//148.251.75.209 move tables [] to rsgroup default 2023-07-12 19:17:31,697 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
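The run above is TestRSGroupsAdmin1#testBogusArgs: lookups against unknown names (group=bogus, table=nonexistent, server=bogus:123) come back empty, while the mutating calls (RemoveRSGroup, MoveServers, BalanceRSGroup against "bogus") are each rejected server-side with a ConstraintException, as the MetricsHBaseServer and CallRunner entries show. A minimal client-side sketch of that pattern follows; it assumes the RSGroupAdminClient API named in the stack traces above (constructor and method signatures taken from the branch-2.4 hbase-rsgroup module and not re-verified here), and it is an illustration, not the test's literal code.

import java.util.Collections;

import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.constraint.ConstraintException;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

public class BogusArgsSketch {
  // Illustrative only: 'conn' must point at a running cluster with the rsgroup
  // coprocessor loaded, as in the mini-cluster driving this log. Run with -ea
  // if you want the asserts to fire.
  static void checkBogusArgs(Connection conn) throws Exception {
    RSGroupAdminClient admin = new RSGroupAdminClient(conn);

    // Lookups for unknown names did not raise exceptions in the log above;
    // the expectation is that they simply return null.
    assert admin.getRSGroupInfo("bogus") == null;
    assert admin.getRSGroupInfoOfTable(TableName.valueOf("nonexistent")) == null;
    assert admin.getRSGroupOfServer(Address.fromParts("bogus", 123)) == null;

    // Mutating calls against an unknown group are rejected in
    // RSGroupAdminServer (see the removeRSGroup/getAndCheckRSGroupInfo frames
    // in the traces above) and surface client-side as ConstraintException.
    try {
      admin.removeRSGroup("bogus");
      throw new AssertionError("expected ConstraintException");
    } catch (ConstraintException expected) {
      // "RSGroup bogus does not exist"
    }
    try {
      admin.moveServers(Collections.singleton(Address.fromParts("bogus", 123)), "bogus");
      throw new AssertionError("expected ConstraintException");
    } catch (ConstraintException expected) {
      // "RSGroup does not exist: bogus"
    }
    try {
      admin.balanceRSGroup("bogus");
      throw new AssertionError("expected ConstraintException");
    } catch (ConstraintException expected) {
      // "RSGroup does not exist: bogus"
    }
  }
}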
2023-07-12 19:17:31,697 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveTables 2023-07-12 19:17:31,698 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//148.251.75.209 move servers [] to rsgroup default 2023-07-12 19:17:31,698 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveServers 2023-07-12 19:17:31,699 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//148.251.75.209 remove rsgroup master 2023-07-12 19:17:31,703 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 19:17:31,705 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-12 19:17:31,706 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-12 19:17:31,713 INFO [Listener at localhost.localdomain/34239] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-12 19:17:31,715 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//148.251.75.209 add rsgroup master 2023-07-12 19:17:31,718 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 19:17:31,718 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 19:17:31,725 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-12 19:17:31,726 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.AddRSGroup 2023-07-12 19:17:31,729 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-12 19:17:31,730 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 19:17:31,732 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//148.251.75.209 move servers [jenkins-hbase20.apache.org:33033] to rsgroup master 2023-07-12 19:17:31,735 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:33033 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-12 19:17:31,735 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] ipc.CallRunner(144): callId: 834 service: MasterService methodName: ExecMasterService size: 120 connection: 148.251.75.209:37696 deadline: 1689190651731, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:33033 is either offline or it does not exist. 2023-07-12 19:17:31,736 WARN [Listener at localhost.localdomain/34239] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:33033 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor56.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:33033 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-12 19:17:31,738 INFO [Listener at localhost.localdomain/34239] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 19:17:31,739 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-12 19:17:31,739 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 19:17:31,739 INFO [Listener at localhost.localdomain/34239] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase20.apache.org:36311, jenkins-hbase20.apache.org:36571, jenkins-hbase20.apache.org:39963, jenkins-hbase20.apache.org:43021], Tables:[hbase:meta, hbase:namespace, unmovedTable, hbase:rsgroup, testRename], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-12 19:17:31,740 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//148.251.75.209 initiates rsgroup info retrieval, group=default 2023-07-12 19:17:31,740 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 19:17:31,756 INFO [Listener at localhost.localdomain/34239] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testBogusArgs Thread=504 (was 500) Potentially hanging thread: hconnection-0x2eb50d9d-shared-pool-26 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x25a58e9d-shared-pool-22 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x2eb50d9d-shared-pool-25 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x25a58e9d-shared-pool-23 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=747 (was 747), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=494 (was 494), ProcessCount=173 (was 173), AvailableMemoryMB=3667 (was 3683) 2023-07-12 19:17:31,757 WARN [Listener at localhost.localdomain/34239] hbase.ResourceChecker(130): Thread=504 is superior to 500 2023-07-12 19:17:31,775 INFO [Listener at localhost.localdomain/34239] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testDisabledTableMove Thread=504, OpenFileDescriptor=747, MaxFileDescriptor=60000, SystemLoadAverage=494, ProcessCount=173, AvailableMemoryMB=3666 2023-07-12 19:17:31,775 WARN [Listener at localhost.localdomain/34239] hbase.ResourceChecker(130): Thread=504 is superior to 500 2023-07-12 19:17:31,775 INFO [Listener at localhost.localdomain/34239] rsgroup.TestRSGroupsBase(132): testDisabledTableMove 2023-07-12 19:17:31,780 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-12 19:17:31,780 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 19:17:31,781 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//148.251.75.209 move tables [] to rsgroup default 2023-07-12 19:17:31,781 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
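Between the two tests, TestRSGroupsBase runs the same cleanup in tearDownAfterMethod and again in setUpBeforeMethod (both appear in the stack traces): remove extra rsgroups, move servers and tables back to "default", re-create the "master" group, then poll ListRSGroupInfos until the layout matches ("Waiting for cleanup to finish ... Waiting up to [60,000] milli-secs"). The WARN "Got this on setup, FYI" is expected: the master at jenkins-hbase20.apache.org:33033 is not a region server, so moving its address into the "master" group fails the server-side check in RSGroupAdminServer.moveServers, and the cleanup only logs it. A rough sketch of that tolerated step, with the same hedges as the previous example (illustrative names, unverified signatures):

import java.util.Collections;

import org.apache.hadoop.hbase.constraint.ConstraintException;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

public class TeardownSketch {
  // Illustrative only. 'masterAddress' would be the active master's host:port,
  // jenkins-hbase20.apache.org:33033 in this log.
  static void moveMasterToItsGroup(RSGroupAdminClient admin, Address masterAddress) throws Exception {
    admin.addRSGroup("master");
    try {
      // The master is not a region server, so the server-side check rejects this ...
      admin.moveServers(Collections.singleton(masterAddress), "master");
    } catch (ConstraintException e) {
      // ... and the cleanup tolerates it, matching the
      // "Got this on setup, FYI ... is either offline or it does not exist." WARN above.
    }
  }
}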
2023-07-12 19:17:31,781 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveTables 2023-07-12 19:17:31,782 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//148.251.75.209 move servers [] to rsgroup default 2023-07-12 19:17:31,782 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveServers 2023-07-12 19:17:31,783 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//148.251.75.209 remove rsgroup master 2023-07-12 19:17:31,787 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 19:17:31,788 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-12 19:17:31,789 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-12 19:17:31,792 INFO [Listener at localhost.localdomain/34239] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-12 19:17:31,793 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//148.251.75.209 add rsgroup master 2023-07-12 19:17:31,795 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 19:17:31,795 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 19:17:31,801 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-12 19:17:31,802 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.AddRSGroup 2023-07-12 19:17:31,805 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-12 19:17:31,805 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 19:17:31,808 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//148.251.75.209 move servers [jenkins-hbase20.apache.org:33033] to rsgroup master 2023-07-12 19:17:31,808 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:33033 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-12 19:17:31,808 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] ipc.CallRunner(144): callId: 862 service: MasterService methodName: ExecMasterService size: 120 connection: 148.251.75.209:37696 deadline: 1689190651807, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:33033 is either offline or it does not exist. 2023-07-12 19:17:31,808 WARN [Listener at localhost.localdomain/34239] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:33033 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor56.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:33033 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-12 19:17:31,810 INFO [Listener at localhost.localdomain/34239] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 19:17:31,811 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-12 19:17:31,811 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 19:17:31,812 INFO [Listener at localhost.localdomain/34239] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase20.apache.org:36311, jenkins-hbase20.apache.org:36571, jenkins-hbase20.apache.org:39963, jenkins-hbase20.apache.org:43021], Tables:[hbase:meta, hbase:namespace, unmovedTable, hbase:rsgroup, testRename], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-12 19:17:31,813 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//148.251.75.209 initiates rsgroup info retrieval, group=default 2023-07-12 19:17:31,813 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 19:17:31,814 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//148.251.75.209 initiates rsgroup info retrieval, group=default 2023-07-12 19:17:31,814 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 19:17:31,815 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//148.251.75.209 add rsgroup Group_testDisabledTableMove_151152376 2023-07-12 19:17:31,817 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testDisabledTableMove_151152376 2023-07-12 19:17:31,827 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: 
/hbase/rsgroup/default 2023-07-12 19:17:31,828 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 19:17:31,828 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-12 19:17:31,830 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.AddRSGroup 2023-07-12 19:17:31,833 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-12 19:17:31,833 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 19:17:31,835 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//148.251.75.209 move servers [jenkins-hbase20.apache.org:36311, jenkins-hbase20.apache.org:36571] to rsgroup Group_testDisabledTableMove_151152376 2023-07-12 19:17:31,838 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testDisabledTableMove_151152376 2023-07-12 19:17:31,838 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 19:17:31,838 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 19:17:31,839 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-12 19:17:31,847 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group default, current retry=0 2023-07-12 19:17:31,848 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase20.apache.org,36311,1689189430768, jenkins-hbase20.apache.org,36571,1689189426727] are moved back to default 2023-07-12 19:17:31,848 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminServer(438): Move servers done: default => Group_testDisabledTableMove_151152376 2023-07-12 19:17:31,848 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveServers 2023-07-12 19:17:31,851 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-12 19:17:31,851 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 19:17:31,853 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//148.251.75.209 initiates rsgroup info retrieval, 
group=Group_testDisabledTableMove_151152376 2023-07-12 19:17:31,853 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 19:17:31,855 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.HMaster$4(2112): Client=jenkins//148.251.75.209 create 'Group_testDisabledTableMove', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-12 19:17:31,856 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] procedure2.ProcedureExecutor(1029): Stored pid=129, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=Group_testDisabledTableMove 2023-07-12 19:17:31,858 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=129, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=Group_testDisabledTableMove execute state=CREATE_TABLE_PRE_OPERATION 2023-07-12 19:17:31,858 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.MasterRpcServices(700): Client=jenkins//148.251.75.209 procedure request for creating table: namespace: "default" qualifier: "Group_testDisabledTableMove" procId is: 129 2023-07-12 19:17:31,860 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.MasterRpcServices(1230): Checking to see if procedure is done pid=129 2023-07-12 19:17:31,861 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testDisabledTableMove_151152376 2023-07-12 19:17:31,861 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 19:17:31,862 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 19:17:31,862 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-12 19:17:31,864 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=129, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=Group_testDisabledTableMove execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-12 19:17:31,868 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/.tmp/data/default/Group_testDisabledTableMove/1de1cf82dbd8b232c0d007754e8b5579 2023-07-12 19:17:31,868 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/.tmp/data/default/Group_testDisabledTableMove/9511f015ded348ca2f3336292aef0f96 2023-07-12 19:17:31,868 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/.tmp/data/default/Group_testDisabledTableMove/52c51fc6cd79cc275857ddc3b19788df 2023-07-12 19:17:31,868 DEBUG [HFileArchiver-7] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/.tmp/data/default/Group_testDisabledTableMove/0f907f22115b2f6fe3cbd1c88d6b3459 2023-07-12 19:17:31,868 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): 
ARCHIVING hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/.tmp/data/default/Group_testDisabledTableMove/a51addfadca1ca5f6216cfc7b8179683 2023-07-12 19:17:31,869 DEBUG [HFileArchiver-7] backup.HFileArchiver(153): Directory hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/.tmp/data/default/Group_testDisabledTableMove/0f907f22115b2f6fe3cbd1c88d6b3459 empty. 2023-07-12 19:17:31,869 DEBUG [HFileArchiver-3] backup.HFileArchiver(153): Directory hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/.tmp/data/default/Group_testDisabledTableMove/1de1cf82dbd8b232c0d007754e8b5579 empty. 2023-07-12 19:17:31,869 DEBUG [HFileArchiver-2] backup.HFileArchiver(153): Directory hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/.tmp/data/default/Group_testDisabledTableMove/52c51fc6cd79cc275857ddc3b19788df empty. 2023-07-12 19:17:31,869 DEBUG [HFileArchiver-1] backup.HFileArchiver(153): Directory hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/.tmp/data/default/Group_testDisabledTableMove/9511f015ded348ca2f3336292aef0f96 empty. 2023-07-12 19:17:31,869 DEBUG [HFileArchiver-5] backup.HFileArchiver(153): Directory hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/.tmp/data/default/Group_testDisabledTableMove/a51addfadca1ca5f6216cfc7b8179683 empty. 2023-07-12 19:17:31,870 DEBUG [HFileArchiver-7] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/.tmp/data/default/Group_testDisabledTableMove/0f907f22115b2f6fe3cbd1c88d6b3459 2023-07-12 19:17:31,870 DEBUG [HFileArchiver-1] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/.tmp/data/default/Group_testDisabledTableMove/9511f015ded348ca2f3336292aef0f96 2023-07-12 19:17:31,870 DEBUG [HFileArchiver-2] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/.tmp/data/default/Group_testDisabledTableMove/52c51fc6cd79cc275857ddc3b19788df 2023-07-12 19:17:31,870 DEBUG [HFileArchiver-3] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/.tmp/data/default/Group_testDisabledTableMove/1de1cf82dbd8b232c0d007754e8b5579 2023-07-12 19:17:31,870 DEBUG [HFileArchiver-5] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/.tmp/data/default/Group_testDisabledTableMove/a51addfadca1ca5f6216cfc7b8179683 2023-07-12 19:17:31,870 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived Group_testDisabledTableMove regions 2023-07-12 19:17:31,895 DEBUG [PEWorker-4] util.FSTableDescriptors(570): Wrote into hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/.tmp/data/default/Group_testDisabledTableMove/.tabledesc/.tableinfo.0000000001 2023-07-12 19:17:31,896 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(7675): creating {ENCODED => 0f907f22115b2f6fe3cbd1c88d6b3459, NAME => 'Group_testDisabledTableMove,i\xBF\x14i\xBE,1689189451855.0f907f22115b2f6fe3cbd1c88d6b3459.', STARTKEY 
=> 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'}, tableDescriptor='Group_testDisabledTableMove', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/.tmp 2023-07-12 19:17:31,897 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(7675): creating {ENCODED => a51addfadca1ca5f6216cfc7b8179683, NAME => 'Group_testDisabledTableMove,aaaaa,1689189451855.a51addfadca1ca5f6216cfc7b8179683.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'}, tableDescriptor='Group_testDisabledTableMove', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/.tmp 2023-07-12 19:17:31,897 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(7675): creating {ENCODED => 1de1cf82dbd8b232c0d007754e8b5579, NAME => 'Group_testDisabledTableMove,,1689189451855.1de1cf82dbd8b232c0d007754e8b5579.', STARTKEY => '', ENDKEY => 'aaaaa'}, tableDescriptor='Group_testDisabledTableMove', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/.tmp 2023-07-12 19:17:31,928 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,aaaaa,1689189451855.a51addfadca1ca5f6216cfc7b8179683.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 19:17:31,928 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1604): Closing a51addfadca1ca5f6216cfc7b8179683, disabling compactions & flushes 2023-07-12 19:17:31,928 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,aaaaa,1689189451855.a51addfadca1ca5f6216cfc7b8179683. 2023-07-12 19:17:31,928 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,aaaaa,1689189451855.a51addfadca1ca5f6216cfc7b8179683. 2023-07-12 19:17:31,928 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,aaaaa,1689189451855.a51addfadca1ca5f6216cfc7b8179683. after waiting 0 ms 2023-07-12 19:17:31,928 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,aaaaa,1689189451855.a51addfadca1ca5f6216cfc7b8179683. 
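Each "creating {ENCODED => ..., NAME => ...}" entry above shows how a region name is assembled: table name, start key, region id (here the creation timestamp 1689189451855), and an encoded MD5 suffix. The short sketch below builds the same kind of RegionInfo with the public HBase 2.x client API; the keys are copied from the log, everything else is illustrative.

import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.RegionInfo;
import org.apache.hadoop.hbase.client.RegionInfoBuilder;
import org.apache.hadoop.hbase.util.Bytes;

public class RegionNameSketch {
  public static void main(String[] args) {
    // Mirrors the 'aaaaa' -> 'i\xBF\x14i\xBE' region logged above.
    RegionInfo ri = RegionInfoBuilder
        .newBuilder(TableName.valueOf("Group_testDisabledTableMove"))
        .setStartKey(Bytes.toBytes("aaaaa"))
        .setEndKey(new byte[] { (byte) 'i', (byte) 0xBF, (byte) 0x14, (byte) 'i', (byte) 0xBE })
        .setRegionId(1689189451855L)
        .build();

    // Prints something like:
    //   Group_testDisabledTableMove,aaaaa,1689189451855.<md5-encoded-name>.
    System.out.println(ri.getRegionNameAsString());
    System.out.println(ri.getEncodedName());
  }
}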
2023-07-12 19:17:31,928 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,aaaaa,1689189451855.a51addfadca1ca5f6216cfc7b8179683. 2023-07-12 19:17:31,928 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1558): Region close journal for a51addfadca1ca5f6216cfc7b8179683: 2023-07-12 19:17:31,929 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(7675): creating {ENCODED => 52c51fc6cd79cc275857ddc3b19788df, NAME => 'Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689189451855.52c51fc6cd79cc275857ddc3b19788df.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'}, tableDescriptor='Group_testDisabledTableMove', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/.tmp 2023-07-12 19:17:31,937 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,,1689189451855.1de1cf82dbd8b232c0d007754e8b5579.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 19:17:31,937 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1604): Closing 1de1cf82dbd8b232c0d007754e8b5579, disabling compactions & flushes 2023-07-12 19:17:31,937 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,,1689189451855.1de1cf82dbd8b232c0d007754e8b5579. 2023-07-12 19:17:31,937 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,,1689189451855.1de1cf82dbd8b232c0d007754e8b5579. 2023-07-12 19:17:31,937 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,,1689189451855.1de1cf82dbd8b232c0d007754e8b5579. after waiting 0 ms 2023-07-12 19:17:31,938 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,,1689189451855.1de1cf82dbd8b232c0d007754e8b5579. 2023-07-12 19:17:31,938 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,,1689189451855.1de1cf82dbd8b232c0d007754e8b5579. 
2023-07-12 19:17:31,938 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1558): Region close journal for 1de1cf82dbd8b232c0d007754e8b5579: 2023-07-12 19:17:31,938 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(7675): creating {ENCODED => 9511f015ded348ca2f3336292aef0f96, NAME => 'Group_testDisabledTableMove,zzzzz,1689189451855.9511f015ded348ca2f3336292aef0f96.', STARTKEY => 'zzzzz', ENDKEY => ''}, tableDescriptor='Group_testDisabledTableMove', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/.tmp 2023-07-12 19:17:31,938 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,i\xBF\x14i\xBE,1689189451855.0f907f22115b2f6fe3cbd1c88d6b3459.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 19:17:31,939 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1604): Closing 0f907f22115b2f6fe3cbd1c88d6b3459, disabling compactions & flushes 2023-07-12 19:17:31,939 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,i\xBF\x14i\xBE,1689189451855.0f907f22115b2f6fe3cbd1c88d6b3459. 2023-07-12 19:17:31,939 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,i\xBF\x14i\xBE,1689189451855.0f907f22115b2f6fe3cbd1c88d6b3459. 2023-07-12 19:17:31,939 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,i\xBF\x14i\xBE,1689189451855.0f907f22115b2f6fe3cbd1c88d6b3459. after waiting 0 ms 2023-07-12 19:17:31,939 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,i\xBF\x14i\xBE,1689189451855.0f907f22115b2f6fe3cbd1c88d6b3459. 2023-07-12 19:17:31,939 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,i\xBF\x14i\xBE,1689189451855.0f907f22115b2f6fe3cbd1c88d6b3459. 2023-07-12 19:17:31,939 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1558): Region close journal for 0f907f22115b2f6fe3cbd1c88d6b3459: 2023-07-12 19:17:31,951 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689189451855.52c51fc6cd79cc275857ddc3b19788df.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 19:17:31,951 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1604): Closing 52c51fc6cd79cc275857ddc3b19788df, disabling compactions & flushes 2023-07-12 19:17:31,951 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689189451855.52c51fc6cd79cc275857ddc3b19788df. 
2023-07-12 19:17:31,951 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689189451855.52c51fc6cd79cc275857ddc3b19788df. 2023-07-12 19:17:31,951 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689189451855.52c51fc6cd79cc275857ddc3b19788df. after waiting 0 ms 2023-07-12 19:17:31,951 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689189451855.52c51fc6cd79cc275857ddc3b19788df. 2023-07-12 19:17:31,951 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689189451855.52c51fc6cd79cc275857ddc3b19788df. 2023-07-12 19:17:31,951 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1558): Region close journal for 52c51fc6cd79cc275857ddc3b19788df: 2023-07-12 19:17:31,959 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,zzzzz,1689189451855.9511f015ded348ca2f3336292aef0f96.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 19:17:31,960 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1604): Closing 9511f015ded348ca2f3336292aef0f96, disabling compactions & flushes 2023-07-12 19:17:31,960 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,zzzzz,1689189451855.9511f015ded348ca2f3336292aef0f96. 2023-07-12 19:17:31,960 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,zzzzz,1689189451855.9511f015ded348ca2f3336292aef0f96. 2023-07-12 19:17:31,960 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,zzzzz,1689189451855.9511f015ded348ca2f3336292aef0f96. after waiting 0 ms 2023-07-12 19:17:31,960 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,zzzzz,1689189451855.9511f015ded348ca2f3336292aef0f96. 2023-07-12 19:17:31,960 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,zzzzz,1689189451855.9511f015ded348ca2f3336292aef0f96. 
2023-07-12 19:17:31,960 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1558): Region close journal for 9511f015ded348ca2f3336292aef0f96: 2023-07-12 19:17:31,961 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.MasterRpcServices(1230): Checking to see if procedure is done pid=129 2023-07-12 19:17:31,963 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=129, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=Group_testDisabledTableMove execute state=CREATE_TABLE_ADD_TO_META 2023-07-12 19:17:31,966 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,aaaaa,1689189451855.a51addfadca1ca5f6216cfc7b8179683.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689189451966"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689189451966"}]},"ts":"1689189451966"} 2023-07-12 19:17:31,966 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,,1689189451855.1de1cf82dbd8b232c0d007754e8b5579.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689189451966"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689189451966"}]},"ts":"1689189451966"} 2023-07-12 19:17:31,966 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,i\\xBF\\x14i\\xBE,1689189451855.0f907f22115b2f6fe3cbd1c88d6b3459.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689189451966"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689189451966"}]},"ts":"1689189451966"} 2023-07-12 19:17:31,966 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,r\\x1C\\xC7r\\x1B,1689189451855.52c51fc6cd79cc275857ddc3b19788df.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689189451966"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689189451966"}]},"ts":"1689189451966"} 2023-07-12 19:17:31,966 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,zzzzz,1689189451855.9511f015ded348ca2f3336292aef0f96.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689189451966"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689189451966"}]},"ts":"1689189451966"} 2023-07-12 19:17:31,975 INFO [PEWorker-4] hbase.MetaTableAccessor(1496): Added 5 regions to meta. 
2023-07-12 19:17:31,975 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=129, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=Group_testDisabledTableMove execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-12 19:17:31,976 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testDisabledTableMove","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689189451976"}]},"ts":"1689189451976"} 2023-07-12 19:17:31,977 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=Group_testDisabledTableMove, state=ENABLING in hbase:meta 2023-07-12 19:17:31,979 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase20.apache.org=0} racks are {/default-rack=0} 2023-07-12 19:17:31,979 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-12 19:17:31,979 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-12 19:17:31,979 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-12 19:17:31,980 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=130, ppid=129, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=1de1cf82dbd8b232c0d007754e8b5579, ASSIGN}, {pid=131, ppid=129, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=a51addfadca1ca5f6216cfc7b8179683, ASSIGN}, {pid=132, ppid=129, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=0f907f22115b2f6fe3cbd1c88d6b3459, ASSIGN}, {pid=133, ppid=129, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=52c51fc6cd79cc275857ddc3b19788df, ASSIGN}, {pid=134, ppid=129, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=9511f015ded348ca2f3336292aef0f96, ASSIGN}] 2023-07-12 19:17:31,982 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=131, ppid=129, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=a51addfadca1ca5f6216cfc7b8179683, ASSIGN 2023-07-12 19:17:31,982 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=130, ppid=129, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=1de1cf82dbd8b232c0d007754e8b5579, ASSIGN 2023-07-12 19:17:31,982 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=134, ppid=129, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=9511f015ded348ca2f3336292aef0f96, ASSIGN 2023-07-12 19:17:31,982 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=133, ppid=129, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=52c51fc6cd79cc275857ddc3b19788df, ASSIGN 2023-07-12 19:17:31,985 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=131, ppid=129, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, 
locked=true; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=a51addfadca1ca5f6216cfc7b8179683, ASSIGN; state=OFFLINE, location=jenkins-hbase20.apache.org,39963,1689189426501; forceNewPlan=false, retain=false 2023-07-12 19:17:31,986 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=132, ppid=129, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=0f907f22115b2f6fe3cbd1c88d6b3459, ASSIGN 2023-07-12 19:17:31,986 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=130, ppid=129, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=1de1cf82dbd8b232c0d007754e8b5579, ASSIGN; state=OFFLINE, location=jenkins-hbase20.apache.org,43021,1689189426641; forceNewPlan=false, retain=false 2023-07-12 19:17:31,986 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=133, ppid=129, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=52c51fc6cd79cc275857ddc3b19788df, ASSIGN; state=OFFLINE, location=jenkins-hbase20.apache.org,43021,1689189426641; forceNewPlan=false, retain=false 2023-07-12 19:17:31,986 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=134, ppid=129, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=9511f015ded348ca2f3336292aef0f96, ASSIGN; state=OFFLINE, location=jenkins-hbase20.apache.org,39963,1689189426501; forceNewPlan=false, retain=false 2023-07-12 19:17:31,988 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=132, ppid=129, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=0f907f22115b2f6fe3cbd1c88d6b3459, ASSIGN; state=OFFLINE, location=jenkins-hbase20.apache.org,39963,1689189426501; forceNewPlan=false, retain=false 2023-07-12 19:17:32,136 INFO [jenkins-hbase20:33033] balancer.BaseLoadBalancer(1545): Reassigned 5 regions. 5 retained the pre-restart assignment. 
2023-07-12 19:17:32,141 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=133 updating hbase:meta row=52c51fc6cd79cc275857ddc3b19788df, regionState=OPENING, regionLocation=jenkins-hbase20.apache.org,43021,1689189426641 2023-07-12 19:17:32,141 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=131 updating hbase:meta row=a51addfadca1ca5f6216cfc7b8179683, regionState=OPENING, regionLocation=jenkins-hbase20.apache.org,39963,1689189426501 2023-07-12 19:17:32,141 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,r\\x1C\\xC7r\\x1B,1689189451855.52c51fc6cd79cc275857ddc3b19788df.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689189452141"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689189452141"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689189452141"}]},"ts":"1689189452141"} 2023-07-12 19:17:32,141 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,aaaaa,1689189451855.a51addfadca1ca5f6216cfc7b8179683.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689189452141"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689189452141"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689189452141"}]},"ts":"1689189452141"} 2023-07-12 19:17:32,141 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=132 updating hbase:meta row=0f907f22115b2f6fe3cbd1c88d6b3459, regionState=OPENING, regionLocation=jenkins-hbase20.apache.org,39963,1689189426501 2023-07-12 19:17:32,141 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=130 updating hbase:meta row=1de1cf82dbd8b232c0d007754e8b5579, regionState=OPENING, regionLocation=jenkins-hbase20.apache.org,43021,1689189426641 2023-07-12 19:17:32,142 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,,1689189451855.1de1cf82dbd8b232c0d007754e8b5579.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689189452141"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689189452141"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689189452141"}]},"ts":"1689189452141"} 2023-07-12 19:17:32,141 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=134 updating hbase:meta row=9511f015ded348ca2f3336292aef0f96, regionState=OPENING, regionLocation=jenkins-hbase20.apache.org,39963,1689189426501 2023-07-12 19:17:32,142 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,i\\xBF\\x14i\\xBE,1689189451855.0f907f22115b2f6fe3cbd1c88d6b3459.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689189452141"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689189452141"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689189452141"}]},"ts":"1689189452141"} 2023-07-12 19:17:32,142 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,zzzzz,1689189451855.9511f015ded348ca2f3336292aef0f96.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689189452141"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689189452141"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689189452141"}]},"ts":"1689189452141"} 2023-07-12 19:17:32,143 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=135, ppid=133, state=RUNNABLE; OpenRegionProcedure 52c51fc6cd79cc275857ddc3b19788df, 
server=jenkins-hbase20.apache.org,43021,1689189426641}] 2023-07-12 19:17:32,144 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=136, ppid=131, state=RUNNABLE; OpenRegionProcedure a51addfadca1ca5f6216cfc7b8179683, server=jenkins-hbase20.apache.org,39963,1689189426501}] 2023-07-12 19:17:32,145 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=137, ppid=130, state=RUNNABLE; OpenRegionProcedure 1de1cf82dbd8b232c0d007754e8b5579, server=jenkins-hbase20.apache.org,43021,1689189426641}] 2023-07-12 19:17:32,147 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=138, ppid=132, state=RUNNABLE; OpenRegionProcedure 0f907f22115b2f6fe3cbd1c88d6b3459, server=jenkins-hbase20.apache.org,39963,1689189426501}] 2023-07-12 19:17:32,151 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=139, ppid=134, state=RUNNABLE; OpenRegionProcedure 9511f015ded348ca2f3336292aef0f96, server=jenkins-hbase20.apache.org,39963,1689189426501}] 2023-07-12 19:17:32,162 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.MasterRpcServices(1230): Checking to see if procedure is done pid=129 2023-07-12 19:17:32,300 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(130): Open Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689189451855.52c51fc6cd79cc275857ddc3b19788df. 2023-07-12 19:17:32,301 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 52c51fc6cd79cc275857ddc3b19788df, NAME => 'Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689189451855.52c51fc6cd79cc275857ddc3b19788df.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'} 2023-07-12 19:17:32,301 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testDisabledTableMove 52c51fc6cd79cc275857ddc3b19788df 2023-07-12 19:17:32,301 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689189451855.52c51fc6cd79cc275857ddc3b19788df.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 19:17:32,301 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7894): checking encryption for 52c51fc6cd79cc275857ddc3b19788df 2023-07-12 19:17:32,301 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(130): Open Group_testDisabledTableMove,i\xBF\x14i\xBE,1689189451855.0f907f22115b2f6fe3cbd1c88d6b3459. 
2023-07-12 19:17:32,301 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7897): checking classloading for 52c51fc6cd79cc275857ddc3b19788df 2023-07-12 19:17:32,301 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 0f907f22115b2f6fe3cbd1c88d6b3459, NAME => 'Group_testDisabledTableMove,i\xBF\x14i\xBE,1689189451855.0f907f22115b2f6fe3cbd1c88d6b3459.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'} 2023-07-12 19:17:32,302 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testDisabledTableMove 0f907f22115b2f6fe3cbd1c88d6b3459 2023-07-12 19:17:32,302 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,i\xBF\x14i\xBE,1689189451855.0f907f22115b2f6fe3cbd1c88d6b3459.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 19:17:32,302 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7894): checking encryption for 0f907f22115b2f6fe3cbd1c88d6b3459 2023-07-12 19:17:32,302 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7897): checking classloading for 0f907f22115b2f6fe3cbd1c88d6b3459 2023-07-12 19:17:32,304 INFO [StoreOpener-52c51fc6cd79cc275857ddc3b19788df-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 52c51fc6cd79cc275857ddc3b19788df 2023-07-12 19:17:32,304 INFO [StoreOpener-0f907f22115b2f6fe3cbd1c88d6b3459-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 0f907f22115b2f6fe3cbd1c88d6b3459 2023-07-12 19:17:32,306 DEBUG [StoreOpener-52c51fc6cd79cc275857ddc3b19788df-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/data/default/Group_testDisabledTableMove/52c51fc6cd79cc275857ddc3b19788df/f 2023-07-12 19:17:32,306 DEBUG [StoreOpener-52c51fc6cd79cc275857ddc3b19788df-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/data/default/Group_testDisabledTableMove/52c51fc6cd79cc275857ddc3b19788df/f 2023-07-12 19:17:32,306 DEBUG [StoreOpener-0f907f22115b2f6fe3cbd1c88d6b3459-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/data/default/Group_testDisabledTableMove/0f907f22115b2f6fe3cbd1c88d6b3459/f 2023-07-12 19:17:32,306 DEBUG [StoreOpener-0f907f22115b2f6fe3cbd1c88d6b3459-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/data/default/Group_testDisabledTableMove/0f907f22115b2f6fe3cbd1c88d6b3459/f 2023-07-12 19:17:32,307 INFO [StoreOpener-52c51fc6cd79cc275857ddc3b19788df-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files 
[minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 52c51fc6cd79cc275857ddc3b19788df columnFamilyName f 2023-07-12 19:17:32,307 INFO [StoreOpener-52c51fc6cd79cc275857ddc3b19788df-1] regionserver.HStore(310): Store=52c51fc6cd79cc275857ddc3b19788df/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 19:17:32,310 INFO [StoreOpener-0f907f22115b2f6fe3cbd1c88d6b3459-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 0f907f22115b2f6fe3cbd1c88d6b3459 columnFamilyName f 2023-07-12 19:17:32,310 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/data/default/Group_testDisabledTableMove/52c51fc6cd79cc275857ddc3b19788df 2023-07-12 19:17:32,312 INFO [StoreOpener-0f907f22115b2f6fe3cbd1c88d6b3459-1] regionserver.HStore(310): Store=0f907f22115b2f6fe3cbd1c88d6b3459/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 19:17:32,312 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/data/default/Group_testDisabledTableMove/52c51fc6cd79cc275857ddc3b19788df 2023-07-12 19:17:32,318 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/data/default/Group_testDisabledTableMove/0f907f22115b2f6fe3cbd1c88d6b3459 2023-07-12 19:17:32,319 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1055): writing seq id for 52c51fc6cd79cc275857ddc3b19788df 2023-07-12 19:17:32,323 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/data/default/Group_testDisabledTableMove/52c51fc6cd79cc275857ddc3b19788df/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-12 19:17:32,324 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1072): Opened 52c51fc6cd79cc275857ddc3b19788df; next sequenceid=2; 
SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9840917280, jitterRate=-0.08349315822124481}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-12 19:17:32,324 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(965): Region open journal for 52c51fc6cd79cc275857ddc3b19788df: 2023-07-12 19:17:32,325 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/data/default/Group_testDisabledTableMove/0f907f22115b2f6fe3cbd1c88d6b3459 2023-07-12 19:17:32,325 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689189451855.52c51fc6cd79cc275857ddc3b19788df., pid=135, masterSystemTime=1689189452295 2023-07-12 19:17:32,327 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689189451855.52c51fc6cd79cc275857ddc3b19788df. 2023-07-12 19:17:32,327 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(158): Opened Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689189451855.52c51fc6cd79cc275857ddc3b19788df. 2023-07-12 19:17:32,328 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(130): Open Group_testDisabledTableMove,,1689189451855.1de1cf82dbd8b232c0d007754e8b5579. 2023-07-12 19:17:32,328 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1de1cf82dbd8b232c0d007754e8b5579, NAME => 'Group_testDisabledTableMove,,1689189451855.1de1cf82dbd8b232c0d007754e8b5579.', STARTKEY => '', ENDKEY => 'aaaaa'} 2023-07-12 19:17:32,328 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testDisabledTableMove 1de1cf82dbd8b232c0d007754e8b5579 2023-07-12 19:17:32,328 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,,1689189451855.1de1cf82dbd8b232c0d007754e8b5579.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 19:17:32,329 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7894): checking encryption for 1de1cf82dbd8b232c0d007754e8b5579 2023-07-12 19:17:32,329 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7897): checking classloading for 1de1cf82dbd8b232c0d007754e8b5579 2023-07-12 19:17:32,330 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1055): writing seq id for 0f907f22115b2f6fe3cbd1c88d6b3459 2023-07-12 19:17:32,335 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=133 updating hbase:meta row=52c51fc6cd79cc275857ddc3b19788df, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase20.apache.org,43021,1689189426641 2023-07-12 19:17:32,335 INFO [StoreOpener-1de1cf82dbd8b232c0d007754e8b5579-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 
1de1cf82dbd8b232c0d007754e8b5579 2023-07-12 19:17:32,335 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testDisabledTableMove,r\\x1C\\xC7r\\x1B,1689189451855.52c51fc6cd79cc275857ddc3b19788df.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689189452334"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689189452334"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689189452334"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689189452334"}]},"ts":"1689189452334"} 2023-07-12 19:17:32,337 DEBUG [StoreOpener-1de1cf82dbd8b232c0d007754e8b5579-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/data/default/Group_testDisabledTableMove/1de1cf82dbd8b232c0d007754e8b5579/f 2023-07-12 19:17:32,337 DEBUG [StoreOpener-1de1cf82dbd8b232c0d007754e8b5579-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/data/default/Group_testDisabledTableMove/1de1cf82dbd8b232c0d007754e8b5579/f 2023-07-12 19:17:32,338 INFO [StoreOpener-1de1cf82dbd8b232c0d007754e8b5579-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1de1cf82dbd8b232c0d007754e8b5579 columnFamilyName f 2023-07-12 19:17:32,341 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=135, resume processing ppid=133 2023-07-12 19:17:32,341 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=135, ppid=133, state=SUCCESS; OpenRegionProcedure 52c51fc6cd79cc275857ddc3b19788df, server=jenkins-hbase20.apache.org,43021,1689189426641 in 194 msec 2023-07-12 19:17:32,343 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=133, ppid=129, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=52c51fc6cd79cc275857ddc3b19788df, ASSIGN in 361 msec 2023-07-12 19:17:32,356 INFO [StoreOpener-1de1cf82dbd8b232c0d007754e8b5579-1] regionserver.HStore(310): Store=1de1cf82dbd8b232c0d007754e8b5579/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 19:17:32,356 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/data/default/Group_testDisabledTableMove/0f907f22115b2f6fe3cbd1c88d6b3459/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-12 19:17:32,357 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1072): Opened 0f907f22115b2f6fe3cbd1c88d6b3459; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9853168000, 
jitterRate=-0.08235222101211548}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-12 19:17:32,357 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(965): Region open journal for 0f907f22115b2f6fe3cbd1c88d6b3459: 2023-07-12 19:17:32,357 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/data/default/Group_testDisabledTableMove/1de1cf82dbd8b232c0d007754e8b5579 2023-07-12 19:17:32,358 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/data/default/Group_testDisabledTableMove/1de1cf82dbd8b232c0d007754e8b5579 2023-07-12 19:17:32,358 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testDisabledTableMove,i\xBF\x14i\xBE,1689189451855.0f907f22115b2f6fe3cbd1c88d6b3459., pid=138, masterSystemTime=1689189452297 2023-07-12 19:17:32,360 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testDisabledTableMove,i\xBF\x14i\xBE,1689189451855.0f907f22115b2f6fe3cbd1c88d6b3459. 2023-07-12 19:17:32,360 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(158): Opened Group_testDisabledTableMove,i\xBF\x14i\xBE,1689189451855.0f907f22115b2f6fe3cbd1c88d6b3459. 2023-07-12 19:17:32,360 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(130): Open Group_testDisabledTableMove,aaaaa,1689189451855.a51addfadca1ca5f6216cfc7b8179683. 
2023-07-12 19:17:32,360 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => a51addfadca1ca5f6216cfc7b8179683, NAME => 'Group_testDisabledTableMove,aaaaa,1689189451855.a51addfadca1ca5f6216cfc7b8179683.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'} 2023-07-12 19:17:32,360 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testDisabledTableMove a51addfadca1ca5f6216cfc7b8179683 2023-07-12 19:17:32,360 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,aaaaa,1689189451855.a51addfadca1ca5f6216cfc7b8179683.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 19:17:32,361 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7894): checking encryption for a51addfadca1ca5f6216cfc7b8179683 2023-07-12 19:17:32,361 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7897): checking classloading for a51addfadca1ca5f6216cfc7b8179683 2023-07-12 19:17:32,361 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=132 updating hbase:meta row=0f907f22115b2f6fe3cbd1c88d6b3459, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase20.apache.org,39963,1689189426501 2023-07-12 19:17:32,361 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1055): writing seq id for 1de1cf82dbd8b232c0d007754e8b5579 2023-07-12 19:17:32,361 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testDisabledTableMove,i\\xBF\\x14i\\xBE,1689189451855.0f907f22115b2f6fe3cbd1c88d6b3459.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689189452361"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689189452361"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689189452361"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689189452361"}]},"ts":"1689189452361"} 2023-07-12 19:17:32,362 INFO [StoreOpener-a51addfadca1ca5f6216cfc7b8179683-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region a51addfadca1ca5f6216cfc7b8179683 2023-07-12 19:17:32,364 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/data/default/Group_testDisabledTableMove/1de1cf82dbd8b232c0d007754e8b5579/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-12 19:17:32,364 DEBUG [StoreOpener-a51addfadca1ca5f6216cfc7b8179683-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/data/default/Group_testDisabledTableMove/a51addfadca1ca5f6216cfc7b8179683/f 2023-07-12 19:17:32,364 DEBUG [StoreOpener-a51addfadca1ca5f6216cfc7b8179683-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/data/default/Group_testDisabledTableMove/a51addfadca1ca5f6216cfc7b8179683/f 2023-07-12 19:17:32,365 INFO [StoreOpener-a51addfadca1ca5f6216cfc7b8179683-1] 
compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region a51addfadca1ca5f6216cfc7b8179683 columnFamilyName f 2023-07-12 19:17:32,365 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1072): Opened 1de1cf82dbd8b232c0d007754e8b5579; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11308509440, jitterRate=0.053187012672424316}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-12 19:17:32,365 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(965): Region open journal for 1de1cf82dbd8b232c0d007754e8b5579: 2023-07-12 19:17:32,365 INFO [StoreOpener-a51addfadca1ca5f6216cfc7b8179683-1] regionserver.HStore(310): Store=a51addfadca1ca5f6216cfc7b8179683/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 19:17:32,366 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=138, resume processing ppid=132 2023-07-12 19:17:32,366 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=138, ppid=132, state=SUCCESS; OpenRegionProcedure 0f907f22115b2f6fe3cbd1c88d6b3459, server=jenkins-hbase20.apache.org,39963,1689189426501 in 216 msec 2023-07-12 19:17:32,366 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testDisabledTableMove,,1689189451855.1de1cf82dbd8b232c0d007754e8b5579., pid=137, masterSystemTime=1689189452295 2023-07-12 19:17:32,367 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/data/default/Group_testDisabledTableMove/a51addfadca1ca5f6216cfc7b8179683 2023-07-12 19:17:32,368 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=132, ppid=129, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=0f907f22115b2f6fe3cbd1c88d6b3459, ASSIGN in 386 msec 2023-07-12 19:17:32,368 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/data/default/Group_testDisabledTableMove/a51addfadca1ca5f6216cfc7b8179683 2023-07-12 19:17:32,369 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testDisabledTableMove,,1689189451855.1de1cf82dbd8b232c0d007754e8b5579. 2023-07-12 19:17:32,369 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(158): Opened Group_testDisabledTableMove,,1689189451855.1de1cf82dbd8b232c0d007754e8b5579. 
2023-07-12 19:17:32,369 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=130 updating hbase:meta row=1de1cf82dbd8b232c0d007754e8b5579, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase20.apache.org,43021,1689189426641 2023-07-12 19:17:32,369 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testDisabledTableMove,,1689189451855.1de1cf82dbd8b232c0d007754e8b5579.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689189452369"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689189452369"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689189452369"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689189452369"}]},"ts":"1689189452369"} 2023-07-12 19:17:32,371 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1055): writing seq id for a51addfadca1ca5f6216cfc7b8179683 2023-07-12 19:17:32,375 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=137, resume processing ppid=130 2023-07-12 19:17:32,376 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=137, ppid=130, state=SUCCESS; OpenRegionProcedure 1de1cf82dbd8b232c0d007754e8b5579, server=jenkins-hbase20.apache.org,43021,1689189426641 in 225 msec 2023-07-12 19:17:32,376 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/data/default/Group_testDisabledTableMove/a51addfadca1ca5f6216cfc7b8179683/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-12 19:17:32,377 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=130, ppid=129, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=1de1cf82dbd8b232c0d007754e8b5579, ASSIGN in 395 msec 2023-07-12 19:17:32,377 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1072): Opened a51addfadca1ca5f6216cfc7b8179683; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10160627360, jitterRate=-0.05371783673763275}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-12 19:17:32,378 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(965): Region open journal for a51addfadca1ca5f6216cfc7b8179683: 2023-07-12 19:17:32,378 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testDisabledTableMove,aaaaa,1689189451855.a51addfadca1ca5f6216cfc7b8179683., pid=136, masterSystemTime=1689189452297 2023-07-12 19:17:32,385 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testDisabledTableMove,aaaaa,1689189451855.a51addfadca1ca5f6216cfc7b8179683. 2023-07-12 19:17:32,386 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(158): Opened Group_testDisabledTableMove,aaaaa,1689189451855.a51addfadca1ca5f6216cfc7b8179683. 2023-07-12 19:17:32,386 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(130): Open Group_testDisabledTableMove,zzzzz,1689189451855.9511f015ded348ca2f3336292aef0f96. 
2023-07-12 19:17:32,386 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 9511f015ded348ca2f3336292aef0f96, NAME => 'Group_testDisabledTableMove,zzzzz,1689189451855.9511f015ded348ca2f3336292aef0f96.', STARTKEY => 'zzzzz', ENDKEY => ''} 2023-07-12 19:17:32,386 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testDisabledTableMove 9511f015ded348ca2f3336292aef0f96 2023-07-12 19:17:32,387 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=131 updating hbase:meta row=a51addfadca1ca5f6216cfc7b8179683, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase20.apache.org,39963,1689189426501 2023-07-12 19:17:32,387 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testDisabledTableMove,aaaaa,1689189451855.a51addfadca1ca5f6216cfc7b8179683.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689189452386"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689189452386"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689189452386"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689189452386"}]},"ts":"1689189452386"} 2023-07-12 19:17:32,387 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,zzzzz,1689189451855.9511f015ded348ca2f3336292aef0f96.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 19:17:32,388 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7894): checking encryption for 9511f015ded348ca2f3336292aef0f96 2023-07-12 19:17:32,388 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7897): checking classloading for 9511f015ded348ca2f3336292aef0f96 2023-07-12 19:17:32,391 INFO [StoreOpener-9511f015ded348ca2f3336292aef0f96-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 9511f015ded348ca2f3336292aef0f96 2023-07-12 19:17:32,393 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=136, resume processing ppid=131 2023-07-12 19:17:32,393 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=136, ppid=131, state=SUCCESS; OpenRegionProcedure a51addfadca1ca5f6216cfc7b8179683, server=jenkins-hbase20.apache.org,39963,1689189426501 in 246 msec 2023-07-12 19:17:32,395 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=131, ppid=129, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=a51addfadca1ca5f6216cfc7b8179683, ASSIGN in 413 msec 2023-07-12 19:17:32,395 DEBUG [StoreOpener-9511f015ded348ca2f3336292aef0f96-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/data/default/Group_testDisabledTableMove/9511f015ded348ca2f3336292aef0f96/f 2023-07-12 19:17:32,395 DEBUG [StoreOpener-9511f015ded348ca2f3336292aef0f96-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/data/default/Group_testDisabledTableMove/9511f015ded348ca2f3336292aef0f96/f 
2023-07-12 19:17:32,396 INFO [StoreOpener-9511f015ded348ca2f3336292aef0f96-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 9511f015ded348ca2f3336292aef0f96 columnFamilyName f 2023-07-12 19:17:32,397 INFO [StoreOpener-9511f015ded348ca2f3336292aef0f96-1] regionserver.HStore(310): Store=9511f015ded348ca2f3336292aef0f96/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 19:17:32,398 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/data/default/Group_testDisabledTableMove/9511f015ded348ca2f3336292aef0f96 2023-07-12 19:17:32,399 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/data/default/Group_testDisabledTableMove/9511f015ded348ca2f3336292aef0f96 2023-07-12 19:17:32,406 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1055): writing seq id for 9511f015ded348ca2f3336292aef0f96 2023-07-12 19:17:32,416 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/data/default/Group_testDisabledTableMove/9511f015ded348ca2f3336292aef0f96/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-12 19:17:32,417 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1072): Opened 9511f015ded348ca2f3336292aef0f96; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10967002240, jitterRate=0.021381676197052002}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-12 19:17:32,417 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(965): Region open journal for 9511f015ded348ca2f3336292aef0f96: 2023-07-12 19:17:32,419 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testDisabledTableMove,zzzzz,1689189451855.9511f015ded348ca2f3336292aef0f96., pid=139, masterSystemTime=1689189452297 2023-07-12 19:17:32,422 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testDisabledTableMove,zzzzz,1689189451855.9511f015ded348ca2f3336292aef0f96. 2023-07-12 19:17:32,422 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(158): Opened Group_testDisabledTableMove,zzzzz,1689189451855.9511f015ded348ca2f3336292aef0f96. 
2023-07-12 19:17:32,422 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=134 updating hbase:meta row=9511f015ded348ca2f3336292aef0f96, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase20.apache.org,39963,1689189426501 2023-07-12 19:17:32,422 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testDisabledTableMove,zzzzz,1689189451855.9511f015ded348ca2f3336292aef0f96.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689189452422"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689189452422"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689189452422"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689189452422"}]},"ts":"1689189452422"} 2023-07-12 19:17:32,426 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=139, resume processing ppid=134 2023-07-12 19:17:32,426 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=139, ppid=134, state=SUCCESS; OpenRegionProcedure 9511f015ded348ca2f3336292aef0f96, server=jenkins-hbase20.apache.org,39963,1689189426501 in 273 msec 2023-07-12 19:17:32,428 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=134, resume processing ppid=129 2023-07-12 19:17:32,428 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=134, ppid=129, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=9511f015ded348ca2f3336292aef0f96, ASSIGN in 446 msec 2023-07-12 19:17:32,428 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=129, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=Group_testDisabledTableMove execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-12 19:17:32,428 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testDisabledTableMove","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689189452428"}]},"ts":"1689189452428"} 2023-07-12 19:17:32,430 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=Group_testDisabledTableMove, state=ENABLED in hbase:meta 2023-07-12 19:17:32,435 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=129, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=Group_testDisabledTableMove execute state=CREATE_TABLE_POST_OPERATION 2023-07-12 19:17:32,438 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=129, state=SUCCESS; CreateTableProcedure table=Group_testDisabledTableMove in 580 msec 2023-07-12 19:17:32,463 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.MasterRpcServices(1230): Checking to see if procedure is done pid=129 2023-07-12 19:17:32,463 INFO [Listener at localhost.localdomain/34239] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:Group_testDisabledTableMove, procId: 129 completed 2023-07-12 19:17:32,463 DEBUG [Listener at localhost.localdomain/34239] hbase.HBaseTestingUtility(3430): Waiting until all regions of table Group_testDisabledTableMove get assigned. Timeout = 60000ms 2023-07-12 19:17:32,464 INFO [Listener at localhost.localdomain/34239] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 19:17:32,469 INFO [Listener at localhost.localdomain/34239] hbase.HBaseTestingUtility(3484): All regions for table Group_testDisabledTableMove assigned to meta. Checking AM states. 
2023-07-12 19:17:32,469 INFO [Listener at localhost.localdomain/34239] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 19:17:32,469 INFO [Listener at localhost.localdomain/34239] hbase.HBaseTestingUtility(3504): All regions for table Group_testDisabledTableMove assigned. 2023-07-12 19:17:32,470 INFO [Listener at localhost.localdomain/34239] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 19:17:32,479 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//148.251.75.209 initiates rsgroup info retrieval, table=Group_testDisabledTableMove 2023-07-12 19:17:32,480 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-12 19:17:32,480 INFO [Listener at localhost.localdomain/34239] client.HBaseAdmin$15(890): Started disable of Group_testDisabledTableMove 2023-07-12 19:17:32,481 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.HMaster$11(2418): Client=jenkins//148.251.75.209 disable Group_testDisabledTableMove 2023-07-12 19:17:32,482 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] procedure2.ProcedureExecutor(1029): Stored pid=140, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=Group_testDisabledTableMove 2023-07-12 19:17:32,485 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.MasterRpcServices(1230): Checking to see if procedure is done pid=140 2023-07-12 19:17:32,485 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testDisabledTableMove","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689189452485"}]},"ts":"1689189452485"} 2023-07-12 19:17:32,486 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=Group_testDisabledTableMove, state=DISABLING in hbase:meta 2023-07-12 19:17:32,487 INFO [PEWorker-2] procedure.DisableTableProcedure(293): Set Group_testDisabledTableMove to state=DISABLING 2023-07-12 19:17:32,488 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=141, ppid=140, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=1de1cf82dbd8b232c0d007754e8b5579, UNASSIGN}, {pid=142, ppid=140, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=a51addfadca1ca5f6216cfc7b8179683, UNASSIGN}, {pid=143, ppid=140, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=0f907f22115b2f6fe3cbd1c88d6b3459, UNASSIGN}, {pid=144, ppid=140, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=52c51fc6cd79cc275857ddc3b19788df, UNASSIGN}, {pid=145, ppid=140, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=9511f015ded348ca2f3336292aef0f96, UNASSIGN}] 2023-07-12 19:17:32,490 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=143, ppid=140, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=0f907f22115b2f6fe3cbd1c88d6b3459, UNASSIGN 2023-07-12 19:17:32,490 INFO [PEWorker-4] 
procedure.MasterProcedureScheduler(727): Took xlock for pid=141, ppid=140, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=1de1cf82dbd8b232c0d007754e8b5579, UNASSIGN 2023-07-12 19:17:32,490 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=144, ppid=140, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=52c51fc6cd79cc275857ddc3b19788df, UNASSIGN 2023-07-12 19:17:32,490 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=142, ppid=140, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=a51addfadca1ca5f6216cfc7b8179683, UNASSIGN 2023-07-12 19:17:32,490 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=145, ppid=140, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=9511f015ded348ca2f3336292aef0f96, UNASSIGN 2023-07-12 19:17:32,491 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=141 updating hbase:meta row=1de1cf82dbd8b232c0d007754e8b5579, regionState=CLOSING, regionLocation=jenkins-hbase20.apache.org,43021,1689189426641 2023-07-12 19:17:32,491 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=143 updating hbase:meta row=0f907f22115b2f6fe3cbd1c88d6b3459, regionState=CLOSING, regionLocation=jenkins-hbase20.apache.org,39963,1689189426501 2023-07-12 19:17:32,491 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=144 updating hbase:meta row=52c51fc6cd79cc275857ddc3b19788df, regionState=CLOSING, regionLocation=jenkins-hbase20.apache.org,43021,1689189426641 2023-07-12 19:17:32,491 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,i\\xBF\\x14i\\xBE,1689189451855.0f907f22115b2f6fe3cbd1c88d6b3459.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689189452491"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689189452491"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689189452491"}]},"ts":"1689189452491"} 2023-07-12 19:17:32,492 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,r\\x1C\\xC7r\\x1B,1689189451855.52c51fc6cd79cc275857ddc3b19788df.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689189452491"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689189452491"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689189452491"}]},"ts":"1689189452491"} 2023-07-12 19:17:32,491 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,,1689189451855.1de1cf82dbd8b232c0d007754e8b5579.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689189452491"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689189452491"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689189452491"}]},"ts":"1689189452491"} 2023-07-12 19:17:32,492 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=145 updating hbase:meta row=9511f015ded348ca2f3336292aef0f96, regionState=CLOSING, regionLocation=jenkins-hbase20.apache.org,39963,1689189426501 2023-07-12 19:17:32,491 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=142 updating hbase:meta row=a51addfadca1ca5f6216cfc7b8179683, regionState=CLOSING, regionLocation=jenkins-hbase20.apache.org,39963,1689189426501 2023-07-12 19:17:32,492 
DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,zzzzz,1689189451855.9511f015ded348ca2f3336292aef0f96.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689189452491"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689189452491"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689189452491"}]},"ts":"1689189452491"} 2023-07-12 19:17:32,492 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,aaaaa,1689189451855.a51addfadca1ca5f6216cfc7b8179683.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689189452491"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689189452491"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689189452491"}]},"ts":"1689189452491"} 2023-07-12 19:17:32,493 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=146, ppid=143, state=RUNNABLE; CloseRegionProcedure 0f907f22115b2f6fe3cbd1c88d6b3459, server=jenkins-hbase20.apache.org,39963,1689189426501}] 2023-07-12 19:17:32,494 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=147, ppid=144, state=RUNNABLE; CloseRegionProcedure 52c51fc6cd79cc275857ddc3b19788df, server=jenkins-hbase20.apache.org,43021,1689189426641}] 2023-07-12 19:17:32,495 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=148, ppid=141, state=RUNNABLE; CloseRegionProcedure 1de1cf82dbd8b232c0d007754e8b5579, server=jenkins-hbase20.apache.org,43021,1689189426641}] 2023-07-12 19:17:32,497 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=150, ppid=145, state=RUNNABLE; CloseRegionProcedure 9511f015ded348ca2f3336292aef0f96, server=jenkins-hbase20.apache.org,39963,1689189426501}] 2023-07-12 19:17:32,497 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=149, ppid=142, state=RUNNABLE; CloseRegionProcedure a51addfadca1ca5f6216cfc7b8179683, server=jenkins-hbase20.apache.org,39963,1689189426501}] 2023-07-12 19:17:32,586 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.MasterRpcServices(1230): Checking to see if procedure is done pid=140 2023-07-12 19:17:32,646 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.UnassignRegionHandler(111): Close 9511f015ded348ca2f3336292aef0f96 2023-07-12 19:17:32,646 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.UnassignRegionHandler(111): Close 52c51fc6cd79cc275857ddc3b19788df 2023-07-12 19:17:32,647 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing 9511f015ded348ca2f3336292aef0f96, disabling compactions & flushes 2023-07-12 19:17:32,648 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing 52c51fc6cd79cc275857ddc3b19788df, disabling compactions & flushes 2023-07-12 19:17:32,648 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,zzzzz,1689189451855.9511f015ded348ca2f3336292aef0f96. 2023-07-12 19:17:32,648 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689189451855.52c51fc6cd79cc275857ddc3b19788df. 
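The DisableTableProcedure stored earlier (pid=140) has fanned out into one TransitRegionStateProcedure (UNASSIGN) per region, and the region servers are now closing them. From the client, the whole sequence is driven by a single Admin call; a minimal sketch, assuming the standard HBase 2.x client API:

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;

    public class DisableGroupTestTable {
      public static void main(String[] args) throws Exception {
        TableName table = TableName.valueOf("Group_testDisabledTableMove");
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             Admin admin = conn.getAdmin()) {
          // Blocks until the DisableTableProcedure and its per-region UNASSIGNs complete.
          admin.disableTable(table);
          if (!admin.isTableDisabled(table)) {
            throw new IllegalStateException(table + " did not reach DISABLED");
          }
        }
      }
    }
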
2023-07-12 19:17:32,648 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689189451855.52c51fc6cd79cc275857ddc3b19788df. 2023-07-12 19:17:32,648 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,zzzzz,1689189451855.9511f015ded348ca2f3336292aef0f96. 2023-07-12 19:17:32,648 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689189451855.52c51fc6cd79cc275857ddc3b19788df. after waiting 0 ms 2023-07-12 19:17:32,648 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689189451855.52c51fc6cd79cc275857ddc3b19788df. 2023-07-12 19:17:32,648 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,zzzzz,1689189451855.9511f015ded348ca2f3336292aef0f96. after waiting 0 ms 2023-07-12 19:17:32,649 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,zzzzz,1689189451855.9511f015ded348ca2f3336292aef0f96. 2023-07-12 19:17:32,651 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/data/default/Group_testDisabledTableMove/52c51fc6cd79cc275857ddc3b19788df/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-12 19:17:32,652 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/data/default/Group_testDisabledTableMove/9511f015ded348ca2f3336292aef0f96/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-12 19:17:32,652 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689189451855.52c51fc6cd79cc275857ddc3b19788df. 2023-07-12 19:17:32,652 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for 52c51fc6cd79cc275857ddc3b19788df: 2023-07-12 19:17:32,652 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,zzzzz,1689189451855.9511f015ded348ca2f3336292aef0f96. 2023-07-12 19:17:32,652 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for 9511f015ded348ca2f3336292aef0f96: 2023-07-12 19:17:32,654 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.UnassignRegionHandler(149): Closed 52c51fc6cd79cc275857ddc3b19788df 2023-07-12 19:17:32,654 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.UnassignRegionHandler(111): Close 1de1cf82dbd8b232c0d007754e8b5579 2023-07-12 19:17:32,655 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing 1de1cf82dbd8b232c0d007754e8b5579, disabling compactions & flushes 2023-07-12 19:17:32,655 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,,1689189451855.1de1cf82dbd8b232c0d007754e8b5579. 
2023-07-12 19:17:32,655 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,,1689189451855.1de1cf82dbd8b232c0d007754e8b5579. 2023-07-12 19:17:32,655 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,,1689189451855.1de1cf82dbd8b232c0d007754e8b5579. after waiting 0 ms 2023-07-12 19:17:32,655 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,,1689189451855.1de1cf82dbd8b232c0d007754e8b5579. 2023-07-12 19:17:32,655 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=144 updating hbase:meta row=52c51fc6cd79cc275857ddc3b19788df, regionState=CLOSED 2023-07-12 19:17:32,655 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,r\\x1C\\xC7r\\x1B,1689189451855.52c51fc6cd79cc275857ddc3b19788df.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689189452655"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689189452655"}]},"ts":"1689189452655"} 2023-07-12 19:17:32,655 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.UnassignRegionHandler(149): Closed 9511f015ded348ca2f3336292aef0f96 2023-07-12 19:17:32,656 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.UnassignRegionHandler(111): Close a51addfadca1ca5f6216cfc7b8179683 2023-07-12 19:17:32,657 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing a51addfadca1ca5f6216cfc7b8179683, disabling compactions & flushes 2023-07-12 19:17:32,657 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,aaaaa,1689189451855.a51addfadca1ca5f6216cfc7b8179683. 2023-07-12 19:17:32,657 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=145 updating hbase:meta row=9511f015ded348ca2f3336292aef0f96, regionState=CLOSED 2023-07-12 19:17:32,657 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,zzzzz,1689189451855.9511f015ded348ca2f3336292aef0f96.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689189452657"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689189452657"}]},"ts":"1689189452657"} 2023-07-12 19:17:32,657 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,aaaaa,1689189451855.a51addfadca1ca5f6216cfc7b8179683. 2023-07-12 19:17:32,657 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,aaaaa,1689189451855.a51addfadca1ca5f6216cfc7b8179683. after waiting 0 ms 2023-07-12 19:17:32,657 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,aaaaa,1689189451855.a51addfadca1ca5f6216cfc7b8179683. 
2023-07-12 19:17:32,659 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=147, resume processing ppid=144 2023-07-12 19:17:32,659 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=147, ppid=144, state=SUCCESS; CloseRegionProcedure 52c51fc6cd79cc275857ddc3b19788df, server=jenkins-hbase20.apache.org,43021,1689189426641 in 163 msec 2023-07-12 19:17:32,660 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=150, resume processing ppid=145 2023-07-12 19:17:32,660 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=144, ppid=140, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=52c51fc6cd79cc275857ddc3b19788df, UNASSIGN in 171 msec 2023-07-12 19:17:32,660 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=150, ppid=145, state=SUCCESS; CloseRegionProcedure 9511f015ded348ca2f3336292aef0f96, server=jenkins-hbase20.apache.org,39963,1689189426501 in 162 msec 2023-07-12 19:17:32,661 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/data/default/Group_testDisabledTableMove/1de1cf82dbd8b232c0d007754e8b5579/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-12 19:17:32,662 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/data/default/Group_testDisabledTableMove/a51addfadca1ca5f6216cfc7b8179683/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-12 19:17:32,662 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,,1689189451855.1de1cf82dbd8b232c0d007754e8b5579. 2023-07-12 19:17:32,662 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for 1de1cf82dbd8b232c0d007754e8b5579: 2023-07-12 19:17:32,662 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,aaaaa,1689189451855.a51addfadca1ca5f6216cfc7b8179683. 
2023-07-12 19:17:32,662 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for a51addfadca1ca5f6216cfc7b8179683: 2023-07-12 19:17:32,667 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=145, ppid=140, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=9511f015ded348ca2f3336292aef0f96, UNASSIGN in 172 msec 2023-07-12 19:17:32,667 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.UnassignRegionHandler(149): Closed 1de1cf82dbd8b232c0d007754e8b5579 2023-07-12 19:17:32,668 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=141 updating hbase:meta row=1de1cf82dbd8b232c0d007754e8b5579, regionState=CLOSED 2023-07-12 19:17:32,668 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,,1689189451855.1de1cf82dbd8b232c0d007754e8b5579.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689189452668"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689189452668"}]},"ts":"1689189452668"} 2023-07-12 19:17:32,668 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.UnassignRegionHandler(149): Closed a51addfadca1ca5f6216cfc7b8179683 2023-07-12 19:17:32,668 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.UnassignRegionHandler(111): Close 0f907f22115b2f6fe3cbd1c88d6b3459 2023-07-12 19:17:32,670 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing 0f907f22115b2f6fe3cbd1c88d6b3459, disabling compactions & flushes 2023-07-12 19:17:32,670 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,i\xBF\x14i\xBE,1689189451855.0f907f22115b2f6fe3cbd1c88d6b3459. 2023-07-12 19:17:32,670 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,i\xBF\x14i\xBE,1689189451855.0f907f22115b2f6fe3cbd1c88d6b3459. 2023-07-12 19:17:32,670 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,i\xBF\x14i\xBE,1689189451855.0f907f22115b2f6fe3cbd1c88d6b3459. after waiting 0 ms 2023-07-12 19:17:32,670 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,i\xBF\x14i\xBE,1689189451855.0f907f22115b2f6fe3cbd1c88d6b3459. 
2023-07-12 19:17:32,671 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=142 updating hbase:meta row=a51addfadca1ca5f6216cfc7b8179683, regionState=CLOSED 2023-07-12 19:17:32,671 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,aaaaa,1689189451855.a51addfadca1ca5f6216cfc7b8179683.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689189452671"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689189452671"}]},"ts":"1689189452671"} 2023-07-12 19:17:32,673 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=148, resume processing ppid=141 2023-07-12 19:17:32,673 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=148, ppid=141, state=SUCCESS; CloseRegionProcedure 1de1cf82dbd8b232c0d007754e8b5579, server=jenkins-hbase20.apache.org,43021,1689189426641 in 176 msec 2023-07-12 19:17:32,674 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=149, resume processing ppid=142 2023-07-12 19:17:32,674 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=149, ppid=142, state=SUCCESS; CloseRegionProcedure a51addfadca1ca5f6216cfc7b8179683, server=jenkins-hbase20.apache.org,39963,1689189426501 in 175 msec 2023-07-12 19:17:32,675 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=141, ppid=140, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=1de1cf82dbd8b232c0d007754e8b5579, UNASSIGN in 185 msec 2023-07-12 19:17:32,675 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=142, ppid=140, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=a51addfadca1ca5f6216cfc7b8179683, UNASSIGN in 186 msec 2023-07-12 19:17:32,679 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/data/default/Group_testDisabledTableMove/0f907f22115b2f6fe3cbd1c88d6b3459/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-12 19:17:32,680 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,i\xBF\x14i\xBE,1689189451855.0f907f22115b2f6fe3cbd1c88d6b3459. 
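Once the disable completes just below, the test moves the now-disabled table into the rsgroup Group_testDisabledTableMove_151152376; because the table has no online regions, the master only rewrites the group metadata in ZooKeeper and skips region moves ("Skipping move regions because the table ... is disabled"). A hedged sketch of that client call, assuming the hbase-rsgroup RSGroupAdminClient API that this test exercises:

    import java.util.Collections;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

    public class MoveDisabledTableToGroup {
      public static void main(String[] args) throws Exception {
        TableName table = TableName.valueOf("Group_testDisabledTableMove");
        String group = "Group_testDisabledTableMove_151152376";
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create())) {
          RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
          // Reassigns group ownership in the rsgroup metadata; no regions move while the table is disabled.
          rsGroupAdmin.moveTables(Collections.singleton(table), group);
        }
      }
    }
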
2023-07-12 19:17:32,680 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for 0f907f22115b2f6fe3cbd1c88d6b3459: 2023-07-12 19:17:32,682 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.UnassignRegionHandler(149): Closed 0f907f22115b2f6fe3cbd1c88d6b3459 2023-07-12 19:17:32,682 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=143 updating hbase:meta row=0f907f22115b2f6fe3cbd1c88d6b3459, regionState=CLOSED 2023-07-12 19:17:32,682 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,i\\xBF\\x14i\\xBE,1689189451855.0f907f22115b2f6fe3cbd1c88d6b3459.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689189452682"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689189452682"}]},"ts":"1689189452682"} 2023-07-12 19:17:32,685 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=146, resume processing ppid=143 2023-07-12 19:17:32,686 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=146, ppid=143, state=SUCCESS; CloseRegionProcedure 0f907f22115b2f6fe3cbd1c88d6b3459, server=jenkins-hbase20.apache.org,39963,1689189426501 in 190 msec 2023-07-12 19:17:32,687 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=143, resume processing ppid=140 2023-07-12 19:17:32,687 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=143, ppid=140, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=0f907f22115b2f6fe3cbd1c88d6b3459, UNASSIGN in 197 msec 2023-07-12 19:17:32,687 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testDisabledTableMove","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689189452687"}]},"ts":"1689189452687"} 2023-07-12 19:17:32,688 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=Group_testDisabledTableMove, state=DISABLED in hbase:meta 2023-07-12 19:17:32,689 INFO [PEWorker-4] procedure.DisableTableProcedure(305): Set Group_testDisabledTableMove to state=DISABLED 2023-07-12 19:17:32,691 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=140, state=SUCCESS; DisableTableProcedure table=Group_testDisabledTableMove in 209 msec 2023-07-12 19:17:32,787 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.MasterRpcServices(1230): Checking to see if procedure is done pid=140 2023-07-12 19:17:32,787 INFO [Listener at localhost.localdomain/34239] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:Group_testDisabledTableMove, procId: 140 completed 2023-07-12 19:17:32,788 INFO [Listener at localhost.localdomain/34239] rsgroup.TestRSGroupsAdmin1(370): Moving table Group_testDisabledTableMove to Group_testDisabledTableMove_151152376 2023-07-12 19:17:32,790 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//148.251.75.209 move tables [Group_testDisabledTableMove] to rsgroup Group_testDisabledTableMove_151152376 2023-07-12 19:17:32,792 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testDisabledTableMove_151152376 2023-07-12 19:17:32,792 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 19:17:32,792 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 19:17:32,793 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-12 19:17:32,794 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminServer(336): Skipping move regions because the table Group_testDisabledTableMove is disabled 2023-07-12 19:17:32,794 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group Group_testDisabledTableMove_151152376, current retry=0 2023-07-12 19:17:32,794 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminServer(369): All regions from table(s) [Group_testDisabledTableMove] moved to target group Group_testDisabledTableMove_151152376. 2023-07-12 19:17:32,794 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveTables 2023-07-12 19:17:32,796 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-12 19:17:32,796 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 19:17:32,799 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//148.251.75.209 initiates rsgroup info retrieval, table=Group_testDisabledTableMove 2023-07-12 19:17:32,799 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-12 19:17:32,800 INFO [Listener at localhost.localdomain/34239] client.HBaseAdmin$15(890): Started disable of Group_testDisabledTableMove 2023-07-12 19:17:32,801 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.HMaster$11(2418): Client=jenkins//148.251.75.209 disable Group_testDisabledTableMove 2023-07-12 19:17:32,801 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.TableNotEnabledException: Group_testDisabledTableMove at org.apache.hadoop.hbase.master.procedure.AbstractStateMachineTableProcedure.preflightChecks(AbstractStateMachineTableProcedure.java:163) at org.apache.hadoop.hbase.master.procedure.DisableTableProcedure.(DisableTableProcedure.java:78) at org.apache.hadoop.hbase.master.HMaster$11.run(HMaster.java:2429) at org.apache.hadoop.hbase.master.procedure.MasterProcedureUtil.submitProcedure(MasterProcedureUtil.java:132) at org.apache.hadoop.hbase.master.HMaster.disableTable(HMaster.java:2413) at org.apache.hadoop.hbase.master.MasterRpcServices.disableTable(MasterRpcServices.java:787) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at 
org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-12 19:17:32,802 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] ipc.CallRunner(144): callId: 922 service: MasterService methodName: DisableTable size: 89 connection: 148.251.75.209:37696 deadline: 1689189512801, exception=org.apache.hadoop.hbase.TableNotEnabledException: Group_testDisabledTableMove 2023-07-12 19:17:32,802 DEBUG [Listener at localhost.localdomain/34239] hbase.HBaseTestingUtility(1826): Table: Group_testDisabledTableMove already disabled, so just deleting it. 2023-07-12 19:17:32,803 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.HMaster$5(2228): Client=jenkins//148.251.75.209 delete Group_testDisabledTableMove 2023-07-12 19:17:32,805 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] procedure2.ProcedureExecutor(1029): Stored pid=152, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=Group_testDisabledTableMove 2023-07-12 19:17:32,807 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=152, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=Group_testDisabledTableMove 2023-07-12 19:17:32,807 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'Group_testDisabledTableMove' from rsgroup 'Group_testDisabledTableMove_151152376' 2023-07-12 19:17:32,815 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=152, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=Group_testDisabledTableMove 2023-07-12 19:17:32,820 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testDisabledTableMove_151152376 2023-07-12 19:17:32,822 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 19:17:32,823 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 19:17:32,823 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-12 19:17:32,824 DEBUG [HFileArchiver-6] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/.tmp/data/default/Group_testDisabledTableMove/1de1cf82dbd8b232c0d007754e8b5579 2023-07-12 19:17:32,824 DEBUG [HFileArchiver-8] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/.tmp/data/default/Group_testDisabledTableMove/a51addfadca1ca5f6216cfc7b8179683 2023-07-12 19:17:32,824 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/.tmp/data/default/Group_testDisabledTableMove/9511f015ded348ca2f3336292aef0f96 2023-07-12 19:17:32,824 DEBUG [HFileArchiver-7] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/.tmp/data/default/Group_testDisabledTableMove/52c51fc6cd79cc275857ddc3b19788df 2023-07-12 19:17:32,824 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): 
ARCHIVING hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/.tmp/data/default/Group_testDisabledTableMove/0f907f22115b2f6fe3cbd1c88d6b3459 2023-07-12 19:17:32,827 DEBUG [HFileArchiver-6] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/.tmp/data/default/Group_testDisabledTableMove/1de1cf82dbd8b232c0d007754e8b5579/f, FileablePath, hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/.tmp/data/default/Group_testDisabledTableMove/1de1cf82dbd8b232c0d007754e8b5579/recovered.edits] 2023-07-12 19:17:32,827 DEBUG [HFileArchiver-1] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/.tmp/data/default/Group_testDisabledTableMove/9511f015ded348ca2f3336292aef0f96/f, FileablePath, hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/.tmp/data/default/Group_testDisabledTableMove/9511f015ded348ca2f3336292aef0f96/recovered.edits] 2023-07-12 19:17:32,827 DEBUG [HFileArchiver-4] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/.tmp/data/default/Group_testDisabledTableMove/0f907f22115b2f6fe3cbd1c88d6b3459/f, FileablePath, hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/.tmp/data/default/Group_testDisabledTableMove/0f907f22115b2f6fe3cbd1c88d6b3459/recovered.edits] 2023-07-12 19:17:32,827 DEBUG [HFileArchiver-7] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/.tmp/data/default/Group_testDisabledTableMove/52c51fc6cd79cc275857ddc3b19788df/f, FileablePath, hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/.tmp/data/default/Group_testDisabledTableMove/52c51fc6cd79cc275857ddc3b19788df/recovered.edits] 2023-07-12 19:17:32,828 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.MasterRpcServices(1230): Checking to see if procedure is done pid=152 2023-07-12 19:17:32,828 DEBUG [HFileArchiver-8] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/.tmp/data/default/Group_testDisabledTableMove/a51addfadca1ca5f6216cfc7b8179683/f, FileablePath, hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/.tmp/data/default/Group_testDisabledTableMove/a51addfadca1ca5f6216cfc7b8179683/recovered.edits] 2023-07-12 19:17:32,835 DEBUG [HFileArchiver-7] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/.tmp/data/default/Group_testDisabledTableMove/52c51fc6cd79cc275857ddc3b19788df/recovered.edits/4.seqid to hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/archive/data/default/Group_testDisabledTableMove/52c51fc6cd79cc275857ddc3b19788df/recovered.edits/4.seqid 2023-07-12 19:17:32,835 DEBUG [HFileArchiver-4] backup.HFileArchiver(582): Archived from FileablePath, 
hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/.tmp/data/default/Group_testDisabledTableMove/0f907f22115b2f6fe3cbd1c88d6b3459/recovered.edits/4.seqid to hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/archive/data/default/Group_testDisabledTableMove/0f907f22115b2f6fe3cbd1c88d6b3459/recovered.edits/4.seqid 2023-07-12 19:17:32,836 DEBUG [HFileArchiver-1] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/.tmp/data/default/Group_testDisabledTableMove/9511f015ded348ca2f3336292aef0f96/recovered.edits/4.seqid to hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/archive/data/default/Group_testDisabledTableMove/9511f015ded348ca2f3336292aef0f96/recovered.edits/4.seqid 2023-07-12 19:17:32,836 DEBUG [HFileArchiver-7] backup.HFileArchiver(596): Deleted hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/.tmp/data/default/Group_testDisabledTableMove/52c51fc6cd79cc275857ddc3b19788df 2023-07-12 19:17:32,836 DEBUG [HFileArchiver-4] backup.HFileArchiver(596): Deleted hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/.tmp/data/default/Group_testDisabledTableMove/0f907f22115b2f6fe3cbd1c88d6b3459 2023-07-12 19:17:32,836 DEBUG [HFileArchiver-6] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/.tmp/data/default/Group_testDisabledTableMove/1de1cf82dbd8b232c0d007754e8b5579/recovered.edits/4.seqid to hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/archive/data/default/Group_testDisabledTableMove/1de1cf82dbd8b232c0d007754e8b5579/recovered.edits/4.seqid 2023-07-12 19:17:32,836 DEBUG [HFileArchiver-8] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/.tmp/data/default/Group_testDisabledTableMove/a51addfadca1ca5f6216cfc7b8179683/recovered.edits/4.seqid to hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/archive/data/default/Group_testDisabledTableMove/a51addfadca1ca5f6216cfc7b8179683/recovered.edits/4.seqid 2023-07-12 19:17:32,837 DEBUG [HFileArchiver-1] backup.HFileArchiver(596): Deleted hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/.tmp/data/default/Group_testDisabledTableMove/9511f015ded348ca2f3336292aef0f96 2023-07-12 19:17:32,837 DEBUG [HFileArchiver-6] backup.HFileArchiver(596): Deleted hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/.tmp/data/default/Group_testDisabledTableMove/1de1cf82dbd8b232c0d007754e8b5579 2023-07-12 19:17:32,837 DEBUG [HFileArchiver-8] backup.HFileArchiver(596): Deleted hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/.tmp/data/default/Group_testDisabledTableMove/a51addfadca1ca5f6216cfc7b8179683 2023-07-12 19:17:32,837 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(328): Archived Group_testDisabledTableMove regions 2023-07-12 19:17:32,839 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=152, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure 
table=Group_testDisabledTableMove 2023-07-12 19:17:32,841 WARN [PEWorker-5] procedure.DeleteTableProcedure(384): Deleting some vestigial 5 rows of Group_testDisabledTableMove from hbase:meta 2023-07-12 19:17:32,848 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(421): Removing 'Group_testDisabledTableMove' descriptor. 2023-07-12 19:17:32,849 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=152, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=Group_testDisabledTableMove 2023-07-12 19:17:32,849 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(411): Removing 'Group_testDisabledTableMove' from region states. 2023-07-12 19:17:32,849 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testDisabledTableMove,,1689189451855.1de1cf82dbd8b232c0d007754e8b5579.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689189452849"}]},"ts":"9223372036854775807"} 2023-07-12 19:17:32,849 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testDisabledTableMove,aaaaa,1689189451855.a51addfadca1ca5f6216cfc7b8179683.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689189452849"}]},"ts":"9223372036854775807"} 2023-07-12 19:17:32,850 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testDisabledTableMove,i\\xBF\\x14i\\xBE,1689189451855.0f907f22115b2f6fe3cbd1c88d6b3459.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689189452849"}]},"ts":"9223372036854775807"} 2023-07-12 19:17:32,850 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testDisabledTableMove,r\\x1C\\xC7r\\x1B,1689189451855.52c51fc6cd79cc275857ddc3b19788df.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689189452849"}]},"ts":"9223372036854775807"} 2023-07-12 19:17:32,850 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testDisabledTableMove,zzzzz,1689189451855.9511f015ded348ca2f3336292aef0f96.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689189452849"}]},"ts":"9223372036854775807"} 2023-07-12 19:17:32,852 INFO [PEWorker-5] hbase.MetaTableAccessor(1788): Deleted 5 regions from META 2023-07-12 19:17:32,852 DEBUG [PEWorker-5] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => 1de1cf82dbd8b232c0d007754e8b5579, NAME => 'Group_testDisabledTableMove,,1689189451855.1de1cf82dbd8b232c0d007754e8b5579.', STARTKEY => '', ENDKEY => 'aaaaa'}, {ENCODED => a51addfadca1ca5f6216cfc7b8179683, NAME => 'Group_testDisabledTableMove,aaaaa,1689189451855.a51addfadca1ca5f6216cfc7b8179683.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'}, {ENCODED => 0f907f22115b2f6fe3cbd1c88d6b3459, NAME => 'Group_testDisabledTableMove,i\xBF\x14i\xBE,1689189451855.0f907f22115b2f6fe3cbd1c88d6b3459.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'}, {ENCODED => 52c51fc6cd79cc275857ddc3b19788df, NAME => 'Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689189451855.52c51fc6cd79cc275857ddc3b19788df.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'}, {ENCODED => 9511f015ded348ca2f3336292aef0f96, NAME => 'Group_testDisabledTableMove,zzzzz,1689189451855.9511f015ded348ca2f3336292aef0f96.', STARTKEY => 'zzzzz', ENDKEY => ''}] 2023-07-12 19:17:32,852 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(415): Marking 'Group_testDisabledTableMove' as deleted. 
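The DeleteTableProcedure above archives each region directory under archive/data/default/..., removes the five region rows from hbase:meta, and drops the table descriptor. On the client side, the earlier TableNotEnabledException came from calling disableTable on an already-disabled table before deleting; the usual guard is to check the table state first (the test utility does the equivalent: "already disabled, so just deleting it"). A minimal sketch, assuming the standard HBase 2.x Admin API:

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;

    public class DropGroupTestTable {
      public static void main(String[] args) throws Exception {
        TableName table = TableName.valueOf("Group_testDisabledTableMove");
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             Admin admin = conn.getAdmin()) {
          // Avoid the TableNotEnabledException seen in the log: only disable if still enabled.
          if (admin.isTableEnabled(table)) {
            admin.disableTable(table);
          }
          // Triggers the DeleteTableProcedure: archive region dirs, delete meta rows, drop the descriptor.
          admin.deleteTable(table);
        }
      }
    }
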
2023-07-12 19:17:32,852 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testDisabledTableMove","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689189452852"}]},"ts":"9223372036854775807"} 2023-07-12 19:17:32,853 INFO [PEWorker-5] hbase.MetaTableAccessor(1658): Deleted table Group_testDisabledTableMove state from META 2023-07-12 19:17:32,855 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(130): Finished pid=152, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=Group_testDisabledTableMove 2023-07-12 19:17:32,856 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=152, state=SUCCESS; DeleteTableProcedure table=Group_testDisabledTableMove in 52 msec 2023-07-12 19:17:32,929 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.MasterRpcServices(1230): Checking to see if procedure is done pid=152 2023-07-12 19:17:32,929 INFO [Listener at localhost.localdomain/34239] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: default:Group_testDisabledTableMove, procId: 152 completed 2023-07-12 19:17:32,932 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-12 19:17:32,932 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 19:17:32,933 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//148.251.75.209 move tables [] to rsgroup default 2023-07-12 19:17:32,933 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-12 19:17:32,934 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveTables 2023-07-12 19:17:32,935 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//148.251.75.209 move servers [jenkins-hbase20.apache.org:36311, jenkins-hbase20.apache.org:36571] to rsgroup default 2023-07-12 19:17:32,937 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testDisabledTableMove_151152376 2023-07-12 19:17:32,937 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 19:17:32,937 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 19:17:32,938 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-12 19:17:32,939 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group Group_testDisabledTableMove_151152376, current retry=0 2023-07-12 19:17:32,939 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase20.apache.org,36311,1689189430768, jenkins-hbase20.apache.org,36571,1689189426727] are moved back to Group_testDisabledTableMove_151152376 2023-07-12 19:17:32,939 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminServer(438): Move servers done: Group_testDisabledTableMove_151152376 => default 2023-07-12 19:17:32,939 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveServers 2023-07-12 19:17:32,940 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//148.251.75.209 remove rsgroup Group_testDisabledTableMove_151152376 2023-07-12 19:17:32,944 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 19:17:32,944 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 19:17:32,945 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-12 19:17:32,951 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-12 19:17:32,952 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//148.251.75.209 move tables [] to rsgroup default 2023-07-12 19:17:32,952 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
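Tear-down then returns the two servers (ports 36311 and 36571) to the default group and removes the temporary rsgroup; the later attempt to move the master's own address (port 33033) into a group is rejected further below with a ConstraintException, since only live region servers can be group members. A hedged sketch of the cleanup calls, again assuming the hbase-rsgroup client API; host names and ports are the ones from this log:

    import java.util.HashSet;
    import java.util.Set;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.net.Address;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
    import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

    public class CleanUpRSGroup {
      public static void main(String[] args) throws Exception {
        String group = "Group_testDisabledTableMove_151152376";
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create())) {
          RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
          Set<Address> servers = new HashSet<>();
          servers.add(Address.fromParts("jenkins-hbase20.apache.org", 36311));
          servers.add(Address.fromParts("jenkins-hbase20.apache.org", 36571));
          // Return the group's servers to 'default', then drop the now-empty group.
          rsGroupAdmin.moveServers(servers, RSGroupInfo.DEFAULT_GROUP);
          rsGroupAdmin.removeRSGroup(group);
        }
      }
    }
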
2023-07-12 19:17:32,952 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveTables 2023-07-12 19:17:32,953 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//148.251.75.209 move servers [] to rsgroup default 2023-07-12 19:17:32,953 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveServers 2023-07-12 19:17:32,954 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//148.251.75.209 remove rsgroup master 2023-07-12 19:17:32,957 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 19:17:32,957 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-12 19:17:32,959 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-12 19:17:32,961 INFO [Listener at localhost.localdomain/34239] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-12 19:17:32,962 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//148.251.75.209 add rsgroup master 2023-07-12 19:17:32,964 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 19:17:32,964 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 19:17:32,972 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-12 19:17:32,976 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.AddRSGroup 2023-07-12 19:17:32,979 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-12 19:17:32,979 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 19:17:32,981 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//148.251.75.209 move servers [jenkins-hbase20.apache.org:33033] to rsgroup master 2023-07-12 19:17:32,981 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:33033 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-12 19:17:32,981 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] ipc.CallRunner(144): callId: 956 service: MasterService methodName: ExecMasterService size: 120 connection: 148.251.75.209:37696 deadline: 1689190652981, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:33033 is either offline or it does not exist. 2023-07-12 19:17:32,981 WARN [Listener at localhost.localdomain/34239] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:33033 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor56.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:33033 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-12 19:17:32,983 INFO [Listener at localhost.localdomain/34239] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 19:17:32,984 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-12 19:17:32,984 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 19:17:32,984 INFO [Listener at localhost.localdomain/34239] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase20.apache.org:36311, jenkins-hbase20.apache.org:36571, jenkins-hbase20.apache.org:39963, jenkins-hbase20.apache.org:43021], Tables:[hbase:meta, hbase:namespace, unmovedTable, hbase:rsgroup, testRename], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-12 19:17:32,985 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//148.251.75.209 initiates rsgroup info retrieval, group=default 2023-07-12 19:17:32,985 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 19:17:32,989 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'unmovedTable' 2023-07-12 19:17:33,002 INFO [Listener at localhost.localdomain/34239] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testDisabledTableMove Thread=504 (was 504), OpenFileDescriptor=767 (was 747) - OpenFileDescriptor LEAK? -, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=494 (was 494), ProcessCount=170 (was 173), AvailableMemoryMB=3870 (was 3666) - AvailableMemoryMB LEAK? 
- 2023-07-12 19:17:33,003 WARN [Listener at localhost.localdomain/34239] hbase.ResourceChecker(130): Thread=504 is superior to 500 2023-07-12 19:17:33,021 INFO [Listener at localhost.localdomain/34239] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testRSGroupListDoesNotContainFailedTableCreation Thread=504, OpenFileDescriptor=767, MaxFileDescriptor=60000, SystemLoadAverage=494, ProcessCount=170, AvailableMemoryMB=3869 2023-07-12 19:17:33,021 WARN [Listener at localhost.localdomain/34239] hbase.ResourceChecker(130): Thread=504 is superior to 500 2023-07-12 19:17:33,021 INFO [Listener at localhost.localdomain/34239] rsgroup.TestRSGroupsBase(132): testRSGroupListDoesNotContainFailedTableCreation 2023-07-12 19:17:33,025 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-12 19:17:33,025 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 19:17:33,025 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//148.251.75.209 move tables [] to rsgroup default 2023-07-12 19:17:33,025 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-12 19:17:33,025 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveTables 2023-07-12 19:17:33,026 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//148.251.75.209 move servers [] to rsgroup default 2023-07-12 19:17:33,026 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveServers 2023-07-12 19:17:33,027 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//148.251.75.209 remove rsgroup master 2023-07-12 19:17:33,030 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 19:17:33,030 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-12 19:17:33,031 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-12 19:17:33,034 INFO [Listener at localhost.localdomain/34239] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-12 19:17:33,034 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//148.251.75.209 add rsgroup master 2023-07-12 19:17:33,036 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 19:17:33,036 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 19:17:33,037 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-12 19:17:33,039 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.AddRSGroup 2023-07-12 19:17:33,042 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-12 19:17:33,042 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 19:17:33,044 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//148.251.75.209 move servers [jenkins-hbase20.apache.org:33033] to rsgroup master 2023-07-12 19:17:33,044 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:33033 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-12 19:17:33,044 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] ipc.CallRunner(144): callId: 984 service: MasterService methodName: ExecMasterService size: 120 connection: 148.251.75.209:37696 deadline: 1689190653044, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:33033 is either offline or it does not exist. 2023-07-12 19:17:33,044 WARN [Listener at localhost.localdomain/34239] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:33033 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor56.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:33033 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-12 19:17:33,046 INFO [Listener at localhost.localdomain/34239] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 19:17:33,046 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-12 19:17:33,047 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 19:17:33,047 INFO [Listener at localhost.localdomain/34239] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase20.apache.org:36311, jenkins-hbase20.apache.org:36571, jenkins-hbase20.apache.org:39963, jenkins-hbase20.apache.org:43021], Tables:[hbase:meta, hbase:namespace, unmovedTable, hbase:rsgroup, testRename], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-12 19:17:33,048 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//148.251.75.209 initiates rsgroup info retrieval, group=default 2023-07-12 19:17:33,048 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33033] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 19:17:33,048 INFO [Listener at localhost.localdomain/34239] hbase.HBaseTestingUtility(1286): Shutting down minicluster 2023-07-12 19:17:33,048 INFO [Listener at localhost.localdomain/34239] client.ConnectionImplementation(1979): Closing master protocol: MasterService 2023-07-12 19:17:33,049 DEBUG [Listener at localhost.localdomain/34239] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x4683d6ee to 127.0.0.1:52922 2023-07-12 19:17:33,049 DEBUG [Listener at localhost.localdomain/34239] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-12 19:17:33,049 DEBUG [Listener at localhost.localdomain/34239] util.JVMClusterUtil(237): Shutting down HBase Cluster 2023-07-12 19:17:33,050 DEBUG [Listener at localhost.localdomain/34239] util.JVMClusterUtil(257): Found active master hash=1322375203, stopped=false 2023-07-12 19:17:33,050 DEBUG [Listener at localhost.localdomain/34239] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint 2023-07-12 19:17:33,050 DEBUG [Listener at localhost.localdomain/34239] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver 2023-07-12 19:17:33,050 INFO [Listener at localhost.localdomain/34239] master.ServerManager(901): Cluster shutdown requested of master=jenkins-hbase20.apache.org,33033,1689189424308 2023-07-12 19:17:33,052 DEBUG [Listener at localhost.localdomain/34239-EventThread] zookeeper.ZKWatcher(600): regionserver:36571-0x100829d951f0003, quorum=127.0.0.1:52922, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-12 19:17:33,052 DEBUG [Listener at localhost.localdomain/34239-EventThread] zookeeper.ZKWatcher(600): regionserver:39963-0x100829d951f0001, quorum=127.0.0.1:52922, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-12 19:17:33,052 DEBUG [Listener at localhost.localdomain/34239-EventThread] zookeeper.ZKWatcher(600): 
master:33033-0x100829d951f0000, quorum=127.0.0.1:52922, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-12 19:17:33,052 DEBUG [Listener at localhost.localdomain/34239-EventThread] zookeeper.ZKWatcher(600): master:33033-0x100829d951f0000, quorum=127.0.0.1:52922, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-12 19:17:33,052 DEBUG [Listener at localhost.localdomain/34239-EventThread] zookeeper.ZKWatcher(600): regionserver:43021-0x100829d951f0002, quorum=127.0.0.1:52922, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-12 19:17:33,052 INFO [Listener at localhost.localdomain/34239] procedure2.ProcedureExecutor(629): Stopping 2023-07-12 19:17:33,053 DEBUG [Listener at localhost.localdomain/34239-EventThread] zookeeper.ZKWatcher(600): regionserver:36311-0x100829d951f000b, quorum=127.0.0.1:52922, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-12 19:17:33,053 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:39963-0x100829d951f0001, quorum=127.0.0.1:52922, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-12 19:17:33,053 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:33033-0x100829d951f0000, quorum=127.0.0.1:52922, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-12 19:17:33,054 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:36311-0x100829d951f000b, quorum=127.0.0.1:52922, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-12 19:17:33,054 DEBUG [Listener at localhost.localdomain/34239] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x1c69f937 to 127.0.0.1:52922 2023-07-12 19:17:33,054 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:43021-0x100829d951f0002, quorum=127.0.0.1:52922, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-12 19:17:33,054 DEBUG [Listener at localhost.localdomain/34239] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-12 19:17:33,054 INFO [Listener at localhost.localdomain/34239] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase20.apache.org,39963,1689189426501' ***** 2023-07-12 19:17:33,055 INFO [Listener at localhost.localdomain/34239] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-12 19:17:33,055 INFO [Listener at localhost.localdomain/34239] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase20.apache.org,43021,1689189426641' ***** 2023-07-12 19:17:33,055 INFO [Listener at localhost.localdomain/34239] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-12 19:17:33,055 INFO [RS:0;jenkins-hbase20:39963] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-12 19:17:33,055 INFO [Listener at localhost.localdomain/34239] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase20.apache.org,36571,1689189426727' ***** 2023-07-12 19:17:33,055 INFO [RS:1;jenkins-hbase20:43021] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-12 19:17:33,057 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:36571-0x100829d951f0003, quorum=127.0.0.1:52922, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-12 19:17:33,055 
INFO [Listener at localhost.localdomain/34239] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-12 19:17:33,062 INFO [Listener at localhost.localdomain/34239] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase20.apache.org,36311,1689189430768' ***** 2023-07-12 19:17:33,064 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-12 19:17:33,064 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-12 19:17:33,064 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-12 19:17:33,064 INFO [regionserver/jenkins-hbase20:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-12 19:17:33,062 INFO [RS:2;jenkins-hbase20:36571] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-12 19:17:33,064 INFO [regionserver/jenkins-hbase20:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-12 19:17:33,064 INFO [regionserver/jenkins-hbase20:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-12 19:17:33,070 INFO [regionserver/jenkins-hbase20:0.Chore.1] hbase.ScheduledChore(146): Chore: CompactionChecker was stopped 2023-07-12 19:17:33,070 INFO [regionserver/jenkins-hbase20:0.Chore.1] hbase.ScheduledChore(146): Chore: MemstoreFlusherChore was stopped 2023-07-12 19:17:33,064 INFO [Listener at localhost.localdomain/34239] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-12 19:17:33,072 INFO [RS:3;jenkins-hbase20:36311] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-12 19:17:33,080 INFO [RS:3;jenkins-hbase20:36311] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@4d30d5bf{regionserver,/,null,STOPPED}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver} 2023-07-12 19:17:33,080 INFO [RS:1;jenkins-hbase20:43021] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@76231043{regionserver,/,null,STOPPED}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver} 2023-07-12 19:17:33,080 INFO [RS:0;jenkins-hbase20:39963] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@50272e17{regionserver,/,null,STOPPED}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver} 2023-07-12 19:17:33,080 INFO [RS:2;jenkins-hbase20:36571] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@2c66c00c{regionserver,/,null,STOPPED}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver} 2023-07-12 19:17:33,085 INFO [RS:1;jenkins-hbase20:43021] server.AbstractConnector(383): Stopped ServerConnector@442ee3d1{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-12 19:17:33,085 INFO [RS:0;jenkins-hbase20:39963] server.AbstractConnector(383): Stopped ServerConnector@6fa562a1{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-12 19:17:33,086 INFO [RS:1;jenkins-hbase20:43021] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-12 19:17:33,086 INFO [RS:0;jenkins-hbase20:39963] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-12 19:17:33,088 INFO [RS:1;jenkins-hbase20:43021] handler.ContextHandler(1159): Stopped 
o.a.h.t.o.e.j.s.ServletContextHandler@50be8b8f{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,STOPPED} 2023-07-12 19:17:33,088 INFO [RS:0;jenkins-hbase20:39963] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@77d4261c{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,STOPPED} 2023-07-12 19:17:33,089 INFO [RS:1;jenkins-hbase20:43021] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@93eeee4{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/29ea73fb-101e-b512-aded-a1ff34bb26e9/hadoop.log.dir/,STOPPED} 2023-07-12 19:17:33,090 INFO [RS:0;jenkins-hbase20:39963] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@146a5e1f{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/29ea73fb-101e-b512-aded-a1ff34bb26e9/hadoop.log.dir/,STOPPED} 2023-07-12 19:17:33,091 INFO [RS:2;jenkins-hbase20:36571] server.AbstractConnector(383): Stopped ServerConnector@4ba91794{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-12 19:17:33,091 INFO [RS:3;jenkins-hbase20:36311] server.AbstractConnector(383): Stopped ServerConnector@29e86b8d{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-12 19:17:33,091 INFO [RS:2;jenkins-hbase20:36571] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-12 19:17:33,091 INFO [RS:3;jenkins-hbase20:36311] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-12 19:17:33,092 INFO [RS:2;jenkins-hbase20:36571] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@2937f1b{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,STOPPED} 2023-07-12 19:17:33,092 INFO [RS:3;jenkins-hbase20:36311] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@41054beb{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,STOPPED} 2023-07-12 19:17:33,094 INFO [RS:3;jenkins-hbase20:36311] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@b624c2b{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/29ea73fb-101e-b512-aded-a1ff34bb26e9/hadoop.log.dir/,STOPPED} 2023-07-12 19:17:33,094 INFO [RS:2;jenkins-hbase20:36571] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@70301262{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/29ea73fb-101e-b512-aded-a1ff34bb26e9/hadoop.log.dir/,STOPPED} 2023-07-12 19:17:33,095 INFO [RS:2;jenkins-hbase20:36571] regionserver.HeapMemoryManager(220): Stopping 2023-07-12 19:17:33,095 INFO [RS:2;jenkins-hbase20:36571] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-12 19:17:33,095 INFO [RS:2;jenkins-hbase20:36571] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 
2023-07-12 19:17:33,096 INFO [RS:2;jenkins-hbase20:36571] regionserver.HRegionServer(1144): stopping server jenkins-hbase20.apache.org,36571,1689189426727 2023-07-12 19:17:33,096 DEBUG [RS:2;jenkins-hbase20:36571] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x370a8bb1 to 127.0.0.1:52922 2023-07-12 19:17:33,096 DEBUG [RS:2;jenkins-hbase20:36571] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-12 19:17:33,096 INFO [RS:2;jenkins-hbase20:36571] regionserver.HRegionServer(1170): stopping server jenkins-hbase20.apache.org,36571,1689189426727; all regions closed. 2023-07-12 19:17:33,096 INFO [RS:0;jenkins-hbase20:39963] regionserver.HeapMemoryManager(220): Stopping 2023-07-12 19:17:33,096 INFO [RS:0;jenkins-hbase20:39963] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-12 19:17:33,096 INFO [RS:0;jenkins-hbase20:39963] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-12 19:17:33,096 INFO [RS:3;jenkins-hbase20:36311] regionserver.HeapMemoryManager(220): Stopping 2023-07-12 19:17:33,096 INFO [RS:0;jenkins-hbase20:39963] regionserver.HRegionServer(3305): Received CLOSE for aeeb3efc5a8573e6eca018aeb06a2077 2023-07-12 19:17:33,096 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-12 19:17:33,097 INFO [RS:3;jenkins-hbase20:36311] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-12 19:17:33,097 INFO [RS:3;jenkins-hbase20:36311] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-12 19:17:33,096 INFO [RS:1;jenkins-hbase20:43021] regionserver.HeapMemoryManager(220): Stopping 2023-07-12 19:17:33,097 INFO [RS:3;jenkins-hbase20:36311] regionserver.HRegionServer(1144): stopping server jenkins-hbase20.apache.org,36311,1689189430768 2023-07-12 19:17:33,097 INFO [RS:1;jenkins-hbase20:43021] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-12 19:17:33,097 DEBUG [RS:3;jenkins-hbase20:36311] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x3a975367 to 127.0.0.1:52922 2023-07-12 19:17:33,097 INFO [RS:1;jenkins-hbase20:43021] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-12 19:17:33,097 INFO [RS:1;jenkins-hbase20:43021] regionserver.HRegionServer(3305): Received CLOSE for 396ab33375d72981083bc36f18ff15d4 2023-07-12 19:17:33,097 INFO [RS:1;jenkins-hbase20:43021] regionserver.HRegionServer(3305): Received CLOSE for 80f898828c5a9814a93d19dfb7ad9318 2023-07-12 19:17:33,097 INFO [RS:1;jenkins-hbase20:43021] regionserver.HRegionServer(3305): Received CLOSE for 845df8e2a52065b03d70b26a7a732653 2023-07-12 19:17:33,097 INFO [RS:1;jenkins-hbase20:43021] regionserver.HRegionServer(1144): stopping server jenkins-hbase20.apache.org,43021,1689189426641 2023-07-12 19:17:33,097 DEBUG [RS:1;jenkins-hbase20:43021] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x736bb31c to 127.0.0.1:52922 2023-07-12 19:17:33,097 DEBUG [RS:1;jenkins-hbase20:43021] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-12 19:17:33,097 INFO [RS:1;jenkins-hbase20:43021] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-12 19:17:33,098 INFO [RS:1;jenkins-hbase20:43021] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 
2023-07-12 19:17:33,098 INFO [RS:1;jenkins-hbase20:43021] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-12 19:17:33,098 INFO [RS:1;jenkins-hbase20:43021] regionserver.HRegionServer(3305): Received CLOSE for 1588230740 2023-07-12 19:17:33,097 DEBUG [RS:3;jenkins-hbase20:36311] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-12 19:17:33,098 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing aeeb3efc5a8573e6eca018aeb06a2077, disabling compactions & flushes 2023-07-12 19:17:33,097 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing 396ab33375d72981083bc36f18ff15d4, disabling compactions & flushes 2023-07-12 19:17:33,097 INFO [RS:0;jenkins-hbase20:39963] regionserver.HRegionServer(1144): stopping server jenkins-hbase20.apache.org,39963,1689189426501 2023-07-12 19:17:33,099 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1689189429409.396ab33375d72981083bc36f18ff15d4. 2023-07-12 19:17:33,099 INFO [RS:1;jenkins-hbase20:43021] regionserver.HRegionServer(1474): Waiting on 4 regions to close 2023-07-12 19:17:33,099 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region testRename,,1689189445627.aeeb3efc5a8573e6eca018aeb06a2077. 2023-07-12 19:17:33,099 DEBUG [RS:1;jenkins-hbase20:43021] regionserver.HRegionServer(1478): Online Regions={396ab33375d72981083bc36f18ff15d4=hbase:rsgroup,,1689189429409.396ab33375d72981083bc36f18ff15d4., 80f898828c5a9814a93d19dfb7ad9318=hbase:namespace,,1689189429517.80f898828c5a9814a93d19dfb7ad9318., 845df8e2a52065b03d70b26a7a732653=unmovedTable,,1689189447319.845df8e2a52065b03d70b26a7a732653., 1588230740=hbase:meta,,1.1588230740} 2023-07-12 19:17:33,099 INFO [RS:3;jenkins-hbase20:36311] regionserver.HRegionServer(1170): stopping server jenkins-hbase20.apache.org,36311,1689189430768; all regions closed. 2023-07-12 19:17:33,099 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-12 19:17:33,099 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on testRename,,1689189445627.aeeb3efc5a8573e6eca018aeb06a2077. 2023-07-12 19:17:33,099 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1689189429409.396ab33375d72981083bc36f18ff15d4. 2023-07-12 19:17:33,099 DEBUG [RS:0;jenkins-hbase20:39963] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x5b93cbc6 to 127.0.0.1:52922 2023-07-12 19:17:33,099 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1689189429409.396ab33375d72981083bc36f18ff15d4. after waiting 0 ms 2023-07-12 19:17:33,099 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on testRename,,1689189445627.aeeb3efc5a8573e6eca018aeb06a2077. 
after waiting 0 ms 2023-07-12 19:17:33,099 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-12 19:17:33,099 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-12 19:17:33,099 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region testRename,,1689189445627.aeeb3efc5a8573e6eca018aeb06a2077. 2023-07-12 19:17:33,099 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1689189429409.396ab33375d72981083bc36f18ff15d4. 2023-07-12 19:17:33,099 DEBUG [RS:0;jenkins-hbase20:39963] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-12 19:17:33,100 DEBUG [RS:1;jenkins-hbase20:43021] regionserver.HRegionServer(1504): Waiting on 1588230740, 396ab33375d72981083bc36f18ff15d4, 80f898828c5a9814a93d19dfb7ad9318, 845df8e2a52065b03d70b26a7a732653 2023-07-12 19:17:33,099 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-12 19:17:33,100 INFO [RS:0;jenkins-hbase20:39963] regionserver.HRegionServer(1474): Waiting on 1 regions to close 2023-07-12 19:17:33,100 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(2745): Flushing 396ab33375d72981083bc36f18ff15d4 1/1 column families, dataSize=28.79 KB heapSize=47.27 KB 2023-07-12 19:17:33,100 DEBUG [RS:0;jenkins-hbase20:39963] regionserver.HRegionServer(1478): Online Regions={aeeb3efc5a8573e6eca018aeb06a2077=testRename,,1689189445627.aeeb3efc5a8573e6eca018aeb06a2077.} 2023-07-12 19:17:33,100 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-12 19:17:33,100 DEBUG [RS:0;jenkins-hbase20:39963] regionserver.HRegionServer(1504): Waiting on aeeb3efc5a8573e6eca018aeb06a2077 2023-07-12 19:17:33,100 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=79.57 KB heapSize=125.54 KB 2023-07-12 19:17:33,112 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/data/default/testRename/aeeb3efc5a8573e6eca018aeb06a2077/recovered.edits/10.seqid, newMaxSeqId=10, maxSeqId=7 2023-07-12 19:17:33,113 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1838): Closed testRename,,1689189445627.aeeb3efc5a8573e6eca018aeb06a2077. 2023-07-12 19:17:33,113 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for aeeb3efc5a8573e6eca018aeb06a2077: 2023-07-12 19:17:33,114 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.CloseRegionHandler(117): Closed testRename,,1689189445627.aeeb3efc5a8573e6eca018aeb06a2077. 
2023-07-12 19:17:33,120 DEBUG [RS:3;jenkins-hbase20:36311] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/oldWALs 2023-07-12 19:17:33,120 INFO [RS:3;jenkins-hbase20:36311] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase20.apache.org%2C36311%2C1689189430768:(num 1689189431238) 2023-07-12 19:17:33,120 DEBUG [RS:3;jenkins-hbase20:36311] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-12 19:17:33,120 DEBUG [RS:2;jenkins-hbase20:36571] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/oldWALs 2023-07-12 19:17:33,120 INFO [RS:2;jenkins-hbase20:36571] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase20.apache.org%2C36571%2C1689189426727:(num 1689189428954) 2023-07-12 19:17:33,120 DEBUG [RS:2;jenkins-hbase20:36571] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-12 19:17:33,121 INFO [RS:2;jenkins-hbase20:36571] regionserver.LeaseManager(133): Closed leases 2023-07-12 19:17:33,120 INFO [RS:3;jenkins-hbase20:36311] regionserver.LeaseManager(133): Closed leases 2023-07-12 19:17:33,123 INFO [RS:3;jenkins-hbase20:36311] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase20:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-07-12 19:17:33,123 INFO [RS:3;jenkins-hbase20:36311] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-12 19:17:33,123 INFO [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-12 19:17:33,123 INFO [RS:2;jenkins-hbase20:36571] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase20:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-07-12 19:17:33,123 INFO [RS:3;jenkins-hbase20:36311] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-12 19:17:33,123 INFO [RS:3;jenkins-hbase20:36311] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-12 19:17:33,124 INFO [RS:2;jenkins-hbase20:36571] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-12 19:17:33,124 INFO [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-12 19:17:33,124 INFO [RS:2;jenkins-hbase20:36571] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-12 19:17:33,125 INFO [RS:3;jenkins-hbase20:36311] ipc.NettyRpcServer(158): Stopping server on /148.251.75.209:36311 2023-07-12 19:17:33,125 INFO [RS:2;jenkins-hbase20:36571] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 
2023-07-12 19:17:33,127 INFO [RS:2;jenkins-hbase20:36571] ipc.NettyRpcServer(158): Stopping server on /148.251.75.209:36571 2023-07-12 19:17:33,143 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=28.79 KB at sequenceid=95 (bloomFilter=true), to=hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/data/hbase/rsgroup/396ab33375d72981083bc36f18ff15d4/.tmp/m/68cc31c3bf1a4136b3b2d2ad34c9df27 2023-07-12 19:17:33,159 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 68cc31c3bf1a4136b3b2d2ad34c9df27 2023-07-12 19:17:33,161 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/data/hbase/rsgroup/396ab33375d72981083bc36f18ff15d4/.tmp/m/68cc31c3bf1a4136b3b2d2ad34c9df27 as hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/data/hbase/rsgroup/396ab33375d72981083bc36f18ff15d4/m/68cc31c3bf1a4136b3b2d2ad34c9df27 2023-07-12 19:17:33,161 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=73.59 KB at sequenceid=204 (bloomFilter=false), to=hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/data/hbase/meta/1588230740/.tmp/info/b8234558337e4cce9d50b5386e3f4f9b 2023-07-12 19:17:33,163 INFO [regionserver/jenkins-hbase20:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-12 19:17:33,169 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for b8234558337e4cce9d50b5386e3f4f9b 2023-07-12 19:17:33,169 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 68cc31c3bf1a4136b3b2d2ad34c9df27 2023-07-12 19:17:33,169 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/data/hbase/rsgroup/396ab33375d72981083bc36f18ff15d4/m/68cc31c3bf1a4136b3b2d2ad34c9df27, entries=28, sequenceid=95, filesize=6.1 K 2023-07-12 19:17:33,172 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~28.79 KB/29478, heapSize ~47.25 KB/48384, currentSize=0 B/0 for 396ab33375d72981083bc36f18ff15d4 in 72ms, sequenceid=95, compaction requested=false 2023-07-12 19:17:33,185 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/data/hbase/rsgroup/396ab33375d72981083bc36f18ff15d4/recovered.edits/98.seqid, newMaxSeqId=98, maxSeqId=1 2023-07-12 19:17:33,186 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-12 19:17:33,190 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1689189429409.396ab33375d72981083bc36f18ff15d4. 
2023-07-12 19:17:33,190 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for 396ab33375d72981083bc36f18ff15d4: 2023-07-12 19:17:33,190 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.CloseRegionHandler(117): Closed hbase:rsgroup,,1689189429409.396ab33375d72981083bc36f18ff15d4. 2023-07-12 19:17:33,190 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing 80f898828c5a9814a93d19dfb7ad9318, disabling compactions & flushes 2023-07-12 19:17:33,190 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1689189429517.80f898828c5a9814a93d19dfb7ad9318. 2023-07-12 19:17:33,190 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1689189429517.80f898828c5a9814a93d19dfb7ad9318. 2023-07-12 19:17:33,190 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1689189429517.80f898828c5a9814a93d19dfb7ad9318. after waiting 0 ms 2023-07-12 19:17:33,190 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1689189429517.80f898828c5a9814a93d19dfb7ad9318. 2023-07-12 19:17:33,203 DEBUG [Listener at localhost.localdomain/34239-EventThread] zookeeper.ZKWatcher(600): regionserver:39963-0x100829d951f0001, quorum=127.0.0.1:52922, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase20.apache.org,36311,1689189430768 2023-07-12 19:17:33,204 DEBUG [Listener at localhost.localdomain/34239-EventThread] zookeeper.ZKWatcher(600): regionserver:39963-0x100829d951f0001, quorum=127.0.0.1:52922, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-12 19:17:33,204 DEBUG [Listener at localhost.localdomain/34239-EventThread] zookeeper.ZKWatcher(600): regionserver:36311-0x100829d951f000b, quorum=127.0.0.1:52922, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase20.apache.org,36311,1689189430768 2023-07-12 19:17:33,204 DEBUG [Listener at localhost.localdomain/34239-EventThread] zookeeper.ZKWatcher(600): master:33033-0x100829d951f0000, quorum=127.0.0.1:52922, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-12 19:17:33,204 DEBUG [Listener at localhost.localdomain/34239-EventThread] zookeeper.ZKWatcher(600): regionserver:43021-0x100829d951f0002, quorum=127.0.0.1:52922, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase20.apache.org,36311,1689189430768 2023-07-12 19:17:33,204 DEBUG [Listener at localhost.localdomain/34239-EventThread] zookeeper.ZKWatcher(600): regionserver:43021-0x100829d951f0002, quorum=127.0.0.1:52922, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-12 19:17:33,204 DEBUG [Listener at localhost.localdomain/34239-EventThread] zookeeper.ZKWatcher(600): regionserver:36571-0x100829d951f0003, quorum=127.0.0.1:52922, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase20.apache.org,36311,1689189430768 2023-07-12 19:17:33,204 DEBUG [Listener at localhost.localdomain/34239-EventThread] 
zookeeper.ZKWatcher(600): regionserver:36311-0x100829d951f000b, quorum=127.0.0.1:52922, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-12 19:17:33,207 DEBUG [Listener at localhost.localdomain/34239-EventThread] zookeeper.ZKWatcher(600): regionserver:36571-0x100829d951f0003, quorum=127.0.0.1:52922, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-12 19:17:33,207 DEBUG [Listener at localhost.localdomain/34239-EventThread] zookeeper.ZKWatcher(600): regionserver:43021-0x100829d951f0002, quorum=127.0.0.1:52922, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase20.apache.org,36571,1689189426727 2023-07-12 19:17:33,207 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=2 KB at sequenceid=204 (bloomFilter=false), to=hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/data/hbase/meta/1588230740/.tmp/rep_barrier/d20c31cb7d8e47979aea393aa25ab3d1 2023-07-12 19:17:33,208 DEBUG [Listener at localhost.localdomain/34239-EventThread] zookeeper.ZKWatcher(600): regionserver:39963-0x100829d951f0001, quorum=127.0.0.1:52922, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase20.apache.org,36571,1689189426727 2023-07-12 19:17:33,207 DEBUG [Listener at localhost.localdomain/34239-EventThread] zookeeper.ZKWatcher(600): regionserver:36571-0x100829d951f0003, quorum=127.0.0.1:52922, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase20.apache.org,36571,1689189426727 2023-07-12 19:17:33,207 DEBUG [Listener at localhost.localdomain/34239-EventThread] zookeeper.ZKWatcher(600): regionserver:36311-0x100829d951f000b, quorum=127.0.0.1:52922, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase20.apache.org,36571,1689189426727 2023-07-12 19:17:33,210 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase20.apache.org,36311,1689189430768] 2023-07-12 19:17:33,211 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase20.apache.org,36311,1689189430768; numProcessing=1 2023-07-12 19:17:33,213 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/data/hbase/namespace/80f898828c5a9814a93d19dfb7ad9318/recovered.edits/15.seqid, newMaxSeqId=15, maxSeqId=12 2023-07-12 19:17:33,214 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1838): Closed hbase:namespace,,1689189429517.80f898828c5a9814a93d19dfb7ad9318. 2023-07-12 19:17:33,214 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for 80f898828c5a9814a93d19dfb7ad9318: 2023-07-12 19:17:33,214 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.CloseRegionHandler(117): Closed hbase:namespace,,1689189429517.80f898828c5a9814a93d19dfb7ad9318. 
2023-07-12 19:17:33,215 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing 845df8e2a52065b03d70b26a7a732653, disabling compactions & flushes 2023-07-12 19:17:33,215 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region unmovedTable,,1689189447319.845df8e2a52065b03d70b26a7a732653. 2023-07-12 19:17:33,215 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on unmovedTable,,1689189447319.845df8e2a52065b03d70b26a7a732653. 2023-07-12 19:17:33,215 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on unmovedTable,,1689189447319.845df8e2a52065b03d70b26a7a732653. after waiting 0 ms 2023-07-12 19:17:33,215 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region unmovedTable,,1689189447319.845df8e2a52065b03d70b26a7a732653. 2023-07-12 19:17:33,219 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for d20c31cb7d8e47979aea393aa25ab3d1 2023-07-12 19:17:33,220 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/data/default/unmovedTable/845df8e2a52065b03d70b26a7a732653/recovered.edits/10.seqid, newMaxSeqId=10, maxSeqId=7 2023-07-12 19:17:33,221 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1838): Closed unmovedTable,,1689189447319.845df8e2a52065b03d70b26a7a732653. 2023-07-12 19:17:33,221 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for 845df8e2a52065b03d70b26a7a732653: 2023-07-12 19:17:33,221 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.CloseRegionHandler(117): Closed unmovedTable,,1689189447319.845df8e2a52065b03d70b26a7a732653. 
2023-07-12 19:17:33,222 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase20.apache.org,36311,1689189430768 already deleted, retry=false 2023-07-12 19:17:33,222 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase20.apache.org,36311,1689189430768 expired; onlineServers=3 2023-07-12 19:17:33,222 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase20.apache.org,36571,1689189426727] 2023-07-12 19:17:33,222 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase20.apache.org,36571,1689189426727; numProcessing=2 2023-07-12 19:17:33,237 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=3.99 KB at sequenceid=204 (bloomFilter=false), to=hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/data/hbase/meta/1588230740/.tmp/table/1486e31292a749299292b8ba6907b2e8 2023-07-12 19:17:33,243 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 1486e31292a749299292b8ba6907b2e8 2023-07-12 19:17:33,244 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/data/hbase/meta/1588230740/.tmp/info/b8234558337e4cce9d50b5386e3f4f9b as hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/data/hbase/meta/1588230740/info/b8234558337e4cce9d50b5386e3f4f9b 2023-07-12 19:17:33,251 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for b8234558337e4cce9d50b5386e3f4f9b 2023-07-12 19:17:33,251 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/data/hbase/meta/1588230740/info/b8234558337e4cce9d50b5386e3f4f9b, entries=100, sequenceid=204, filesize=16.3 K 2023-07-12 19:17:33,252 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/data/hbase/meta/1588230740/.tmp/rep_barrier/d20c31cb7d8e47979aea393aa25ab3d1 as hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/data/hbase/meta/1588230740/rep_barrier/d20c31cb7d8e47979aea393aa25ab3d1 2023-07-12 19:17:33,258 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for d20c31cb7d8e47979aea393aa25ab3d1 2023-07-12 19:17:33,258 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/data/hbase/meta/1588230740/rep_barrier/d20c31cb7d8e47979aea393aa25ab3d1, entries=18, sequenceid=204, filesize=6.9 K 2023-07-12 19:17:33,259 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/data/hbase/meta/1588230740/.tmp/table/1486e31292a749299292b8ba6907b2e8 as 
hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/data/hbase/meta/1588230740/table/1486e31292a749299292b8ba6907b2e8 2023-07-12 19:17:33,265 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 1486e31292a749299292b8ba6907b2e8 2023-07-12 19:17:33,265 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/data/hbase/meta/1588230740/table/1486e31292a749299292b8ba6907b2e8, entries=31, sequenceid=204, filesize=7.4 K 2023-07-12 19:17:33,266 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~79.57 KB/81484, heapSize ~125.49 KB/128504, currentSize=0 B/0 for 1588230740 in 166ms, sequenceid=204, compaction requested=false 2023-07-12 19:17:33,276 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/data/hbase/meta/1588230740/recovered.edits/207.seqid, newMaxSeqId=207, maxSeqId=1 2023-07-12 19:17:33,277 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-12 19:17:33,278 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-12 19:17:33,278 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-12 19:17:33,278 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] handler.CloseRegionHandler(117): Closed hbase:meta,,1.1588230740 2023-07-12 19:17:33,300 INFO [RS:1;jenkins-hbase20:43021] regionserver.HRegionServer(1170): stopping server jenkins-hbase20.apache.org,43021,1689189426641; all regions closed. 2023-07-12 19:17:33,300 INFO [RS:0;jenkins-hbase20:39963] regionserver.HRegionServer(1170): stopping server jenkins-hbase20.apache.org,39963,1689189426501; all regions closed. 
2023-07-12 19:17:33,310 DEBUG [RS:1;jenkins-hbase20:43021] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/oldWALs 2023-07-12 19:17:33,310 INFO [RS:1;jenkins-hbase20:43021] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase20.apache.org%2C43021%2C1689189426641.meta:.meta(num 1689189429128) 2023-07-12 19:17:33,310 DEBUG [RS:0;jenkins-hbase20:39963] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/oldWALs 2023-07-12 19:17:33,311 INFO [RS:0;jenkins-hbase20:39963] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase20.apache.org%2C39963%2C1689189426501:(num 1689189428954) 2023-07-12 19:17:33,311 DEBUG [RS:0;jenkins-hbase20:39963] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-12 19:17:33,311 INFO [RS:0;jenkins-hbase20:39963] regionserver.LeaseManager(133): Closed leases 2023-07-12 19:17:33,311 INFO [RS:0;jenkins-hbase20:39963] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase20:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-07-12 19:17:33,311 INFO [RS:0;jenkins-hbase20:39963] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-12 19:17:33,311 INFO [RS:0;jenkins-hbase20:39963] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-12 19:17:33,311 INFO [RS:0;jenkins-hbase20:39963] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-12 19:17:33,312 INFO [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-12 19:17:33,312 INFO [RS:0;jenkins-hbase20:39963] ipc.NettyRpcServer(158): Stopping server on /148.251.75.209:39963 2023-07-12 19:17:33,320 INFO [RS:2;jenkins-hbase20:36571] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase20.apache.org,36571,1689189426727; zookeeper connection closed. 
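The AbstractFSWAL entries above show each region server closing its write-ahead log and moving the final WAL file into the shared oldWALs directory on HDFS. A hedged sketch of inspecting that directory with the plain Hadoop FileSystem API follows; the namenode address and path are copied from the log lines above and are specific to this run, and the class name ListOldWals is illustrative.

  import java.net.URI;
  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.fs.FileStatus;
  import org.apache.hadoop.fs.FileSystem;
  import org.apache.hadoop.fs.Path;

  public class ListOldWals {
    public static void main(String[] args) throws Exception {
      Configuration conf = new Configuration();
      // Filesystem URI and test-data path taken from the log; they differ per run.
      FileSystem fs = FileSystem.get(URI.create("hdfs://localhost.localdomain:43233"), conf);
      Path oldWals = new Path("/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/oldWALs");
      for (FileStatus status : fs.listStatus(oldWals)) {
        // Archived WAL file names encode the owning server name and a timestamp,
        // matching the AsyncFSWAL identifiers printed in the entries above.
        System.out.println(status.getPath().getName() + " " + status.getLen());
      }
    }
  }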
2023-07-12 19:17:33,320 DEBUG [Listener at localhost.localdomain/34239-EventThread] zookeeper.ZKWatcher(600): regionserver:36571-0x100829d951f0003, quorum=127.0.0.1:52922, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-12 19:17:33,320 DEBUG [Listener at localhost.localdomain/34239-EventThread] zookeeper.ZKWatcher(600): regionserver:36571-0x100829d951f0003, quorum=127.0.0.1:52922, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-12 19:17:33,320 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@3198cfd] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@3198cfd 2023-07-12 19:17:33,322 DEBUG [RS:1;jenkins-hbase20:43021] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/oldWALs 2023-07-12 19:17:33,322 INFO [RS:1;jenkins-hbase20:43021] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase20.apache.org%2C43021%2C1689189426641:(num 1689189428954) 2023-07-12 19:17:33,322 DEBUG [RS:1;jenkins-hbase20:43021] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-12 19:17:33,322 INFO [RS:1;jenkins-hbase20:43021] regionserver.LeaseManager(133): Closed leases 2023-07-12 19:17:33,322 INFO [RS:1;jenkins-hbase20:43021] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase20:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-07-12 19:17:33,322 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase20.apache.org,36571,1689189426727 already deleted, retry=false 2023-07-12 19:17:33,322 INFO [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 
2023-07-12 19:17:33,323 INFO [RS:1;jenkins-hbase20:43021] ipc.NettyRpcServer(158): Stopping server on /148.251.75.209:43021 2023-07-12 19:17:33,322 DEBUG [Listener at localhost.localdomain/34239-EventThread] zookeeper.ZKWatcher(600): regionserver:39963-0x100829d951f0001, quorum=127.0.0.1:52922, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase20.apache.org,39963,1689189426501 2023-07-12 19:17:33,322 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase20.apache.org,36571,1689189426727 expired; onlineServers=2 2023-07-12 19:17:33,322 DEBUG [Listener at localhost.localdomain/34239-EventThread] zookeeper.ZKWatcher(600): regionserver:43021-0x100829d951f0002, quorum=127.0.0.1:52922, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase20.apache.org,39963,1689189426501 2023-07-12 19:17:33,322 DEBUG [Listener at localhost.localdomain/34239-EventThread] zookeeper.ZKWatcher(600): master:33033-0x100829d951f0000, quorum=127.0.0.1:52922, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-12 19:17:33,327 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase20.apache.org,39963,1689189426501] 2023-07-12 19:17:33,327 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase20.apache.org,39963,1689189426501; numProcessing=3 2023-07-12 19:17:33,425 DEBUG [Listener at localhost.localdomain/34239-EventThread] zookeeper.ZKWatcher(600): regionserver:39963-0x100829d951f0001, quorum=127.0.0.1:52922, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-12 19:17:33,425 DEBUG [Listener at localhost.localdomain/34239-EventThread] zookeeper.ZKWatcher(600): regionserver:39963-0x100829d951f0001, quorum=127.0.0.1:52922, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-12 19:17:33,426 INFO [RS:0;jenkins-hbase20:39963] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase20.apache.org,39963,1689189426501; zookeeper connection closed. 
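The ZKWatcher and RegionServerTracker entries above trace how the master learns that a region server has gone away: the server's ephemeral znode under /hbase/rs is deleted, NodeDeleted and NodeChildrenChanged events fire on the watchers, and the master processes the expiration. Below is a minimal standalone sketch that observes the same events with the raw ZooKeeper client rather than HBase's ZKWatcher; the quorum address 127.0.0.1:52922 is taken from the log and is run-specific, and the class name RsNodeWatcher is illustrative.

  import java.util.List;
  import java.util.concurrent.CountDownLatch;
  import org.apache.zookeeper.WatchedEvent;
  import org.apache.zookeeper.Watcher;
  import org.apache.zookeeper.ZooKeeper;

  public class RsNodeWatcher {
    public static void main(String[] args) throws Exception {
      CountDownLatch connected = new CountDownLatch(1);
      // Session watcher: wait until the client reaches SyncConnected.
      ZooKeeper zk = new ZooKeeper("127.0.0.1:52922", 30000, event -> {
        if (event.getState() == Watcher.Event.KeeperState.SyncConnected) {
          connected.countDown();
        }
      });
      connected.await();
      // Register a one-shot watch on /hbase/rs; a NodeChildrenChanged event fires
      // the next time a region server's ephemeral child znode appears or is deleted.
      List<String> servers = zk.getChildren("/hbase/rs",
          (WatchedEvent event) -> System.out.println("event: " + event.getType() + " " + event.getPath()));
      System.out.println("online region servers: " + servers);
    }
  }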
2023-07-12 19:17:33,426 DEBUG [Listener at localhost.localdomain/34239-EventThread] zookeeper.ZKWatcher(600): master:33033-0x100829d951f0000, quorum=127.0.0.1:52922, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-12 19:17:33,426 DEBUG [Listener at localhost.localdomain/34239-EventThread] zookeeper.ZKWatcher(600): regionserver:43021-0x100829d951f0002, quorum=127.0.0.1:52922, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase20.apache.org,43021,1689189426641 2023-07-12 19:17:33,426 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase20.apache.org,39963,1689189426501 already deleted, retry=false 2023-07-12 19:17:33,426 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase20.apache.org,39963,1689189426501 expired; onlineServers=1 2023-07-12 19:17:33,427 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase20.apache.org,43021,1689189426641] 2023-07-12 19:17:33,427 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase20.apache.org,43021,1689189426641; numProcessing=4 2023-07-12 19:17:33,428 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase20.apache.org,43021,1689189426641 already deleted, retry=false 2023-07-12 19:17:33,428 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase20.apache.org,43021,1689189426641 expired; onlineServers=0 2023-07-12 19:17:33,428 INFO [RegionServerTracker-0] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase20.apache.org,33033,1689189424308' ***** 2023-07-12 19:17:33,428 INFO [RegionServerTracker-0] regionserver.HRegionServer(2311): STOPPED: Cluster shutdown set; onlineServer=0 2023-07-12 19:17:33,428 DEBUG [M:0;jenkins-hbase20:33033] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@fecd218, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase20.apache.org/148.251.75.209:0 2023-07-12 19:17:33,428 INFO [M:0;jenkins-hbase20:33033] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-12 19:17:33,434 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@f17dce0] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@f17dce0 2023-07-12 19:17:33,436 INFO [M:0;jenkins-hbase20:33033] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@427e7903{master,/,null,STOPPED}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/master} 2023-07-12 19:17:33,437 INFO [M:0;jenkins-hbase20:33033] server.AbstractConnector(383): Stopped ServerConnector@7766b5d1{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-12 19:17:33,437 INFO [M:0;jenkins-hbase20:33033] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-12 19:17:33,438 INFO [M:0;jenkins-hbase20:33033] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@38ff9bc9{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,STOPPED} 2023-07-12 19:17:33,439 INFO [M:0;jenkins-hbase20:33033] 
handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@12d9dc59{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/29ea73fb-101e-b512-aded-a1ff34bb26e9/hadoop.log.dir/,STOPPED} 2023-07-12 19:17:33,439 INFO [M:0;jenkins-hbase20:33033] regionserver.HRegionServer(1144): stopping server jenkins-hbase20.apache.org,33033,1689189424308 2023-07-12 19:17:33,439 INFO [M:0;jenkins-hbase20:33033] regionserver.HRegionServer(1170): stopping server jenkins-hbase20.apache.org,33033,1689189424308; all regions closed. 2023-07-12 19:17:33,439 DEBUG [M:0;jenkins-hbase20:33033] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-12 19:17:33,439 INFO [M:0;jenkins-hbase20:33033] master.HMaster(1491): Stopping master jetty server 2023-07-12 19:17:33,440 INFO [M:0;jenkins-hbase20:33033] server.AbstractConnector(383): Stopped ServerConnector@4805be52{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-12 19:17:33,440 DEBUG [M:0;jenkins-hbase20:33033] cleaner.LogCleaner(198): Cancelling LogCleaner 2023-07-12 19:17:33,441 WARN [OldWALsCleaner-0] cleaner.LogCleaner(186): Interrupted while cleaning old WALs, will try to clean it next round. Exiting. 2023-07-12 19:17:33,441 DEBUG [M:0;jenkins-hbase20:33033] cleaner.HFileCleaner(317): Stopping file delete threads 2023-07-12 19:17:33,441 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster-HFileCleaner.large.0-1689189428342] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase20:0:becomeActiveMaster-HFileCleaner.large.0-1689189428342,5,FailOnTimeoutGroup] 2023-07-12 19:17:33,441 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster-HFileCleaner.small.0-1689189428347] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase20:0:becomeActiveMaster-HFileCleaner.small.0-1689189428347,5,FailOnTimeoutGroup] 2023-07-12 19:17:33,441 INFO [M:0;jenkins-hbase20:33033] master.MasterMobCompactionThread(168): Waiting for Mob Compaction Thread to finish... 2023-07-12 19:17:33,441 INFO [M:0;jenkins-hbase20:33033] master.MasterMobCompactionThread(168): Waiting for Region Server Mob Compaction Thread to finish... 2023-07-12 19:17:33,441 INFO [M:0;jenkins-hbase20:33033] hbase.ChoreService(369): Chore service for: master/jenkins-hbase20:0 had [] on shutdown 2023-07-12 19:17:33,441 DEBUG [M:0;jenkins-hbase20:33033] master.HMaster(1512): Stopping service threads 2023-07-12 19:17:33,441 INFO [M:0;jenkins-hbase20:33033] procedure2.RemoteProcedureDispatcher(119): Stopping procedure remote dispatcher 2023-07-12 19:17:33,441 ERROR [M:0;jenkins-hbase20:33033] procedure2.ProcedureExecutor(653): ThreadGroup java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] contains running threads; null: See STDOUT java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] Thread[HFileArchiver-1,5,PEWorkerGroup] Thread[HFileArchiver-2,5,PEWorkerGroup] Thread[HFileArchiver-3,5,PEWorkerGroup] Thread[HFileArchiver-4,5,PEWorkerGroup] Thread[HFileArchiver-5,5,PEWorkerGroup] Thread[HFileArchiver-6,5,PEWorkerGroup] Thread[HFileArchiver-7,5,PEWorkerGroup] Thread[HFileArchiver-8,5,PEWorkerGroup] 2023-07-12 19:17:33,442 INFO [M:0;jenkins-hbase20:33033] region.RegionProcedureStore(113): Stopping the Region Procedure Store, isAbort=false 2023-07-12 19:17:33,442 DEBUG [normalizer-worker-0] normalizer.RegionNormalizerWorker(174): interrupt detected. terminating. 
2023-07-12 19:17:33,443 DEBUG [Listener at localhost.localdomain/34239-EventThread] zookeeper.ZKWatcher(600): master:33033-0x100829d951f0000, quorum=127.0.0.1:52922, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/master 2023-07-12 19:17:33,443 DEBUG [Listener at localhost.localdomain/34239-EventThread] zookeeper.ZKWatcher(600): master:33033-0x100829d951f0000, quorum=127.0.0.1:52922, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-12 19:17:33,443 DEBUG [M:0;jenkins-hbase20:33033] zookeeper.RecoverableZooKeeper(172): Node /hbase/master already deleted, retry=false 2023-07-12 19:17:33,443 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:33033-0x100829d951f0000, quorum=127.0.0.1:52922, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-12 19:17:33,443 DEBUG [M:0;jenkins-hbase20:33033] master.ActiveMasterManager(335): master:33033-0x100829d951f0000, quorum=127.0.0.1:52922, baseZNode=/hbase Failed delete of our master address node; KeeperErrorCode = NoNode for /hbase/master 2023-07-12 19:17:33,443 INFO [M:0;jenkins-hbase20:33033] assignment.AssignmentManager(315): Stopping assignment manager 2023-07-12 19:17:33,444 INFO [M:0;jenkins-hbase20:33033] region.MasterRegion(167): Closing local region {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, isAbort=false 2023-07-12 19:17:33,444 DEBUG [M:0;jenkins-hbase20:33033] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-07-12 19:17:33,444 INFO [M:0;jenkins-hbase20:33033] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-12 19:17:33,444 DEBUG [M:0;jenkins-hbase20:33033] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-12 19:17:33,444 DEBUG [M:0;jenkins-hbase20:33033] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-07-12 19:17:33,444 DEBUG [M:0;jenkins-hbase20:33033] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
2023-07-12 19:17:33,444 INFO [M:0;jenkins-hbase20:33033] regionserver.HRegion(2745): Flushing 1595e783b53d99cd5eef43b6debb2682 1/1 column families, dataSize=510.79 KB heapSize=611.08 KB 2023-07-12 19:17:33,466 INFO [M:0;jenkins-hbase20:33033] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=510.79 KB at sequenceid=1128 (bloomFilter=true), to=hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/7bf3819583dd4ed7b1b7faec6df029b7 2023-07-12 19:17:33,472 DEBUG [M:0;jenkins-hbase20:33033] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/7bf3819583dd4ed7b1b7faec6df029b7 as hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/7bf3819583dd4ed7b1b7faec6df029b7 2023-07-12 19:17:33,479 INFO [M:0;jenkins-hbase20:33033] regionserver.HStore(1080): Added hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/7bf3819583dd4ed7b1b7faec6df029b7, entries=151, sequenceid=1128, filesize=26.7 K 2023-07-12 19:17:33,480 INFO [M:0;jenkins-hbase20:33033] regionserver.HRegion(2948): Finished flush of dataSize ~510.79 KB/523044, heapSize ~611.06 KB/625728, currentSize=0 B/0 for 1595e783b53d99cd5eef43b6debb2682 in 36ms, sequenceid=1128, compaction requested=false 2023-07-12 19:17:33,483 INFO [M:0;jenkins-hbase20:33033] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-12 19:17:33,483 DEBUG [M:0;jenkins-hbase20:33033] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-12 19:17:33,491 INFO [master:store-WAL-Roller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-12 19:17:33,491 INFO [M:0;jenkins-hbase20:33033] flush.MasterFlushTableProcedureManager(83): stop: server shutting down. 2023-07-12 19:17:33,492 INFO [M:0;jenkins-hbase20:33033] ipc.NettyRpcServer(158): Stopping server on /148.251.75.209:33033 2023-07-12 19:17:33,493 DEBUG [M:0;jenkins-hbase20:33033] zookeeper.RecoverableZooKeeper(172): Node /hbase/rs/jenkins-hbase20.apache.org,33033,1689189424308 already deleted, retry=false 2023-07-12 19:17:33,653 DEBUG [Listener at localhost.localdomain/34239-EventThread] zookeeper.ZKWatcher(600): master:33033-0x100829d951f0000, quorum=127.0.0.1:52922, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-12 19:17:33,653 INFO [M:0;jenkins-hbase20:33033] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase20.apache.org,33033,1689189424308; zookeeper connection closed. 
2023-07-12 19:17:33,653 DEBUG [Listener at localhost.localdomain/34239-EventThread] zookeeper.ZKWatcher(600): master:33033-0x100829d951f0000, quorum=127.0.0.1:52922, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-12 19:17:33,753 DEBUG [Listener at localhost.localdomain/34239-EventThread] zookeeper.ZKWatcher(600): regionserver:43021-0x100829d951f0002, quorum=127.0.0.1:52922, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-12 19:17:33,753 INFO [RS:1;jenkins-hbase20:43021] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase20.apache.org,43021,1689189426641; zookeeper connection closed. 2023-07-12 19:17:33,753 DEBUG [Listener at localhost.localdomain/34239-EventThread] zookeeper.ZKWatcher(600): regionserver:43021-0x100829d951f0002, quorum=127.0.0.1:52922, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-12 19:17:33,755 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@7964142] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@7964142 2023-07-12 19:17:33,853 DEBUG [Listener at localhost.localdomain/34239-EventThread] zookeeper.ZKWatcher(600): regionserver:36311-0x100829d951f000b, quorum=127.0.0.1:52922, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-12 19:17:33,853 INFO [RS:3;jenkins-hbase20:36311] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase20.apache.org,36311,1689189430768; zookeeper connection closed. 2023-07-12 19:17:33,853 DEBUG [Listener at localhost.localdomain/34239-EventThread] zookeeper.ZKWatcher(600): regionserver:36311-0x100829d951f000b, quorum=127.0.0.1:52922, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-12 19:17:33,854 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@7a0cc666] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@7a0cc666 2023-07-12 19:17:33,854 INFO [Listener at localhost.localdomain/34239] util.JVMClusterUtil(335): Shutdown of 1 master(s) and 4 regionserver(s) complete 2023-07-12 19:17:33,855 WARN [Listener at localhost.localdomain/34239] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-12 19:17:33,863 INFO [Listener at localhost.localdomain/34239] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-12 19:17:33,863 WARN [417305173@qtp-421994835-1 - Acceptor0 HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:40933] http.HttpServer2$SelectChannelConnectorWithSafeStartup(546): HttpServer Acceptor: isRunning is false. Rechecking. 
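JVMClusterUtil reports the HBase cluster fully stopped at this point, and the following entries tear down the DataNodes and the MiniZK quorum before a fresh minicluster is started for the next part of the test. A hedged sketch of the HBaseTestingUtility calls that drive this cycle from test code is shown below; the option values mirror the StartMiniClusterOption printed in the log, while the class name MiniClusterCycle and the TEST_UTIL variable name are only illustrative.

  import org.apache.hadoop.hbase.HBaseTestingUtility;
  import org.apache.hadoop.hbase.StartMiniClusterOption;

  public class MiniClusterCycle {
    public static void main(String[] args) throws Exception {
      HBaseTestingUtility TEST_UTIL = new HBaseTestingUtility();
      StartMiniClusterOption option = StartMiniClusterOption.builder()
          .numMasters(1)
          .numRegionServers(3)
          .numDataNodes(3)
          .numZkServers(1)
          .build();
      // Starts mini DFS, mini ZooKeeper, one master and three region servers,
      // producing a startup sequence like the one that follows in the log.
      TEST_UTIL.startMiniCluster(option);
      try {
        // ... run assertions against TEST_UTIL.getAdmin() / TEST_UTIL.getConnection() here ...
      } finally {
        // Produces the shutdown and "Minicluster is down" sequence recorded above.
        TEST_UTIL.shutdownMiniCluster();
      }
    }
  }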
2023-07-12 19:17:33,863 WARN [417305173@qtp-421994835-1 - Acceptor0 HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:40933] http.HttpServer2$SelectChannelConnectorWithSafeStartup(555): HttpServer Acceptor: isRunning is false 2023-07-12 19:17:33,968 WARN [BP-1227025609-148.251.75.209-1689189420190 heartbeating to localhost.localdomain/127.0.0.1:43233] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-12 19:17:33,968 WARN [BP-1227025609-148.251.75.209-1689189420190 heartbeating to localhost.localdomain/127.0.0.1:43233] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1227025609-148.251.75.209-1689189420190 (Datanode Uuid d9d04309-0b99-409e-9f32-2d8a3498b1b1) service to localhost.localdomain/127.0.0.1:43233 2023-07-12 19:17:33,971 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/29ea73fb-101e-b512-aded-a1ff34bb26e9/cluster_71dbf4f1-3f31-d11c-63a5-d05d19764ad1/dfs/data/data5/current/BP-1227025609-148.251.75.209-1689189420190] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-12 19:17:33,972 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/29ea73fb-101e-b512-aded-a1ff34bb26e9/cluster_71dbf4f1-3f31-d11c-63a5-d05d19764ad1/dfs/data/data6/current/BP-1227025609-148.251.75.209-1689189420190] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-12 19:17:33,977 WARN [Listener at localhost.localdomain/34239] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-12 19:17:33,982 INFO [Listener at localhost.localdomain/34239] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-12 19:17:34,085 WARN [BP-1227025609-148.251.75.209-1689189420190 heartbeating to localhost.localdomain/127.0.0.1:43233] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-12 19:17:34,085 WARN [BP-1227025609-148.251.75.209-1689189420190 heartbeating to localhost.localdomain/127.0.0.1:43233] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1227025609-148.251.75.209-1689189420190 (Datanode Uuid f2bdddcf-6c44-494a-8b6c-0f3758698d6d) service to localhost.localdomain/127.0.0.1:43233 2023-07-12 19:17:34,086 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/29ea73fb-101e-b512-aded-a1ff34bb26e9/cluster_71dbf4f1-3f31-d11c-63a5-d05d19764ad1/dfs/data/data3/current/BP-1227025609-148.251.75.209-1689189420190] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-12 19:17:34,086 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/29ea73fb-101e-b512-aded-a1ff34bb26e9/cluster_71dbf4f1-3f31-d11c-63a5-d05d19764ad1/dfs/data/data4/current/BP-1227025609-148.251.75.209-1689189420190] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-12 19:17:34,088 WARN [Listener at localhost.localdomain/34239] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-12 19:17:34,090 INFO [Listener at localhost.localdomain/34239] log.Slf4jLog(67): Stopped 
HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-12 19:17:34,194 WARN [BP-1227025609-148.251.75.209-1689189420190 heartbeating to localhost.localdomain/127.0.0.1:43233] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-12 19:17:34,194 WARN [BP-1227025609-148.251.75.209-1689189420190 heartbeating to localhost.localdomain/127.0.0.1:43233] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1227025609-148.251.75.209-1689189420190 (Datanode Uuid 8f687cea-8c39-4290-9745-8ce95d46083e) service to localhost.localdomain/127.0.0.1:43233 2023-07-12 19:17:34,195 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/29ea73fb-101e-b512-aded-a1ff34bb26e9/cluster_71dbf4f1-3f31-d11c-63a5-d05d19764ad1/dfs/data/data1/current/BP-1227025609-148.251.75.209-1689189420190] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-12 19:17:34,196 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/29ea73fb-101e-b512-aded-a1ff34bb26e9/cluster_71dbf4f1-3f31-d11c-63a5-d05d19764ad1/dfs/data/data2/current/BP-1227025609-148.251.75.209-1689189420190] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-12 19:17:34,243 INFO [Listener at localhost.localdomain/34239] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost.localdomain:0 2023-07-12 19:17:34,359 INFO [Listener at localhost.localdomain/34239] zookeeper.MiniZooKeeperCluster(344): Shutdown MiniZK cluster with all ZK servers 2023-07-12 19:17:34,427 INFO [Listener at localhost.localdomain/34239] hbase.HBaseTestingUtility(1293): Minicluster is down 2023-07-12 19:17:34,427 INFO [Listener at localhost.localdomain/34239] hbase.HBaseTestingUtility(1068): Starting up minicluster with option: StartMiniClusterOption{numMasters=1, masterClass=null, numRegionServers=3, rsPorts=, rsClass=null, numDataNodes=3, dataNodeHosts=null, numZkServers=1, createRootDir=false, createWALDir=false} 2023-07-12 19:17:34,427 INFO [Listener at localhost.localdomain/34239] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.log.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/29ea73fb-101e-b512-aded-a1ff34bb26e9/hadoop.log.dir so I do NOT create it in target/test-data/d9099bf2-01da-4522-5f69-2ec5a570bb0d 2023-07-12 19:17:34,427 INFO [Listener at localhost.localdomain/34239] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.tmp.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/29ea73fb-101e-b512-aded-a1ff34bb26e9/hadoop.tmp.dir so I do NOT create it in target/test-data/d9099bf2-01da-4522-5f69-2ec5a570bb0d 2023-07-12 19:17:34,427 INFO [Listener at localhost.localdomain/34239] hbase.HBaseZKTestingUtility(82): Created new mini-cluster data directory: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d9099bf2-01da-4522-5f69-2ec5a570bb0d/cluster_0ac67c39-3f7a-a514-5ccd-24446b69702a, deleteOnExit=true 2023-07-12 19:17:34,427 INFO [Listener at localhost.localdomain/34239] hbase.HBaseTestingUtility(1082): STARTING DFS 2023-07-12 19:17:34,428 INFO [Listener at localhost.localdomain/34239] hbase.HBaseTestingUtility(772): Setting 
test.cache.data to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d9099bf2-01da-4522-5f69-2ec5a570bb0d/test.cache.data in system properties and HBase conf 2023-07-12 19:17:34,428 INFO [Listener at localhost.localdomain/34239] hbase.HBaseTestingUtility(772): Setting hadoop.tmp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d9099bf2-01da-4522-5f69-2ec5a570bb0d/hadoop.tmp.dir in system properties and HBase conf 2023-07-12 19:17:34,428 INFO [Listener at localhost.localdomain/34239] hbase.HBaseTestingUtility(772): Setting hadoop.log.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d9099bf2-01da-4522-5f69-2ec5a570bb0d/hadoop.log.dir in system properties and HBase conf 2023-07-12 19:17:34,428 INFO [Listener at localhost.localdomain/34239] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.local.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d9099bf2-01da-4522-5f69-2ec5a570bb0d/mapreduce.cluster.local.dir in system properties and HBase conf 2023-07-12 19:17:34,428 INFO [Listener at localhost.localdomain/34239] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.temp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d9099bf2-01da-4522-5f69-2ec5a570bb0d/mapreduce.cluster.temp.dir in system properties and HBase conf 2023-07-12 19:17:34,429 INFO [Listener at localhost.localdomain/34239] hbase.HBaseTestingUtility(759): read short circuit is OFF 2023-07-12 19:17:34,429 DEBUG [Listener at localhost.localdomain/34239] fs.HFileSystem(308): The file system is not a DistributedFileSystem. 
Skipping on block location reordering 2023-07-12 19:17:34,429 INFO [Listener at localhost.localdomain/34239] hbase.HBaseTestingUtility(772): Setting yarn.node-labels.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d9099bf2-01da-4522-5f69-2ec5a570bb0d/yarn.node-labels.fs-store.root-dir in system properties and HBase conf 2023-07-12 19:17:34,429 INFO [Listener at localhost.localdomain/34239] hbase.HBaseTestingUtility(772): Setting yarn.node-attribute.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d9099bf2-01da-4522-5f69-2ec5a570bb0d/yarn.node-attribute.fs-store.root-dir in system properties and HBase conf 2023-07-12 19:17:34,429 INFO [Listener at localhost.localdomain/34239] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.log-dirs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d9099bf2-01da-4522-5f69-2ec5a570bb0d/yarn.nodemanager.log-dirs in system properties and HBase conf 2023-07-12 19:17:34,429 INFO [Listener at localhost.localdomain/34239] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d9099bf2-01da-4522-5f69-2ec5a570bb0d/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-07-12 19:17:34,429 INFO [Listener at localhost.localdomain/34239] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.active-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d9099bf2-01da-4522-5f69-2ec5a570bb0d/yarn.timeline-service.entity-group-fs-store.active-dir in system properties and HBase conf 2023-07-12 19:17:34,430 INFO [Listener at localhost.localdomain/34239] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.done-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d9099bf2-01da-4522-5f69-2ec5a570bb0d/yarn.timeline-service.entity-group-fs-store.done-dir in system properties and HBase conf 2023-07-12 19:17:34,430 INFO [Listener at localhost.localdomain/34239] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d9099bf2-01da-4522-5f69-2ec5a570bb0d/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-07-12 19:17:34,430 INFO [Listener at localhost.localdomain/34239] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d9099bf2-01da-4522-5f69-2ec5a570bb0d/dfs.journalnode.edits.dir in system properties and HBase conf 2023-07-12 19:17:34,430 INFO [Listener at localhost.localdomain/34239] hbase.HBaseTestingUtility(772): Setting dfs.datanode.shared.file.descriptor.paths to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d9099bf2-01da-4522-5f69-2ec5a570bb0d/dfs.datanode.shared.file.descriptor.paths in system properties and HBase conf 2023-07-12 19:17:34,430 INFO [Listener at localhost.localdomain/34239] hbase.HBaseTestingUtility(772): Setting nfs.dump.dir to 
/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d9099bf2-01da-4522-5f69-2ec5a570bb0d/nfs.dump.dir in system properties and HBase conf 2023-07-12 19:17:34,430 INFO [Listener at localhost.localdomain/34239] hbase.HBaseTestingUtility(772): Setting java.io.tmpdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d9099bf2-01da-4522-5f69-2ec5a570bb0d/java.io.tmpdir in system properties and HBase conf 2023-07-12 19:17:34,430 INFO [Listener at localhost.localdomain/34239] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d9099bf2-01da-4522-5f69-2ec5a570bb0d/dfs.journalnode.edits.dir in system properties and HBase conf 2023-07-12 19:17:34,430 INFO [Listener at localhost.localdomain/34239] hbase.HBaseTestingUtility(772): Setting dfs.provided.aliasmap.inmemory.leveldb.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d9099bf2-01da-4522-5f69-2ec5a570bb0d/dfs.provided.aliasmap.inmemory.leveldb.dir in system properties and HBase conf 2023-07-12 19:17:34,430 INFO [Listener at localhost.localdomain/34239] hbase.HBaseTestingUtility(772): Setting fs.s3a.committer.staging.tmp.path to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d9099bf2-01da-4522-5f69-2ec5a570bb0d/fs.s3a.committer.staging.tmp.path in system properties and HBase conf Formatting using clusterid: testClusterID 2023-07-12 19:17:34,433 WARN [Listener at localhost.localdomain/34239] conf.Configuration(1701): No unit for dfs.heartbeat.interval(3) assuming SECONDS 2023-07-12 19:17:34,434 WARN [Listener at localhost.localdomain/34239] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-07-12 19:17:34,457 DEBUG [Listener at localhost.localdomain/34239-EventThread] zookeeper.ZKWatcher(600): VerifyingRSGroupAdminClient-0x100829d951f000a, quorum=127.0.0.1:52922, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Disconnected, path=null 2023-07-12 19:17:34,457 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(630): VerifyingRSGroupAdminClient-0x100829d951f000a, quorum=127.0.0.1:52922, baseZNode=/hbase Received Disconnected from ZooKeeper, ignoring 2023-07-12 19:17:34,474 WARN [Listener at localhost.localdomain/34239] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-12 19:17:34,478 INFO [Listener at localhost.localdomain/34239] log.Slf4jLog(67): jetty-6.1.26 2023-07-12 19:17:34,487 INFO [Listener at localhost.localdomain/34239] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/hdfs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d9099bf2-01da-4522-5f69-2ec5a570bb0d/java.io.tmpdir/Jetty_localhost_localdomain_36789_hdfs____.d1wttd/webapp 2023-07-12 19:17:34,543 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(152): Removing adapter for the MetricRegistry: RegionServer,sub=Coprocessor.Region.CP_org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-12 19:17:34,543 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(152): Removing adapter for the MetricRegistry: 
Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver 2023-07-12 19:17:34,543 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(152): Removing adapter for the MetricRegistry: Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint 2023-07-12 19:17:34,573 INFO [Listener at localhost.localdomain/34239] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost.localdomain:36789 2023-07-12 19:17:34,577 WARN [Listener at localhost.localdomain/34239] conf.Configuration(1701): No unit for dfs.heartbeat.interval(3) assuming SECONDS 2023-07-12 19:17:34,577 WARN [Listener at localhost.localdomain/34239] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-07-12 19:17:34,617 WARN [Listener at localhost.localdomain/38007] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-12 19:17:34,631 WARN [Listener at localhost.localdomain/38007] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-12 19:17:34,633 WARN [Listener at localhost.localdomain/38007] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-12 19:17:34,634 INFO [Listener at localhost.localdomain/38007] log.Slf4jLog(67): jetty-6.1.26 2023-07-12 19:17:34,641 INFO [Listener at localhost.localdomain/38007] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d9099bf2-01da-4522-5f69-2ec5a570bb0d/java.io.tmpdir/Jetty_localhost_36801_datanode____.7cwi03/webapp 2023-07-12 19:17:34,718 INFO [Listener at localhost.localdomain/38007] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:36801 2023-07-12 19:17:34,724 WARN [Listener at localhost.localdomain/40977] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-12 19:17:34,743 WARN [Listener at localhost.localdomain/40977] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-datanode.properties,hadoop-metrics2.properties 2023-07-12 19:17:34,801 WARN [Listener at localhost.localdomain/40977] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-12 19:17:34,806 WARN [Listener at localhost.localdomain/40977] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-12 19:17:34,808 INFO [Listener at localhost.localdomain/40977] log.Slf4jLog(67): jetty-6.1.26 2023-07-12 19:17:34,818 INFO [Listener at localhost.localdomain/40977] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d9099bf2-01da-4522-5f69-2ec5a570bb0d/java.io.tmpdir/Jetty_localhost_43703_datanode____5prtoi/webapp 2023-07-12 19:17:34,842 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x17c8b40b0cbaacec: Processing first storage report for DS-fca02bc4-2d41-4686-aa59-e82f4591fab6 from datanode 4570a816-a826-4ebd-81f1-c9aa10d62f4f 
2023-07-12 19:17:34,843 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x17c8b40b0cbaacec: from storage DS-fca02bc4-2d41-4686-aa59-e82f4591fab6 node DatanodeRegistration(127.0.0.1:34477, datanodeUuid=4570a816-a826-4ebd-81f1-c9aa10d62f4f, infoPort=36365, infoSecurePort=0, ipcPort=40977, storageInfo=lv=-57;cid=testClusterID;nsid=1739669361;c=1689189454435), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-12 19:17:34,843 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x17c8b40b0cbaacec: Processing first storage report for DS-0b3e8df0-0f14-4a19-a041-45d445de153f from datanode 4570a816-a826-4ebd-81f1-c9aa10d62f4f 2023-07-12 19:17:34,843 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x17c8b40b0cbaacec: from storage DS-0b3e8df0-0f14-4a19-a041-45d445de153f node DatanodeRegistration(127.0.0.1:34477, datanodeUuid=4570a816-a826-4ebd-81f1-c9aa10d62f4f, infoPort=36365, infoSecurePort=0, ipcPort=40977, storageInfo=lv=-57;cid=testClusterID;nsid=1739669361;c=1689189454435), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-12 19:17:34,911 INFO [Listener at localhost.localdomain/40977] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:43703 2023-07-12 19:17:34,923 WARN [Listener at localhost.localdomain/42045] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-12 19:17:34,951 WARN [Listener at localhost.localdomain/42045] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-12 19:17:34,954 WARN [Listener at localhost.localdomain/42045] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-12 19:17:34,955 INFO [Listener at localhost.localdomain/42045] log.Slf4jLog(67): jetty-6.1.26 2023-07-12 19:17:34,966 INFO [Listener at localhost.localdomain/42045] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d9099bf2-01da-4522-5f69-2ec5a570bb0d/java.io.tmpdir/Jetty_localhost_45817_datanode____.e0blng/webapp 2023-07-12 19:17:35,010 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xb2b72e0bb085b5dc: Processing first storage report for DS-99857824-e819-44ab-a75d-a9efdd44967c from datanode 7d35d8fb-23cc-4979-b994-c57640d611f6 2023-07-12 19:17:35,010 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xb2b72e0bb085b5dc: from storage DS-99857824-e819-44ab-a75d-a9efdd44967c node DatanodeRegistration(127.0.0.1:39465, datanodeUuid=7d35d8fb-23cc-4979-b994-c57640d611f6, infoPort=37427, infoSecurePort=0, ipcPort=42045, storageInfo=lv=-57;cid=testClusterID;nsid=1739669361;c=1689189454435), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-12 19:17:35,010 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xb2b72e0bb085b5dc: Processing first storage report for DS-8fe57699-1a1e-43b0-b757-d97a0591f80b from datanode 7d35d8fb-23cc-4979-b994-c57640d611f6 2023-07-12 19:17:35,010 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 
0xb2b72e0bb085b5dc: from storage DS-8fe57699-1a1e-43b0-b757-d97a0591f80b node DatanodeRegistration(127.0.0.1:39465, datanodeUuid=7d35d8fb-23cc-4979-b994-c57640d611f6, infoPort=37427, infoSecurePort=0, ipcPort=42045, storageInfo=lv=-57;cid=testClusterID;nsid=1739669361;c=1689189454435), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-12 19:17:35,065 INFO [Listener at localhost.localdomain/42045] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:45817 2023-07-12 19:17:35,091 WARN [Listener at localhost.localdomain/37875] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-12 19:17:35,164 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xc92f25820033216f: Processing first storage report for DS-46458d06-2591-4caf-9337-88a74171264e from datanode 67822a5b-6b2d-4bcc-80f8-a196403b515b 2023-07-12 19:17:35,164 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xc92f25820033216f: from storage DS-46458d06-2591-4caf-9337-88a74171264e node DatanodeRegistration(127.0.0.1:39117, datanodeUuid=67822a5b-6b2d-4bcc-80f8-a196403b515b, infoPort=43803, infoSecurePort=0, ipcPort=37875, storageInfo=lv=-57;cid=testClusterID;nsid=1739669361;c=1689189454435), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-12 19:17:35,164 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xc92f25820033216f: Processing first storage report for DS-205862bf-8c20-4672-b197-675a53398a81 from datanode 67822a5b-6b2d-4bcc-80f8-a196403b515b 2023-07-12 19:17:35,165 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xc92f25820033216f: from storage DS-205862bf-8c20-4672-b197-675a53398a81 node DatanodeRegistration(127.0.0.1:39117, datanodeUuid=67822a5b-6b2d-4bcc-80f8-a196403b515b, infoPort=43803, infoSecurePort=0, ipcPort=37875, storageInfo=lv=-57;cid=testClusterID;nsid=1739669361;c=1689189454435), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-12 19:17:35,203 DEBUG [Listener at localhost.localdomain/37875] hbase.HBaseTestingUtility(649): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d9099bf2-01da-4522-5f69-2ec5a570bb0d 2023-07-12 19:17:35,205 INFO [Listener at localhost.localdomain/37875] zookeeper.MiniZooKeeperCluster(258): Started connectionTimeout=30000, dir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d9099bf2-01da-4522-5f69-2ec5a570bb0d/cluster_0ac67c39-3f7a-a514-5ccd-24446b69702a/zookeeper_0, clientPort=51847, secureClientPort=-1, dataDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d9099bf2-01da-4522-5f69-2ec5a570bb0d/cluster_0ac67c39-3f7a-a514-5ccd-24446b69702a/zookeeper_0/version-2, dataDirSize=424 dataLogDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d9099bf2-01da-4522-5f69-2ec5a570bb0d/cluster_0ac67c39-3f7a-a514-5ccd-24446b69702a/zookeeper_0/version-2, dataLogSize=424 tickTime=2000, maxClientCnxns=300, minSessionTimeout=4000, maxSessionTimeout=40000, serverId=0 2023-07-12 19:17:35,207 INFO [Listener at localhost.localdomain/37875] zookeeper.MiniZooKeeperCluster(283): Started MiniZooKeeperCluster and ran 'stat' on client port=51847 2023-07-12 
19:17:35,207 INFO [Listener at localhost.localdomain/37875] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-12 19:17:35,208 INFO [Listener at localhost.localdomain/37875] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-12 19:17:35,227 INFO [Listener at localhost.localdomain/37875] util.FSUtils(471): Created version file at hdfs://localhost.localdomain:38007/user/jenkins/test-data/171d5f8e-5f45-9a5a-0a3b-0ee09af95a3e with version=8 2023-07-12 19:17:35,227 INFO [Listener at localhost.localdomain/37875] hbase.HBaseTestingUtility(1408): The hbase.fs.tmp.dir is set to hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/hbase-staging 2023-07-12 19:17:35,228 DEBUG [Listener at localhost.localdomain/37875] hbase.LocalHBaseCluster(134): Setting Master Port to random. 2023-07-12 19:17:35,229 DEBUG [Listener at localhost.localdomain/37875] hbase.LocalHBaseCluster(141): Setting RegionServer Port to random. 2023-07-12 19:17:35,229 DEBUG [Listener at localhost.localdomain/37875] hbase.LocalHBaseCluster(151): Setting RS InfoServer Port to random. 2023-07-12 19:17:35,229 DEBUG [Listener at localhost.localdomain/37875] hbase.LocalHBaseCluster(159): Setting Master InfoServer Port to random. 2023-07-12 19:17:35,230 INFO [Listener at localhost.localdomain/37875] client.ConnectionUtils(127): master/jenkins-hbase20:0 server-side Connection retries=45 2023-07-12 19:17:35,230 INFO [Listener at localhost.localdomain/37875] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-12 19:17:35,230 INFO [Listener at localhost.localdomain/37875] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-12 19:17:35,231 INFO [Listener at localhost.localdomain/37875] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-12 19:17:35,231 INFO [Listener at localhost.localdomain/37875] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-12 19:17:35,231 INFO [Listener at localhost.localdomain/37875] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-12 19:17:35,231 INFO [Listener at localhost.localdomain/37875] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.MasterService, hbase.pb.RegionServerStatusService, hbase.pb.LockService, hbase.pb.HbckService, hbase.pb.ClientMetaService, hbase.pb.ClientService, hbase.pb.AdminService 2023-07-12 19:17:35,233 INFO [Listener at localhost.localdomain/37875] ipc.NettyRpcServer(120): Bind to /148.251.75.209:40539 2023-07-12 19:17:35,234 INFO [Listener at localhost.localdomain/37875] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-12 19:17:35,235 INFO 
[Listener at localhost.localdomain/37875] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-12 19:17:35,236 INFO [Listener at localhost.localdomain/37875] zookeeper.RecoverableZooKeeper(93): Process identifier=master:40539 connecting to ZooKeeper ensemble=127.0.0.1:51847 2023-07-12 19:17:35,245 DEBUG [Listener at localhost.localdomain/37875-EventThread] zookeeper.ZKWatcher(600): master:405390x0, quorum=127.0.0.1:51847, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-12 19:17:35,249 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): master:40539-0x100829e11e40000 connected 2023-07-12 19:17:35,328 DEBUG [Listener at localhost.localdomain/37875] zookeeper.ZKUtil(164): master:40539-0x100829e11e40000, quorum=127.0.0.1:51847, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-12 19:17:35,328 DEBUG [Listener at localhost.localdomain/37875] zookeeper.ZKUtil(164): master:40539-0x100829e11e40000, quorum=127.0.0.1:51847, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-12 19:17:35,329 DEBUG [Listener at localhost.localdomain/37875] zookeeper.ZKUtil(164): master:40539-0x100829e11e40000, quorum=127.0.0.1:51847, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-12 19:17:35,329 DEBUG [Listener at localhost.localdomain/37875] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=40539 2023-07-12 19:17:35,330 DEBUG [Listener at localhost.localdomain/37875] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=40539 2023-07-12 19:17:35,330 DEBUG [Listener at localhost.localdomain/37875] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=40539 2023-07-12 19:17:35,331 DEBUG [Listener at localhost.localdomain/37875] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=40539 2023-07-12 19:17:35,331 DEBUG [Listener at localhost.localdomain/37875] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=40539 2023-07-12 19:17:35,334 INFO [Listener at localhost.localdomain/37875] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-12 19:17:35,334 INFO [Listener at localhost.localdomain/37875] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-12 19:17:35,334 INFO [Listener at localhost.localdomain/37875] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-12 19:17:35,335 INFO [Listener at localhost.localdomain/37875] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context master 2023-07-12 19:17:35,335 INFO [Listener at localhost.localdomain/37875] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-12 19:17:35,335 INFO [Listener at localhost.localdomain/37875] http.HttpServer(886): Added filter static_user_filter 
(class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-12 19:17:35,335 INFO [Listener at localhost.localdomain/37875] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-12 19:17:35,336 INFO [Listener at localhost.localdomain/37875] http.HttpServer(1146): Jetty bound to port 34451 2023-07-12 19:17:35,336 INFO [Listener at localhost.localdomain/37875] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-12 19:17:35,344 INFO [Listener at localhost.localdomain/37875] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-12 19:17:35,345 INFO [Listener at localhost.localdomain/37875] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@7ea17ba6{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d9099bf2-01da-4522-5f69-2ec5a570bb0d/hadoop.log.dir/,AVAILABLE} 2023-07-12 19:17:35,345 INFO [Listener at localhost.localdomain/37875] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-12 19:17:35,346 INFO [Listener at localhost.localdomain/37875] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@5c44c63b{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,AVAILABLE} 2023-07-12 19:17:35,354 INFO [Listener at localhost.localdomain/37875] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-12 19:17:35,355 INFO [Listener at localhost.localdomain/37875] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-12 19:17:35,355 INFO [Listener at localhost.localdomain/37875] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-12 19:17:35,355 INFO [Listener at localhost.localdomain/37875] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-12 19:17:35,356 INFO [Listener at localhost.localdomain/37875] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-12 19:17:35,357 INFO [Listener at localhost.localdomain/37875] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@1c9fa2d2{master,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/master/,AVAILABLE}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/master} 2023-07-12 19:17:35,358 INFO [Listener at localhost.localdomain/37875] server.AbstractConnector(333): Started ServerConnector@4c68eabb{HTTP/1.1, (http/1.1)}{0.0.0.0:34451} 2023-07-12 19:17:35,358 INFO [Listener at localhost.localdomain/37875] server.Server(415): Started @37331ms 2023-07-12 19:17:35,359 INFO [Listener at localhost.localdomain/37875] master.HMaster(444): hbase.rootdir=hdfs://localhost.localdomain:38007/user/jenkins/test-data/171d5f8e-5f45-9a5a-0a3b-0ee09af95a3e, hbase.cluster.distributed=false 2023-07-12 19:17:35,375 INFO [Listener at localhost.localdomain/37875] client.ConnectionUtils(127): regionserver/jenkins-hbase20:0 server-side Connection retries=45 2023-07-12 19:17:35,375 INFO [Listener at localhost.localdomain/37875] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with 
queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-12 19:17:35,375 INFO [Listener at localhost.localdomain/37875] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-12 19:17:35,375 INFO [Listener at localhost.localdomain/37875] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-12 19:17:35,375 INFO [Listener at localhost.localdomain/37875] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-12 19:17:35,375 INFO [Listener at localhost.localdomain/37875] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-12 19:17:35,376 INFO [Listener at localhost.localdomain/37875] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-12 19:17:35,378 INFO [Listener at localhost.localdomain/37875] ipc.NettyRpcServer(120): Bind to /148.251.75.209:36109 2023-07-12 19:17:35,379 INFO [Listener at localhost.localdomain/37875] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-12 19:17:35,383 DEBUG [Listener at localhost.localdomain/37875] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-12 19:17:35,384 INFO [Listener at localhost.localdomain/37875] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-12 19:17:35,386 INFO [Listener at localhost.localdomain/37875] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-12 19:17:35,388 INFO [Listener at localhost.localdomain/37875] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:36109 connecting to ZooKeeper ensemble=127.0.0.1:51847 2023-07-12 19:17:35,397 DEBUG [Listener at localhost.localdomain/37875-EventThread] zookeeper.ZKWatcher(600): regionserver:361090x0, quorum=127.0.0.1:51847, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-12 19:17:35,398 DEBUG [Listener at localhost.localdomain/37875] zookeeper.ZKUtil(164): regionserver:361090x0, quorum=127.0.0.1:51847, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-12 19:17:35,399 DEBUG [Listener at localhost.localdomain/37875] zookeeper.ZKUtil(164): regionserver:361090x0, quorum=127.0.0.1:51847, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-12 19:17:35,401 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:36109-0x100829e11e40001 connected 2023-07-12 19:17:35,402 DEBUG [Listener at localhost.localdomain/37875] zookeeper.ZKUtil(164): regionserver:36109-0x100829e11e40001, quorum=127.0.0.1:51847, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-12 19:17:35,414 DEBUG [Listener at localhost.localdomain/37875] ipc.RpcExecutor(311): Started handlerCount=3 with 
threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=36109 2023-07-12 19:17:35,416 DEBUG [Listener at localhost.localdomain/37875] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=36109 2023-07-12 19:17:35,416 DEBUG [Listener at localhost.localdomain/37875] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=36109 2023-07-12 19:17:35,418 DEBUG [Listener at localhost.localdomain/37875] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=36109 2023-07-12 19:17:35,426 DEBUG [Listener at localhost.localdomain/37875] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=36109 2023-07-12 19:17:35,429 INFO [Listener at localhost.localdomain/37875] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-12 19:17:35,430 INFO [Listener at localhost.localdomain/37875] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-12 19:17:35,430 INFO [Listener at localhost.localdomain/37875] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-12 19:17:35,431 INFO [Listener at localhost.localdomain/37875] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-12 19:17:35,431 INFO [Listener at localhost.localdomain/37875] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-12 19:17:35,431 INFO [Listener at localhost.localdomain/37875] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-12 19:17:35,431 INFO [Listener at localhost.localdomain/37875] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
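The executor lines above (default.FPBQ.Fifo, priority.RWQ.Fifo, replication.FPBQ.Fifo, metaPriority.FPBQ.Fifo) are sized from configuration rather than hard-coded. A minimal sketch of the usual knobs, assuming the standard property names and illustrative values; the exact settings this test run injects are not visible in the log, so treat the numbers and the mapping of each key to a particular executor as approximate.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class RpcQueueTuning {
  public static Configuration rpcQueueConf() {
    Configuration conf = HBaseConfiguration.create();
    // Handlers backing the default executor (handlerCount=3 in the lines above).
    conf.setInt("hbase.regionserver.handler.count", 3);
    // Handlers for the priority (meta/system-table) executor.
    conf.setInt("hbase.regionserver.metahandler.count", 3);
    // Handlers for the replication executor.
    conf.setInt("hbase.regionserver.replication.handler.count", 3);
    // A ratio > 0 splits call queues into separate read/write queues,
    // which is the RWQ layout (writeQueues/readQueues) printed above.
    conf.setFloat("hbase.ipc.server.callqueue.read.ratio", 0.5f);
    // 0 keeps scans in the read queues (scanQueues=0 scanHandlers=0 above).
    conf.setFloat("hbase.ipc.server.callqueue.scan.ratio", 0f);
    return conf;
  }
}
```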
2023-07-12 19:17:35,433 INFO [Listener at localhost.localdomain/37875] http.HttpServer(1146): Jetty bound to port 41229 2023-07-12 19:17:35,433 INFO [Listener at localhost.localdomain/37875] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-12 19:17:35,451 INFO [Listener at localhost.localdomain/37875] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-12 19:17:35,451 INFO [Listener at localhost.localdomain/37875] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@69a66844{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d9099bf2-01da-4522-5f69-2ec5a570bb0d/hadoop.log.dir/,AVAILABLE} 2023-07-12 19:17:35,452 INFO [Listener at localhost.localdomain/37875] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-12 19:17:35,452 INFO [Listener at localhost.localdomain/37875] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@1ae4a868{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,AVAILABLE} 2023-07-12 19:17:35,460 INFO [Listener at localhost.localdomain/37875] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-12 19:17:35,461 INFO [Listener at localhost.localdomain/37875] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-12 19:17:35,461 INFO [Listener at localhost.localdomain/37875] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-12 19:17:35,462 INFO [Listener at localhost.localdomain/37875] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-12 19:17:35,463 INFO [Listener at localhost.localdomain/37875] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-12 19:17:35,464 INFO [Listener at localhost.localdomain/37875] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@5b2a6fdb{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver/,AVAILABLE}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver} 2023-07-12 19:17:35,465 INFO [Listener at localhost.localdomain/37875] server.AbstractConnector(333): Started ServerConnector@477f3f82{HTTP/1.1, (http/1.1)}{0.0.0.0:41229} 2023-07-12 19:17:35,465 INFO [Listener at localhost.localdomain/37875] server.Server(415): Started @37438ms 2023-07-12 19:17:35,482 INFO [Listener at localhost.localdomain/37875] client.ConnectionUtils(127): regionserver/jenkins-hbase20:0 server-side Connection retries=45 2023-07-12 19:17:35,482 INFO [Listener at localhost.localdomain/37875] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-12 19:17:35,482 INFO [Listener at localhost.localdomain/37875] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-12 19:17:35,482 INFO [Listener at localhost.localdomain/37875] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 
scanHandlers=0 2023-07-12 19:17:35,483 INFO [Listener at localhost.localdomain/37875] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-12 19:17:35,483 INFO [Listener at localhost.localdomain/37875] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-12 19:17:35,483 INFO [Listener at localhost.localdomain/37875] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-12 19:17:35,485 INFO [Listener at localhost.localdomain/37875] ipc.NettyRpcServer(120): Bind to /148.251.75.209:38905 2023-07-12 19:17:35,486 INFO [Listener at localhost.localdomain/37875] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-12 19:17:35,490 DEBUG [Listener at localhost.localdomain/37875] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-12 19:17:35,491 INFO [Listener at localhost.localdomain/37875] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-12 19:17:35,493 INFO [Listener at localhost.localdomain/37875] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-12 19:17:35,494 INFO [Listener at localhost.localdomain/37875] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:38905 connecting to ZooKeeper ensemble=127.0.0.1:51847 2023-07-12 19:17:35,498 DEBUG [Listener at localhost.localdomain/37875-EventThread] zookeeper.ZKWatcher(600): regionserver:389050x0, quorum=127.0.0.1:51847, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-12 19:17:35,500 DEBUG [Listener at localhost.localdomain/37875] zookeeper.ZKUtil(164): regionserver:389050x0, quorum=127.0.0.1:51847, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-12 19:17:35,501 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:38905-0x100829e11e40002 connected 2023-07-12 19:17:35,501 DEBUG [Listener at localhost.localdomain/37875] zookeeper.ZKUtil(164): regionserver:38905-0x100829e11e40002, quorum=127.0.0.1:51847, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-12 19:17:35,502 DEBUG [Listener at localhost.localdomain/37875] zookeeper.ZKUtil(164): regionserver:38905-0x100829e11e40002, quorum=127.0.0.1:51847, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-12 19:17:35,502 DEBUG [Listener at localhost.localdomain/37875] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=38905 2023-07-12 19:17:35,502 DEBUG [Listener at localhost.localdomain/37875] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=38905 2023-07-12 19:17:35,503 DEBUG [Listener at localhost.localdomain/37875] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=38905 2023-07-12 19:17:35,504 DEBUG [Listener at localhost.localdomain/37875] ipc.RpcExecutor(311): 
Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=38905 2023-07-12 19:17:35,504 DEBUG [Listener at localhost.localdomain/37875] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=38905 2023-07-12 19:17:35,506 INFO [Listener at localhost.localdomain/37875] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-12 19:17:35,506 INFO [Listener at localhost.localdomain/37875] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-12 19:17:35,507 INFO [Listener at localhost.localdomain/37875] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-12 19:17:35,507 INFO [Listener at localhost.localdomain/37875] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-12 19:17:35,507 INFO [Listener at localhost.localdomain/37875] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-12 19:17:35,508 INFO [Listener at localhost.localdomain/37875] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-12 19:17:35,508 INFO [Listener at localhost.localdomain/37875] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-12 19:17:35,508 INFO [Listener at localhost.localdomain/37875] http.HttpServer(1146): Jetty bound to port 36725 2023-07-12 19:17:35,509 INFO [Listener at localhost.localdomain/37875] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-12 19:17:35,511 INFO [Listener at localhost.localdomain/37875] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-12 19:17:35,511 INFO [Listener at localhost.localdomain/37875] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@5b2db004{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d9099bf2-01da-4522-5f69-2ec5a570bb0d/hadoop.log.dir/,AVAILABLE} 2023-07-12 19:17:35,511 INFO [Listener at localhost.localdomain/37875] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-12 19:17:35,512 INFO [Listener at localhost.localdomain/37875] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@54fa8dac{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,AVAILABLE} 2023-07-12 19:17:35,519 INFO [Listener at localhost.localdomain/37875] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-12 19:17:35,520 INFO [Listener at localhost.localdomain/37875] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-12 19:17:35,520 INFO [Listener at localhost.localdomain/37875] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-12 19:17:35,520 INFO [Listener at 
localhost.localdomain/37875] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-12 19:17:35,522 INFO [Listener at localhost.localdomain/37875] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-12 19:17:35,522 INFO [Listener at localhost.localdomain/37875] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@58cf4c89{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver/,AVAILABLE}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver} 2023-07-12 19:17:35,524 INFO [Listener at localhost.localdomain/37875] server.AbstractConnector(333): Started ServerConnector@52ffdf29{HTTP/1.1, (http/1.1)}{0.0.0.0:36725} 2023-07-12 19:17:35,524 INFO [Listener at localhost.localdomain/37875] server.Server(415): Started @37496ms 2023-07-12 19:17:35,534 INFO [Listener at localhost.localdomain/37875] client.ConnectionUtils(127): regionserver/jenkins-hbase20:0 server-side Connection retries=45 2023-07-12 19:17:35,535 INFO [Listener at localhost.localdomain/37875] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-12 19:17:35,535 INFO [Listener at localhost.localdomain/37875] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-12 19:17:35,535 INFO [Listener at localhost.localdomain/37875] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-12 19:17:35,536 INFO [Listener at localhost.localdomain/37875] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-12 19:17:35,536 INFO [Listener at localhost.localdomain/37875] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-12 19:17:35,536 INFO [Listener at localhost.localdomain/37875] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-12 19:17:35,538 INFO [Listener at localhost.localdomain/37875] ipc.NettyRpcServer(120): Bind to /148.251.75.209:42773 2023-07-12 19:17:35,538 INFO [Listener at localhost.localdomain/37875] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-12 19:17:35,539 DEBUG [Listener at localhost.localdomain/37875] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-12 19:17:35,540 INFO [Listener at localhost.localdomain/37875] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-12 19:17:35,541 INFO [Listener at localhost.localdomain/37875] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-12 19:17:35,542 INFO [Listener at localhost.localdomain/37875] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:42773 
connecting to ZooKeeper ensemble=127.0.0.1:51847 2023-07-12 19:17:35,546 DEBUG [Listener at localhost.localdomain/37875-EventThread] zookeeper.ZKWatcher(600): regionserver:427730x0, quorum=127.0.0.1:51847, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-12 19:17:35,548 DEBUG [Listener at localhost.localdomain/37875] zookeeper.ZKUtil(164): regionserver:427730x0, quorum=127.0.0.1:51847, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-12 19:17:35,548 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:42773-0x100829e11e40003 connected 2023-07-12 19:17:35,549 DEBUG [Listener at localhost.localdomain/37875] zookeeper.ZKUtil(164): regionserver:42773-0x100829e11e40003, quorum=127.0.0.1:51847, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-12 19:17:35,549 DEBUG [Listener at localhost.localdomain/37875] zookeeper.ZKUtil(164): regionserver:42773-0x100829e11e40003, quorum=127.0.0.1:51847, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-12 19:17:35,554 DEBUG [Listener at localhost.localdomain/37875] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=42773 2023-07-12 19:17:35,554 DEBUG [Listener at localhost.localdomain/37875] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=42773 2023-07-12 19:17:35,556 DEBUG [Listener at localhost.localdomain/37875] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=42773 2023-07-12 19:17:35,562 DEBUG [Listener at localhost.localdomain/37875] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=42773 2023-07-12 19:17:35,565 DEBUG [Listener at localhost.localdomain/37875] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=42773 2023-07-12 19:17:35,568 INFO [Listener at localhost.localdomain/37875] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-12 19:17:35,568 INFO [Listener at localhost.localdomain/37875] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-12 19:17:35,568 INFO [Listener at localhost.localdomain/37875] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-12 19:17:35,569 INFO [Listener at localhost.localdomain/37875] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-12 19:17:35,569 INFO [Listener at localhost.localdomain/37875] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-12 19:17:35,569 INFO [Listener at localhost.localdomain/37875] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-12 19:17:35,570 INFO [Listener at localhost.localdomain/37875] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
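The master and all three regionservers above register against the same ZooKeeper ensemble (127.0.0.1:51847, baseZNode=/hbase), and a test client reaches the minicluster through that ensemble. A minimal sketch using the public ConnectionFactory API, with the quorum, client port, and znode parent copied from this run; those values change on every run, so they are placeholders here.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class MiniClusterClient {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    // Same ensemble the servers above connect to (127.0.0.1:51847 in this run).
    conf.set("hbase.zookeeper.quorum", "127.0.0.1");
    conf.setInt("hbase.zookeeper.property.clientPort", 51847);
    conf.set("zookeeper.znode.parent", "/hbase"); // the baseZNode in the watcher logs
    try (Connection connection = ConnectionFactory.createConnection(conf);
         Admin admin = connection.getAdmin()) {
      // Usable once the active master has registered under /hbase/master.
      System.out.println("cluster id: " + admin.getClusterMetrics().getClusterId());
    }
  }
}
```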
2023-07-12 19:17:35,570 INFO [Listener at localhost.localdomain/37875] http.HttpServer(1146): Jetty bound to port 42777 2023-07-12 19:17:35,571 INFO [Listener at localhost.localdomain/37875] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-12 19:17:35,576 INFO [Listener at localhost.localdomain/37875] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-12 19:17:35,577 INFO [Listener at localhost.localdomain/37875] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@2135cae7{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d9099bf2-01da-4522-5f69-2ec5a570bb0d/hadoop.log.dir/,AVAILABLE} 2023-07-12 19:17:35,577 INFO [Listener at localhost.localdomain/37875] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-12 19:17:35,578 INFO [Listener at localhost.localdomain/37875] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@2dd89fa0{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,AVAILABLE} 2023-07-12 19:17:35,585 INFO [Listener at localhost.localdomain/37875] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-12 19:17:35,586 INFO [Listener at localhost.localdomain/37875] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-12 19:17:35,586 INFO [Listener at localhost.localdomain/37875] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-12 19:17:35,587 INFO [Listener at localhost.localdomain/37875] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-12 19:17:35,591 INFO [Listener at localhost.localdomain/37875] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-12 19:17:35,592 INFO [Listener at localhost.localdomain/37875] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@2e153a1b{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver/,AVAILABLE}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver} 2023-07-12 19:17:35,594 INFO [Listener at localhost.localdomain/37875] server.AbstractConnector(333): Started ServerConnector@2fe84205{HTTP/1.1, (http/1.1)}{0.0.0.0:42777} 2023-07-12 19:17:35,595 INFO [Listener at localhost.localdomain/37875] server.Server(415): Started @37567ms 2023-07-12 19:17:35,600 INFO [master/jenkins-hbase20:0:becomeActiveMaster] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-12 19:17:35,610 INFO [master/jenkins-hbase20:0:becomeActiveMaster] server.AbstractConnector(333): Started ServerConnector@37e1d235{HTTP/1.1, (http/1.1)}{0.0.0.0:35865} 2023-07-12 19:17:35,611 INFO [master/jenkins-hbase20:0:becomeActiveMaster] server.Server(415): Started @37583ms 2023-07-12 19:17:35,611 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(2168): Adding backup master ZNode /hbase/backup-masters/jenkins-hbase20.apache.org,40539,1689189455229 2023-07-12 19:17:35,611 DEBUG [Listener at localhost.localdomain/37875-EventThread] zookeeper.ZKWatcher(600): 
master:40539-0x100829e11e40000, quorum=127.0.0.1:51847, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-07-12 19:17:35,612 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:40539-0x100829e11e40000, quorum=127.0.0.1:51847, baseZNode=/hbase Set watcher on existing znode=/hbase/backup-masters/jenkins-hbase20.apache.org,40539,1689189455229 2023-07-12 19:17:35,613 DEBUG [Listener at localhost.localdomain/37875-EventThread] zookeeper.ZKWatcher(600): regionserver:42773-0x100829e11e40003, quorum=127.0.0.1:51847, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-12 19:17:35,613 DEBUG [Listener at localhost.localdomain/37875-EventThread] zookeeper.ZKWatcher(600): regionserver:36109-0x100829e11e40001, quorum=127.0.0.1:51847, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-12 19:17:35,613 DEBUG [Listener at localhost.localdomain/37875-EventThread] zookeeper.ZKWatcher(600): regionserver:38905-0x100829e11e40002, quorum=127.0.0.1:51847, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-12 19:17:35,613 DEBUG [Listener at localhost.localdomain/37875-EventThread] zookeeper.ZKWatcher(600): master:40539-0x100829e11e40000, quorum=127.0.0.1:51847, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-12 19:17:35,614 DEBUG [Listener at localhost.localdomain/37875-EventThread] zookeeper.ZKWatcher(600): master:40539-0x100829e11e40000, quorum=127.0.0.1:51847, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-12 19:17:35,615 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:40539-0x100829e11e40000, quorum=127.0.0.1:51847, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-12 19:17:35,616 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.ActiveMasterManager(227): Deleting ZNode for /hbase/backup-masters/jenkins-hbase20.apache.org,40539,1689189455229 from backup master directory 2023-07-12 19:17:35,617 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): master:40539-0x100829e11e40000, quorum=127.0.0.1:51847, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-12 19:17:35,618 DEBUG [Listener at localhost.localdomain/37875-EventThread] zookeeper.ZKWatcher(600): master:40539-0x100829e11e40000, quorum=127.0.0.1:51847, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/backup-masters/jenkins-hbase20.apache.org,40539,1689189455229 2023-07-12 19:17:35,618 DEBUG [Listener at localhost.localdomain/37875-EventThread] zookeeper.ZKWatcher(600): master:40539-0x100829e11e40000, quorum=127.0.0.1:51847, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-07-12 19:17:35,618 WARN [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
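The ZKWatcher lines above are the active-master handshake: the master first publishes itself under /hbase/backup-masters, then claims /hbase/master, and every process sees the resulting NodeCreated/NodeChildrenChanged/NodeDeleted events. A sketch of the same one-shot watch pattern using the plain ZooKeeper client, assuming the ensemble address from this run; HBase itself goes through ZKWatcher/ZKUtil rather than raw ZooKeeper calls, so this only illustrates the event flow being logged.

```java
import java.util.concurrent.CountDownLatch;
import org.apache.zookeeper.WatchedEvent;
import org.apache.zookeeper.Watcher;
import org.apache.zookeeper.ZooKeeper;

public class MasterZNodeWatch {
  public static void main(String[] args) throws Exception {
    CountDownLatch connected = new CountDownLatch(1);
    Watcher watcher = (WatchedEvent event) -> {
      // Same event types the ZKWatcher lines report, e.g. NodeCreated on /hbase/master.
      System.out.println(event.getType() + " " + event.getPath());
      if (event.getState() == Watcher.Event.KeeperState.SyncConnected) {
        connected.countDown();
      }
    };
    ZooKeeper zk = new ZooKeeper("127.0.0.1:51847", 30000, watcher);
    connected.await();
    // exists() sets a one-shot watch whether or not the znode is there yet,
    // mirroring "Set watcher on znode that does not yet exist, /hbase/master".
    zk.exists("/hbase/master", true);
    Thread.sleep(10_000); // keep the session alive long enough to observe events
    zk.close();
  }
}
```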
2023-07-12 19:17:35,618 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.ActiveMasterManager(237): Registered as active master=jenkins-hbase20.apache.org,40539,1689189455229 2023-07-12 19:17:35,644 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] util.FSUtils(620): Created cluster ID file at hdfs://localhost.localdomain:38007/user/jenkins/test-data/171d5f8e-5f45-9a5a-0a3b-0ee09af95a3e/hbase.id with ID: 97a76881-5652-4d5f-bcf3-90e04058e284 2023-07-12 19:17:35,658 INFO [master/jenkins-hbase20:0:becomeActiveMaster] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-12 19:17:35,660 DEBUG [Listener at localhost.localdomain/37875-EventThread] zookeeper.ZKWatcher(600): master:40539-0x100829e11e40000, quorum=127.0.0.1:51847, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-12 19:17:35,678 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ReadOnlyZKClient(139): Connect 0x5a503f68 to 127.0.0.1:51847 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-12 19:17:35,686 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@33e7bd7f, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-12 19:17:35,686 INFO [master/jenkins-hbase20:0:becomeActiveMaster] region.MasterRegion(309): Create or load local region for table 'master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-12 19:17:35,687 INFO [master/jenkins-hbase20:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(132): Injected flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000 2023-07-12 19:17:35,687 INFO [master/jenkins-hbase20:0:becomeActiveMaster] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-12 19:17:35,689 INFO [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(7693): Creating {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, under table dir hdfs://localhost.localdomain:38007/user/jenkins/test-data/171d5f8e-5f45-9a5a-0a3b-0ee09af95a3e/MasterData/data/master/store-tmp 2023-07-12 19:17:35,701 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 19:17:35,702 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-07-12 19:17:35,702 INFO 
[master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-12 19:17:35,702 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-12 19:17:35,702 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-07-12 19:17:35,702 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-12 19:17:35,702 INFO [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-12 19:17:35,702 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-12 19:17:35,704 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] region.MasterRegion(191): WALDir=hdfs://localhost.localdomain:38007/user/jenkins/test-data/171d5f8e-5f45-9a5a-0a3b-0ee09af95a3e/MasterData/WALs/jenkins-hbase20.apache.org,40539,1689189455229 2023-07-12 19:17:35,708 INFO [master/jenkins-hbase20:0:becomeActiveMaster] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase20.apache.org%2C40539%2C1689189455229, suffix=, logDir=hdfs://localhost.localdomain:38007/user/jenkins/test-data/171d5f8e-5f45-9a5a-0a3b-0ee09af95a3e/MasterData/WALs/jenkins-hbase20.apache.org,40539,1689189455229, archiveDir=hdfs://localhost.localdomain:38007/user/jenkins/test-data/171d5f8e-5f45-9a5a-0a3b-0ee09af95a3e/MasterData/oldWALs, maxLogs=10 2023-07-12 19:17:35,725 DEBUG [RS-EventLoopGroup-11-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:39117,DS-46458d06-2591-4caf-9337-88a74171264e,DISK] 2023-07-12 19:17:35,725 DEBUG [RS-EventLoopGroup-11-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:34477,DS-fca02bc4-2d41-4686-aa59-e82f4591fab6,DISK] 2023-07-12 19:17:35,727 DEBUG [RS-EventLoopGroup-11-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:39465,DS-99857824-e819-44ab-a75d-a9efdd44967c,DISK] 2023-07-12 19:17:35,733 INFO [master/jenkins-hbase20:0:becomeActiveMaster] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/171d5f8e-5f45-9a5a-0a3b-0ee09af95a3e/MasterData/WALs/jenkins-hbase20.apache.org,40539,1689189455229/jenkins-hbase20.apache.org%2C40539%2C1689189455229.1689189455708 2023-07-12 19:17:35,734 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:39117,DS-46458d06-2591-4caf-9337-88a74171264e,DISK], DatanodeInfoWithStorage[127.0.0.1:34477,DS-fca02bc4-2d41-4686-aa59-e82f4591fab6,DISK], DatanodeInfoWithStorage[127.0.0.1:39465,DS-99857824-e819-44ab-a75d-a9efdd44967c,DISK]] 2023-07-12 19:17:35,734 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(7854): Opening region: {ENCODED => 
1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''} 2023-07-12 19:17:35,735 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 19:17:35,735 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(7894): checking encryption for 1595e783b53d99cd5eef43b6debb2682 2023-07-12 19:17:35,735 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(7897): checking classloading for 1595e783b53d99cd5eef43b6debb2682 2023-07-12 19:17:35,738 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family proc of region 1595e783b53d99cd5eef43b6debb2682 2023-07-12 19:17:35,744 DEBUG [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:38007/user/jenkins/test-data/171d5f8e-5f45-9a5a-0a3b-0ee09af95a3e/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc 2023-07-12 19:17:35,745 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1595e783b53d99cd5eef43b6debb2682 columnFamilyName proc 2023-07-12 19:17:35,745 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(310): Store=1595e783b53d99cd5eef43b6debb2682/proc, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 19:17:35,746 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:38007/user/jenkins/test-data/171d5f8e-5f45-9a5a-0a3b-0ee09af95a3e/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-07-12 19:17:35,747 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:38007/user/jenkins/test-data/171d5f8e-5f45-9a5a-0a3b-0ee09af95a3e/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-07-12 19:17:35,749 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1055): writing seq id for 1595e783b53d99cd5eef43b6debb2682 2023-07-12 19:17:35,752 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:38007/user/jenkins/test-data/171d5f8e-5f45-9a5a-0a3b-0ee09af95a3e/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.edits/1.seqid, newMaxSeqId=1, 
maxSeqId=-1 2023-07-12 19:17:35,753 INFO [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1072): Opened 1595e783b53d99cd5eef43b6debb2682; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11614976000, jitterRate=0.08172893524169922}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-12 19:17:35,753 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(965): Region open journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-12 19:17:35,753 INFO [master/jenkins-hbase20:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(122): Constructor flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000, compactMin=4 2023-07-12 19:17:35,755 INFO [master/jenkins-hbase20:0:becomeActiveMaster] region.RegionProcedureStore(104): Starting the Region Procedure Store, number threads=5 2023-07-12 19:17:35,755 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(562): Starting 5 core workers (bigger of cpus/4 or 16) with max (burst) worker count=50 2023-07-12 19:17:35,755 INFO [master/jenkins-hbase20:0:becomeActiveMaster] region.RegionProcedureStore(255): Starting Region Procedure Store lease recovery... 2023-07-12 19:17:35,756 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(582): Recovered RegionProcedureStore lease in 0 msec 2023-07-12 19:17:35,756 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(596): Loaded RegionProcedureStore in 0 msec 2023-07-12 19:17:35,756 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.RemoteProcedureDispatcher(96): Instantiated, coreThreads=3 (allowCoreThreadTimeOut=true), queueMaxSize=32, operationDelay=150 2023-07-12 19:17:35,757 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] assignment.AssignmentManager(253): hbase:meta replica znodes: [] 2023-07-12 19:17:35,758 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.RegionServerTracker(124): Starting RegionServerTracker; 0 have existing ServerCrashProcedures, 0 possibly 'live' servers, and 0 'splitting'. 
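The 'proc' family attributes printed for master:store above correspond one-to-one to ordinary column-family settings. A sketch of the same attributes expressed with the public ColumnFamilyDescriptorBuilder API; the master:store region is created internally by MasterRegion, so this is only an illustration of what the printed descriptor means, not code the master actually runs.

```java
import org.apache.hadoop.hbase.HConstants;
import org.apache.hadoop.hbase.KeepDeletedCells;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptor;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.io.compress.Compression;
import org.apache.hadoop.hbase.io.encoding.DataBlockEncoding;
import org.apache.hadoop.hbase.regionserver.BloomType;
import org.apache.hadoop.hbase.util.Bytes;

public class ProcFamilyDescriptor {
  public static ColumnFamilyDescriptor procFamily() {
    // Mirrors the attributes logged for the 'proc' family of master:store.
    return ColumnFamilyDescriptorBuilder.newBuilder(Bytes.toBytes("proc"))
        .setBloomFilterType(BloomType.ROW)           // BLOOMFILTER => 'ROW'
        .setMaxVersions(1)                           // VERSIONS => '1'
        .setKeepDeletedCells(KeepDeletedCells.FALSE) // KEEP_DELETED_CELLS => 'FALSE'
        .setDataBlockEncoding(DataBlockEncoding.NONE)
        .setCompressionType(Compression.Algorithm.NONE)
        .setTimeToLive(HConstants.FOREVER)           // TTL => 'FOREVER'
        .setMinVersions(0)
        .setInMemory(false)
        .setBlockCacheEnabled(true)                  // BLOCKCACHE => 'true'
        .setBlocksize(64 * 1024)                     // BLOCKSIZE => '65536'
        .setScope(0)                                 // REPLICATION_SCOPE => '0'
        .build();
  }
}
```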
2023-07-12 19:17:35,759 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:40539-0x100829e11e40000, quorum=127.0.0.1:51847, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/balancer 2023-07-12 19:17:35,759 INFO [master/jenkins-hbase20:0:becomeActiveMaster] normalizer.RegionNormalizerWorker(118): Normalizer rate limit set to unlimited 2023-07-12 19:17:35,759 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:40539-0x100829e11e40000, quorum=127.0.0.1:51847, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/normalizer 2023-07-12 19:17:35,764 DEBUG [Listener at localhost.localdomain/37875-EventThread] zookeeper.ZKWatcher(600): master:40539-0x100829e11e40000, quorum=127.0.0.1:51847, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-12 19:17:35,765 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:40539-0x100829e11e40000, quorum=127.0.0.1:51847, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/split 2023-07-12 19:17:35,765 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:40539-0x100829e11e40000, quorum=127.0.0.1:51847, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/merge 2023-07-12 19:17:35,765 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:40539-0x100829e11e40000, quorum=127.0.0.1:51847, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/snapshot-cleanup 2023-07-12 19:17:35,766 DEBUG [Listener at localhost.localdomain/37875-EventThread] zookeeper.ZKWatcher(600): master:40539-0x100829e11e40000, quorum=127.0.0.1:51847, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-12 19:17:35,766 DEBUG [Listener at localhost.localdomain/37875-EventThread] zookeeper.ZKWatcher(600): regionserver:38905-0x100829e11e40002, quorum=127.0.0.1:51847, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-12 19:17:35,766 DEBUG [Listener at localhost.localdomain/37875-EventThread] zookeeper.ZKWatcher(600): regionserver:36109-0x100829e11e40001, quorum=127.0.0.1:51847, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-12 19:17:35,766 DEBUG [Listener at localhost.localdomain/37875-EventThread] zookeeper.ZKWatcher(600): master:40539-0x100829e11e40000, quorum=127.0.0.1:51847, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-12 19:17:35,766 DEBUG [Listener at localhost.localdomain/37875-EventThread] zookeeper.ZKWatcher(600): regionserver:42773-0x100829e11e40003, quorum=127.0.0.1:51847, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-12 19:17:35,768 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(744): Active/primary master=jenkins-hbase20.apache.org,40539,1689189455229, sessionid=0x100829e11e40000, setting cluster-up flag (Was=false) 2023-07-12 19:17:35,771 DEBUG [Listener at localhost.localdomain/37875-EventThread] zookeeper.ZKWatcher(600): master:40539-0x100829e11e40000, quorum=127.0.0.1:51847, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-12 19:17:35,773 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] 
procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/flush-table-proc/acquired, /hbase/flush-table-proc/reached, /hbase/flush-table-proc/abort 2023-07-12 19:17:35,774 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase20.apache.org,40539,1689189455229 2023-07-12 19:17:35,776 DEBUG [Listener at localhost.localdomain/37875-EventThread] zookeeper.ZKWatcher(600): master:40539-0x100829e11e40000, quorum=127.0.0.1:51847, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-12 19:17:35,778 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/online-snapshot/acquired, /hbase/online-snapshot/reached, /hbase/online-snapshot/abort 2023-07-12 19:17:35,779 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase20.apache.org,40539,1689189455229 2023-07-12 19:17:35,780 WARN [master/jenkins-hbase20:0:becomeActiveMaster] snapshot.SnapshotManager(302): Couldn't delete working snapshot directory: hdfs://localhost.localdomain:38007/user/jenkins/test-data/171d5f8e-5f45-9a5a-0a3b-0ee09af95a3e/.hbase-snapshot/.tmp 2023-07-12 19:17:35,784 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(2930): Registered master coprocessor service: service=RSGroupAdminService 2023-07-12 19:17:35,784 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] rsgroup.RSGroupInfoManagerImpl(537): Refreshing in Offline mode. 2023-07-12 19:17:35,785 INFO [master/jenkins-hbase20:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint loaded, priority=536870911. 2023-07-12 19:17:35,785 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase20.apache.org,40539,1689189455229] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-12 19:17:35,786 INFO [master/jenkins-hbase20:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver loaded, priority=536870912. 2023-07-12 19:17:35,786 INFO [master/jenkins-hbase20:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.quotas.MasterQuotasObserver loaded, priority=536870913. 
2023-07-12 19:17:35,787 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT; InitMetaProcedure table=hbase:meta 2023-07-12 19:17:35,799 INFO [RS:0;jenkins-hbase20:36109] regionserver.HRegionServer(951): ClusterId : 97a76881-5652-4d5f-bcf3-90e04058e284 2023-07-12 19:17:35,799 INFO [RS:1;jenkins-hbase20:38905] regionserver.HRegionServer(951): ClusterId : 97a76881-5652-4d5f-bcf3-90e04058e284 2023-07-12 19:17:35,801 INFO [master/jenkins-hbase20:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-12 19:17:35,801 INFO [master/jenkins-hbase20:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 2023-07-12 19:17:35,802 INFO [master/jenkins-hbase20:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-12 19:17:35,802 INFO [master/jenkins-hbase20:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 
2023-07-12 19:17:35,802 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_OPEN_REGION-master/jenkins-hbase20:0, corePoolSize=5, maxPoolSize=5 2023-07-12 19:17:35,802 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_CLOSE_REGION-master/jenkins-hbase20:0, corePoolSize=5, maxPoolSize=5 2023-07-12 19:17:35,802 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SERVER_OPERATIONS-master/jenkins-hbase20:0, corePoolSize=5, maxPoolSize=5 2023-07-12 19:17:35,802 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_META_SERVER_OPERATIONS-master/jenkins-hbase20:0, corePoolSize=5, maxPoolSize=5 2023-07-12 19:17:35,802 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=M_LOG_REPLAY_OPS-master/jenkins-hbase20:0, corePoolSize=10, maxPoolSize=10 2023-07-12 19:17:35,802 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-12 19:17:35,802 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_MERGE_OPERATIONS-master/jenkins-hbase20:0, corePoolSize=2, maxPoolSize=2 2023-07-12 19:17:35,802 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_TABLE_OPERATIONS-master/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-12 19:17:35,805 DEBUG [RS:0;jenkins-hbase20:36109] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-12 19:17:35,806 INFO [RS:2;jenkins-hbase20:42773] regionserver.HRegionServer(951): ClusterId : 97a76881-5652-4d5f-bcf3-90e04058e284 2023-07-12 19:17:35,807 DEBUG [RS:0;jenkins-hbase20:36109] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-12 19:17:35,807 DEBUG [RS:0;jenkins-hbase20:36109] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-12 19:17:35,809 DEBUG [RS:0;jenkins-hbase20:36109] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-12 19:17:35,813 DEBUG [RS:1;jenkins-hbase20:38905] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-12 19:17:35,813 DEBUG [RS:2;jenkins-hbase20:42773] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-12 19:17:35,820 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner; timeout=30000, timestamp=1689189485820 2023-07-12 19:17:35,820 DEBUG [RS:0;jenkins-hbase20:36109] zookeeper.ReadOnlyZKClient(139): Connect 0x57ad86ce to 127.0.0.1:51847 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-12 19:17:35,821 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.DirScanPool(70): log_cleaner Cleaner pool size is 1 2023-07-12 19:17:35,821 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner 2023-07-12 19:17:35,821 
DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT, locked=true; InitMetaProcedure table=hbase:meta 2023-07-12 19:17:35,821 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner 2023-07-12 19:17:35,821 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreWALCleaner 2023-07-12 19:17:35,821 INFO [PEWorker-1] procedure.InitMetaProcedure(71): BOOTSTRAP: creating hbase:meta region 2023-07-12 19:17:35,821 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveProcedureWALCleaner 2023-07-12 19:17:35,823 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.LogCleaner(148): Creating 1 old WALs cleaner threads 2023-07-12 19:17:35,826 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=LogsCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-07-12 19:17:35,828 DEBUG [RS:1;jenkins-hbase20:38905] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-12 19:17:35,828 INFO [PEWorker-1] util.FSTableDescriptors(128): Creating new hbase:meta table descriptor 'hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-07-12 19:17:35,828 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.DirScanPool(70): hfile_cleaner Cleaner pool size is 2 2023-07-12 19:17:35,828 DEBUG [RS:1;jenkins-hbase20:38905] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-12 19:17:35,828 DEBUG [RS:2;jenkins-hbase20:42773] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-12 19:17:35,829 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreHFileCleaner 2023-07-12 19:17:35,829 DEBUG [RS:2;jenkins-hbase20:42773] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-12 19:17:35,830 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.HFileLinkCleaner 2023-07-12 19:17:35,834 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.snapshot.SnapshotHFileCleaner 2023-07-12 
19:17:35,835 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner 2023-07-12 19:17:35,836 DEBUG [RS:2;jenkins-hbase20:42773] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-12 19:17:35,836 DEBUG [RS:1;jenkins-hbase20:38905] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-12 19:17:35,838 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.HFileCleaner(242): Starting for large file=Thread[master/jenkins-hbase20:0:becomeActiveMaster-HFileCleaner.large.0-1689189455835,5,FailOnTimeoutGroup] 2023-07-12 19:17:35,840 DEBUG [RS:0;jenkins-hbase20:36109] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@596a724b, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-12 19:17:35,840 DEBUG [RS:1;jenkins-hbase20:38905] zookeeper.ReadOnlyZKClient(139): Connect 0x583cb745 to 127.0.0.1:51847 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-12 19:17:35,840 DEBUG [RS:0;jenkins-hbase20:36109] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@5d55edbe, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase20.apache.org/148.251.75.209:0 2023-07-12 19:17:35,840 DEBUG [RS:2;jenkins-hbase20:42773] zookeeper.ReadOnlyZKClient(139): Connect 0x33fa7b88 to 127.0.0.1:51847 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-12 19:17:35,842 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.HFileCleaner(257): Starting for small files=Thread[master/jenkins-hbase20:0:becomeActiveMaster-HFileCleaner.small.0-1689189455839,5,FailOnTimeoutGroup] 2023-07-12 19:17:35,842 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HFileCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-07-12 19:17:35,846 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(1461): Reopening regions with very high storeFileRefCount is disabled. Provide threshold value > 0 for hbase.regions.recovery.store.file.ref.count to enable it. 2023-07-12 19:17:35,846 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=ReplicationBarrierCleaner, period=43200000, unit=MILLISECONDS is enabled. 2023-07-12 19:17:35,846 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=SnapshotCleaner, period=1800000, unit=MILLISECONDS is enabled. 2023-07-12 19:17:35,855 DEBUG [RS:0;jenkins-hbase20:36109] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:0;jenkins-hbase20:36109 2023-07-12 19:17:35,855 INFO [RS:0;jenkins-hbase20:36109] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-12 19:17:35,855 INFO [RS:0;jenkins-hbase20:36109] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-12 19:17:35,855 DEBUG [RS:0;jenkins-hbase20:36109] regionserver.HRegionServer(1022): About to register with Master. 
2023-07-12 19:17:35,856 INFO [RS:0;jenkins-hbase20:36109] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase20.apache.org,40539,1689189455229 with isa=jenkins-hbase20.apache.org/148.251.75.209:36109, startcode=1689189455374 2023-07-12 19:17:35,856 DEBUG [RS:0;jenkins-hbase20:36109] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-12 19:17:35,867 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost.localdomain:38007/user/jenkins/test-data/171d5f8e-5f45-9a5a-0a3b-0ee09af95a3e/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-07-12 19:17:35,868 INFO [PEWorker-1] util.FSTableDescriptors(135): Updated hbase:meta table descriptor to hdfs://localhost.localdomain:38007/user/jenkins/test-data/171d5f8e-5f45-9a5a-0a3b-0ee09af95a3e/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-07-12 19:17:35,868 INFO [PEWorker-1] regionserver.HRegion(7675): creating {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:38007/user/jenkins/test-data/171d5f8e-5f45-9a5a-0a3b-0ee09af95a3e 2023-07-12 19:17:35,873 INFO [RS-EventLoopGroup-8-2] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:39011, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.4 (auth:SIMPLE), service=RegionServerStatusService 2023-07-12 19:17:35,885 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=40539] master.ServerManager(394): Registering regionserver=jenkins-hbase20.apache.org,36109,1689189455374 2023-07-12 19:17:35,885 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase20.apache.org,40539,1689189455229] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
2023-07-12 19:17:35,886 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase20.apache.org,40539,1689189455229] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 1 2023-07-12 19:17:35,891 DEBUG [RS:1;jenkins-hbase20:38905] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@52c4dc35, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-12 19:17:35,891 DEBUG [RS:1;jenkins-hbase20:38905] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@17c64809, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase20.apache.org/148.251.75.209:0 2023-07-12 19:17:35,891 DEBUG [RS:2;jenkins-hbase20:42773] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@390210f8, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-12 19:17:35,891 DEBUG [RS:0;jenkins-hbase20:36109] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost.localdomain:38007/user/jenkins/test-data/171d5f8e-5f45-9a5a-0a3b-0ee09af95a3e 2023-07-12 19:17:35,891 DEBUG [RS:0;jenkins-hbase20:36109] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost.localdomain:38007 2023-07-12 19:17:35,891 DEBUG [RS:2;jenkins-hbase20:42773] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@7c0089bc, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase20.apache.org/148.251.75.209:0 2023-07-12 19:17:35,891 DEBUG [RS:0;jenkins-hbase20:36109] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=34451 2023-07-12 19:17:35,892 DEBUG [Listener at localhost.localdomain/37875-EventThread] zookeeper.ZKWatcher(600): master:40539-0x100829e11e40000, quorum=127.0.0.1:51847, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-12 19:17:35,893 DEBUG [RS:0;jenkins-hbase20:36109] zookeeper.ZKUtil(162): regionserver:36109-0x100829e11e40001, quorum=127.0.0.1:51847, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,36109,1689189455374 2023-07-12 19:17:35,893 WARN [RS:0;jenkins-hbase20:36109] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
2023-07-12 19:17:35,893 INFO [RS:0;jenkins-hbase20:36109] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-12 19:17:35,893 DEBUG [RS:0;jenkins-hbase20:36109] regionserver.HRegionServer(1948): logDir=hdfs://localhost.localdomain:38007/user/jenkins/test-data/171d5f8e-5f45-9a5a-0a3b-0ee09af95a3e/WALs/jenkins-hbase20.apache.org,36109,1689189455374 2023-07-12 19:17:35,897 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase20.apache.org,36109,1689189455374] 2023-07-12 19:17:35,905 DEBUG [RS:0;jenkins-hbase20:36109] zookeeper.ZKUtil(162): regionserver:36109-0x100829e11e40001, quorum=127.0.0.1:51847, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,36109,1689189455374 2023-07-12 19:17:35,906 DEBUG [RS:0;jenkins-hbase20:36109] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-12 19:17:35,906 INFO [RS:0;jenkins-hbase20:36109] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-12 19:17:35,906 DEBUG [PEWorker-1] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 19:17:35,908 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-12 19:17:35,908 INFO [RS:0;jenkins-hbase20:36109] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-12 19:17:35,909 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:38007/user/jenkins/test-data/171d5f8e-5f45-9a5a-0a3b-0ee09af95a3e/data/hbase/meta/1588230740/info 2023-07-12 19:17:35,910 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-12 19:17:35,910 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 19:17:35,910 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-12 19:17:35,910 INFO [RS:0;jenkins-hbase20:36109] throttle.PressureAwareCompactionThroughputController(131): Compaction 
throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-12 19:17:35,911 INFO [RS:0;jenkins-hbase20:36109] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-12 19:17:35,911 INFO [RS:0;jenkins-hbase20:36109] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-12 19:17:35,911 DEBUG [RS:2;jenkins-hbase20:42773] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:2;jenkins-hbase20:42773 2023-07-12 19:17:35,912 INFO [RS:2;jenkins-hbase20:42773] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-12 19:17:35,912 INFO [RS:2;jenkins-hbase20:42773] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-12 19:17:35,912 DEBUG [RS:2;jenkins-hbase20:42773] regionserver.HRegionServer(1022): About to register with Master. 2023-07-12 19:17:35,912 INFO [RS:2;jenkins-hbase20:42773] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase20.apache.org,40539,1689189455229 with isa=jenkins-hbase20.apache.org/148.251.75.209:42773, startcode=1689189455534 2023-07-12 19:17:35,913 DEBUG [RS:2;jenkins-hbase20:42773] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-12 19:17:35,913 INFO [RS:0;jenkins-hbase20:36109] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-12 19:17:35,914 DEBUG [RS:0;jenkins-hbase20:36109] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-12 19:17:35,914 DEBUG [RS:0;jenkins-hbase20:36109] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-12 19:17:35,914 DEBUG [RS:0;jenkins-hbase20:36109] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-12 19:17:35,914 DEBUG [RS:0;jenkins-hbase20:36109] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-12 19:17:35,914 DEBUG [RS:0;jenkins-hbase20:36109] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-12 19:17:35,914 DEBUG [RS:0;jenkins-hbase20:36109] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase20:0, corePoolSize=2, maxPoolSize=2 2023-07-12 19:17:35,914 DEBUG [RS:0;jenkins-hbase20:36109] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-12 19:17:35,914 DEBUG [RS:0;jenkins-hbase20:36109] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-12 19:17:35,914 DEBUG [RS:0;jenkins-hbase20:36109] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-12 19:17:35,915 DEBUG [RS:0;jenkins-hbase20:36109] executor.ExecutorService(93): Starting executor service 
name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-12 19:17:35,915 DEBUG [RS:1;jenkins-hbase20:38905] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:1;jenkins-hbase20:38905 2023-07-12 19:17:35,915 INFO [RS:1;jenkins-hbase20:38905] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-12 19:17:35,915 INFO [RS:1;jenkins-hbase20:38905] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-12 19:17:35,915 DEBUG [RS:1;jenkins-hbase20:38905] regionserver.HRegionServer(1022): About to register with Master. 2023-07-12 19:17:35,915 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:38007/user/jenkins/test-data/171d5f8e-5f45-9a5a-0a3b-0ee09af95a3e/data/hbase/meta/1588230740/rep_barrier 2023-07-12 19:17:35,915 INFO [RS:1;jenkins-hbase20:38905] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase20.apache.org,40539,1689189455229 with isa=jenkins-hbase20.apache.org/148.251.75.209:38905, startcode=1689189455481 2023-07-12 19:17:35,915 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-12 19:17:35,915 DEBUG [RS:1;jenkins-hbase20:38905] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-12 19:17:35,916 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 19:17:35,916 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-12 19:17:35,919 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:38007/user/jenkins/test-data/171d5f8e-5f45-9a5a-0a3b-0ee09af95a3e/data/hbase/meta/1588230740/table 2023-07-12 19:17:35,920 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory 
org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-12 19:17:35,920 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 19:17:35,924 INFO [RS-EventLoopGroup-8-1] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:57467, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.5 (auth:SIMPLE), service=RegionServerStatusService 2023-07-12 19:17:35,924 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:38007/user/jenkins/test-data/171d5f8e-5f45-9a5a-0a3b-0ee09af95a3e/data/hbase/meta/1588230740 2023-07-12 19:17:35,924 INFO [RS:0;jenkins-hbase20:36109] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-12 19:17:35,925 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=40539] master.ServerManager(394): Registering regionserver=jenkins-hbase20.apache.org,38905,1689189455481 2023-07-12 19:17:35,925 INFO [RS:0;jenkins-hbase20:36109] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-12 19:17:35,926 INFO [RS:0;jenkins-hbase20:36109] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-12 19:17:35,926 INFO [RS:0;jenkins-hbase20:36109] hbase.ChoreService(166): Chore ScheduledChore name=FileSystemUtilizationChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-12 19:17:35,926 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase20.apache.org,40539,1689189455229] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-12 19:17:35,926 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:38007/user/jenkins/test-data/171d5f8e-5f45-9a5a-0a3b-0ee09af95a3e/data/hbase/meta/1588230740 2023-07-12 19:17:35,926 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase20.apache.org,40539,1689189455229] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 2 2023-07-12 19:17:35,926 DEBUG [RS:1;jenkins-hbase20:38905] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost.localdomain:38007/user/jenkins/test-data/171d5f8e-5f45-9a5a-0a3b-0ee09af95a3e 2023-07-12 19:17:35,926 DEBUG [RS:1;jenkins-hbase20:38905] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost.localdomain:38007 2023-07-12 19:17:35,926 DEBUG [RS:1;jenkins-hbase20:38905] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=34451 2023-07-12 19:17:35,928 DEBUG [PEWorker-1] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 
2023-07-12 19:17:35,929 DEBUG [Listener at localhost.localdomain/37875-EventThread] zookeeper.ZKWatcher(600): regionserver:36109-0x100829e11e40001, quorum=127.0.0.1:51847, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-12 19:17:35,929 DEBUG [Listener at localhost.localdomain/37875-EventThread] zookeeper.ZKWatcher(600): master:40539-0x100829e11e40000, quorum=127.0.0.1:51847, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-12 19:17:35,929 DEBUG [RS:1;jenkins-hbase20:38905] zookeeper.ZKUtil(162): regionserver:38905-0x100829e11e40002, quorum=127.0.0.1:51847, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,38905,1689189455481 2023-07-12 19:17:35,929 WARN [RS:1;jenkins-hbase20:38905] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-12 19:17:35,929 INFO [RS:1;jenkins-hbase20:38905] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-12 19:17:35,929 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:36109-0x100829e11e40001, quorum=127.0.0.1:51847, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,38905,1689189455481 2023-07-12 19:17:35,929 DEBUG [RS:1;jenkins-hbase20:38905] regionserver.HRegionServer(1948): logDir=hdfs://localhost.localdomain:38007/user/jenkins/test-data/171d5f8e-5f45-9a5a-0a3b-0ee09af95a3e/WALs/jenkins-hbase20.apache.org,38905,1689189455481 2023-07-12 19:17:35,929 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:36109-0x100829e11e40001, quorum=127.0.0.1:51847, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,36109,1689189455374 2023-07-12 19:17:35,930 DEBUG [PEWorker-1] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-12 19:17:35,932 INFO [RS-EventLoopGroup-8-3] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:58443, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.6 (auth:SIMPLE), service=RegionServerStatusService 2023-07-12 19:17:35,932 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase20.apache.org,38905,1689189455481] 2023-07-12 19:17:35,932 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=40539] master.ServerManager(394): Registering regionserver=jenkins-hbase20.apache.org,42773,1689189455534 2023-07-12 19:17:35,932 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase20.apache.org,40539,1689189455229] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
2023-07-12 19:17:35,932 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase20.apache.org,40539,1689189455229] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 3 2023-07-12 19:17:35,932 DEBUG [RS:2;jenkins-hbase20:42773] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost.localdomain:38007/user/jenkins/test-data/171d5f8e-5f45-9a5a-0a3b-0ee09af95a3e 2023-07-12 19:17:35,933 DEBUG [RS:2;jenkins-hbase20:42773] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost.localdomain:38007 2023-07-12 19:17:35,933 DEBUG [RS:2;jenkins-hbase20:42773] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=34451 2023-07-12 19:17:35,943 DEBUG [Listener at localhost.localdomain/37875-EventThread] zookeeper.ZKWatcher(600): regionserver:36109-0x100829e11e40001, quorum=127.0.0.1:51847, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-12 19:17:35,943 DEBUG [Listener at localhost.localdomain/37875-EventThread] zookeeper.ZKWatcher(600): master:40539-0x100829e11e40000, quorum=127.0.0.1:51847, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-12 19:17:35,944 DEBUG [RS:2;jenkins-hbase20:42773] zookeeper.ZKUtil(162): regionserver:42773-0x100829e11e40003, quorum=127.0.0.1:51847, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,42773,1689189455534 2023-07-12 19:17:35,944 WARN [RS:2;jenkins-hbase20:42773] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-12 19:17:35,944 INFO [RS:2;jenkins-hbase20:42773] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-12 19:17:35,944 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:36109-0x100829e11e40001, quorum=127.0.0.1:51847, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,42773,1689189455534 2023-07-12 19:17:35,944 DEBUG [RS:2;jenkins-hbase20:42773] regionserver.HRegionServer(1948): logDir=hdfs://localhost.localdomain:38007/user/jenkins/test-data/171d5f8e-5f45-9a5a-0a3b-0ee09af95a3e/WALs/jenkins-hbase20.apache.org,42773,1689189455534 2023-07-12 19:17:35,944 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:36109-0x100829e11e40001, quorum=127.0.0.1:51847, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,38905,1689189455481 2023-07-12 19:17:35,944 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase20.apache.org,42773,1689189455534] 2023-07-12 19:17:35,945 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:36109-0x100829e11e40001, quorum=127.0.0.1:51847, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,36109,1689189455374 2023-07-12 19:17:35,953 DEBUG [PEWorker-1] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:38007/user/jenkins/test-data/171d5f8e-5f45-9a5a-0a3b-0ee09af95a3e/data/hbase/meta/1588230740/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-12 19:17:35,956 INFO [PEWorker-1] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, 
ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11180921440, jitterRate=0.04130445420742035}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-12 19:17:35,956 DEBUG [PEWorker-1] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-12 19:17:35,956 DEBUG [PEWorker-1] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-12 19:17:35,956 INFO [PEWorker-1] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-12 19:17:35,956 DEBUG [PEWorker-1] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-12 19:17:35,956 DEBUG [PEWorker-1] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-12 19:17:35,956 DEBUG [PEWorker-1] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-12 19:17:35,957 DEBUG [RS:1;jenkins-hbase20:38905] zookeeper.ZKUtil(162): regionserver:38905-0x100829e11e40002, quorum=127.0.0.1:51847, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,42773,1689189455534 2023-07-12 19:17:35,957 INFO [RS:0;jenkins-hbase20:36109] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-12 19:17:35,957 INFO [RS:0;jenkins-hbase20:36109] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,36109,1689189455374-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-12 19:17:35,957 DEBUG [RS:1;jenkins-hbase20:38905] zookeeper.ZKUtil(162): regionserver:38905-0x100829e11e40002, quorum=127.0.0.1:51847, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,38905,1689189455481 2023-07-12 19:17:35,958 DEBUG [RS:1;jenkins-hbase20:38905] zookeeper.ZKUtil(162): regionserver:38905-0x100829e11e40002, quorum=127.0.0.1:51847, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,36109,1689189455374 2023-07-12 19:17:35,960 DEBUG [RS:2;jenkins-hbase20:42773] zookeeper.ZKUtil(162): regionserver:42773-0x100829e11e40003, quorum=127.0.0.1:51847, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,42773,1689189455534 2023-07-12 19:17:35,961 DEBUG [RS:2;jenkins-hbase20:42773] zookeeper.ZKUtil(162): regionserver:42773-0x100829e11e40003, quorum=127.0.0.1:51847, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,38905,1689189455481 2023-07-12 19:17:35,961 DEBUG [RS:2;jenkins-hbase20:42773] zookeeper.ZKUtil(162): regionserver:42773-0x100829e11e40003, quorum=127.0.0.1:51847, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,36109,1689189455374 2023-07-12 19:17:35,962 INFO [PEWorker-1] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-12 19:17:35,962 DEBUG [RS:1;jenkins-hbase20:38905] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-12 19:17:35,962 DEBUG [PEWorker-1] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-12 19:17:35,962 INFO [RS:1;jenkins-hbase20:38905] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-12 19:17:35,963 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_ASSIGN_META, locked=true; InitMetaProcedure table=hbase:meta 2023-07-12 19:17:35,963 INFO [PEWorker-1] procedure.InitMetaProcedure(103): Going to assign meta 2023-07-12 19:17:35,963 INFO [PEWorker-1] 
procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN}] 2023-07-12 19:17:35,964 DEBUG [RS:2;jenkins-hbase20:42773] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-12 19:17:35,964 INFO [RS:2;jenkins-hbase20:42773] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-12 19:17:35,965 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN 2023-07-12 19:17:35,965 INFO [RS:1;jenkins-hbase20:38905] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-12 19:17:35,971 INFO [RS:1;jenkins-hbase20:38905] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-12 19:17:35,971 INFO [RS:1;jenkins-hbase20:38905] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-12 19:17:35,971 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN; state=OFFLINE, location=null; forceNewPlan=false, retain=false 2023-07-12 19:17:35,972 INFO [RS:2;jenkins-hbase20:42773] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-12 19:17:35,972 INFO [RS:1;jenkins-hbase20:38905] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-12 19:17:35,973 INFO [RS:2;jenkins-hbase20:42773] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-12 19:17:35,973 INFO [RS:2;jenkins-hbase20:42773] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-12 19:17:35,977 INFO [RS:1;jenkins-hbase20:38905] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
2023-07-12 19:17:35,977 INFO [RS:2;jenkins-hbase20:42773] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-12 19:17:35,977 DEBUG [RS:1;jenkins-hbase20:38905] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-12 19:17:35,977 DEBUG [RS:1;jenkins-hbase20:38905] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-12 19:17:35,978 DEBUG [RS:1;jenkins-hbase20:38905] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-12 19:17:35,978 DEBUG [RS:1;jenkins-hbase20:38905] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-12 19:17:35,978 DEBUG [RS:1;jenkins-hbase20:38905] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-12 19:17:35,978 DEBUG [RS:1;jenkins-hbase20:38905] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase20:0, corePoolSize=2, maxPoolSize=2 2023-07-12 19:17:35,978 DEBUG [RS:1;jenkins-hbase20:38905] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-12 19:17:35,978 DEBUG [RS:1;jenkins-hbase20:38905] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-12 19:17:35,978 DEBUG [RS:1;jenkins-hbase20:38905] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-12 19:17:35,978 DEBUG [RS:1;jenkins-hbase20:38905] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-12 19:17:35,978 INFO [RS:2;jenkins-hbase20:42773] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
2023-07-12 19:17:35,979 DEBUG [RS:2;jenkins-hbase20:42773] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-12 19:17:35,979 DEBUG [RS:2;jenkins-hbase20:42773] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-12 19:17:35,979 DEBUG [RS:2;jenkins-hbase20:42773] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-12 19:17:35,979 DEBUG [RS:2;jenkins-hbase20:42773] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-12 19:17:35,979 DEBUG [RS:2;jenkins-hbase20:42773] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-12 19:17:35,979 DEBUG [RS:2;jenkins-hbase20:42773] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase20:0, corePoolSize=2, maxPoolSize=2 2023-07-12 19:17:35,979 DEBUG [RS:2;jenkins-hbase20:42773] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-12 19:17:35,979 DEBUG [RS:2;jenkins-hbase20:42773] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-12 19:17:35,979 DEBUG [RS:2;jenkins-hbase20:42773] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-12 19:17:35,979 DEBUG [RS:2;jenkins-hbase20:42773] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-12 19:17:35,988 INFO [RS:1;jenkins-hbase20:38905] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-12 19:17:35,989 INFO [RS:1;jenkins-hbase20:38905] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-12 19:17:35,989 INFO [RS:1;jenkins-hbase20:38905] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-12 19:17:35,989 INFO [RS:1;jenkins-hbase20:38905] hbase.ChoreService(166): Chore ScheduledChore name=FileSystemUtilizationChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-12 19:17:35,990 INFO [RS:2;jenkins-hbase20:42773] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-12 19:17:35,991 INFO [RS:2;jenkins-hbase20:42773] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-12 19:17:35,991 INFO [RS:2;jenkins-hbase20:42773] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-12 19:17:35,991 INFO [RS:2;jenkins-hbase20:42773] hbase.ChoreService(166): Chore ScheduledChore name=FileSystemUtilizationChore, period=300000, unit=MILLISECONDS is enabled. 
2023-07-12 19:17:36,000 INFO [RS:0;jenkins-hbase20:36109] regionserver.Replication(203): jenkins-hbase20.apache.org,36109,1689189455374 started 2023-07-12 19:17:36,001 INFO [RS:0;jenkins-hbase20:36109] regionserver.HRegionServer(1637): Serving as jenkins-hbase20.apache.org,36109,1689189455374, RpcServer on jenkins-hbase20.apache.org/148.251.75.209:36109, sessionid=0x100829e11e40001 2023-07-12 19:17:36,001 DEBUG [RS:0;jenkins-hbase20:36109] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-12 19:17:36,001 DEBUG [RS:0;jenkins-hbase20:36109] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase20.apache.org,36109,1689189455374 2023-07-12 19:17:36,001 DEBUG [RS:0;jenkins-hbase20:36109] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase20.apache.org,36109,1689189455374' 2023-07-12 19:17:36,001 DEBUG [RS:0;jenkins-hbase20:36109] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-12 19:17:36,005 DEBUG [RS:0;jenkins-hbase20:36109] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-12 19:17:36,006 DEBUG [RS:0;jenkins-hbase20:36109] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-12 19:17:36,006 DEBUG [RS:0;jenkins-hbase20:36109] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-12 19:17:36,006 DEBUG [RS:0;jenkins-hbase20:36109] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase20.apache.org,36109,1689189455374 2023-07-12 19:17:36,006 DEBUG [RS:0;jenkins-hbase20:36109] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase20.apache.org,36109,1689189455374' 2023-07-12 19:17:36,006 DEBUG [RS:0;jenkins-hbase20:36109] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-12 19:17:36,007 DEBUG [RS:0;jenkins-hbase20:36109] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-12 19:17:36,008 DEBUG [RS:0;jenkins-hbase20:36109] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-12 19:17:36,008 INFO [RS:0;jenkins-hbase20:36109] quotas.RegionServerRpcQuotaManager(67): Initializing RPC quota support 2023-07-12 19:17:36,011 INFO [RS:0;jenkins-hbase20:36109] hbase.ChoreService(166): Chore ScheduledChore name=QuotaRefresherChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-12 19:17:36,011 DEBUG [RS:0;jenkins-hbase20:36109] zookeeper.ZKUtil(398): regionserver:36109-0x100829e11e40001, quorum=127.0.0.1:51847, baseZNode=/hbase Unable to get data of znode /hbase/rpc-throttle because node does not exist (not an error) 2023-07-12 19:17:36,012 INFO [RS:0;jenkins-hbase20:36109] quotas.RegionServerRpcQuotaManager(73): Start rpc quota manager and rpc throttle enabled is true 2023-07-12 19:17:36,012 INFO [RS:0;jenkins-hbase20:36109] hbase.ChoreService(166): Chore ScheduledChore name=SpaceQuotaRefresherChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-12 19:17:36,013 INFO [RS:0;jenkins-hbase20:36109] hbase.ChoreService(166): Chore ScheduledChore name=RegionSizeReportingChore, period=60000, unit=MILLISECONDS is enabled. 
2023-07-12 19:17:36,016 INFO [RS:1;jenkins-hbase20:38905] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-12 19:17:36,016 INFO [RS:1;jenkins-hbase20:38905] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,38905,1689189455481-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-12 19:17:36,018 INFO [RS:2;jenkins-hbase20:42773] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-12 19:17:36,019 INFO [RS:2;jenkins-hbase20:42773] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,42773,1689189455534-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-12 19:17:36,030 INFO [RS:1;jenkins-hbase20:38905] regionserver.Replication(203): jenkins-hbase20.apache.org,38905,1689189455481 started 2023-07-12 19:17:36,030 INFO [RS:1;jenkins-hbase20:38905] regionserver.HRegionServer(1637): Serving as jenkins-hbase20.apache.org,38905,1689189455481, RpcServer on jenkins-hbase20.apache.org/148.251.75.209:38905, sessionid=0x100829e11e40002 2023-07-12 19:17:36,030 DEBUG [RS:1;jenkins-hbase20:38905] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-12 19:17:36,030 DEBUG [RS:1;jenkins-hbase20:38905] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase20.apache.org,38905,1689189455481 2023-07-12 19:17:36,030 DEBUG [RS:1;jenkins-hbase20:38905] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase20.apache.org,38905,1689189455481' 2023-07-12 19:17:36,030 DEBUG [RS:1;jenkins-hbase20:38905] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-12 19:17:36,030 DEBUG [RS:1;jenkins-hbase20:38905] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-12 19:17:36,031 DEBUG [RS:1;jenkins-hbase20:38905] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-12 19:17:36,031 DEBUG [RS:1;jenkins-hbase20:38905] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-12 19:17:36,031 DEBUG [RS:1;jenkins-hbase20:38905] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase20.apache.org,38905,1689189455481 2023-07-12 19:17:36,031 DEBUG [RS:1;jenkins-hbase20:38905] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase20.apache.org,38905,1689189455481' 2023-07-12 19:17:36,032 DEBUG [RS:1;jenkins-hbase20:38905] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-12 19:17:36,032 DEBUG [RS:1;jenkins-hbase20:38905] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-12 19:17:36,032 DEBUG [RS:1;jenkins-hbase20:38905] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-12 19:17:36,032 INFO [RS:1;jenkins-hbase20:38905] quotas.RegionServerRpcQuotaManager(67): Initializing RPC quota support 2023-07-12 19:17:36,032 INFO [RS:1;jenkins-hbase20:38905] hbase.ChoreService(166): Chore ScheduledChore name=QuotaRefresherChore, period=300000, unit=MILLISECONDS is enabled. 
2023-07-12 19:17:36,033 DEBUG [RS:1;jenkins-hbase20:38905] zookeeper.ZKUtil(398): regionserver:38905-0x100829e11e40002, quorum=127.0.0.1:51847, baseZNode=/hbase Unable to get data of znode /hbase/rpc-throttle because node does not exist (not an error) 2023-07-12 19:17:36,033 INFO [RS:1;jenkins-hbase20:38905] quotas.RegionServerRpcQuotaManager(73): Start rpc quota manager and rpc throttle enabled is true 2023-07-12 19:17:36,033 INFO [RS:1;jenkins-hbase20:38905] hbase.ChoreService(166): Chore ScheduledChore name=SpaceQuotaRefresherChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-12 19:17:36,033 INFO [RS:1;jenkins-hbase20:38905] hbase.ChoreService(166): Chore ScheduledChore name=RegionSizeReportingChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-12 19:17:36,040 INFO [RS:2;jenkins-hbase20:42773] regionserver.Replication(203): jenkins-hbase20.apache.org,42773,1689189455534 started 2023-07-12 19:17:36,040 INFO [RS:2;jenkins-hbase20:42773] regionserver.HRegionServer(1637): Serving as jenkins-hbase20.apache.org,42773,1689189455534, RpcServer on jenkins-hbase20.apache.org/148.251.75.209:42773, sessionid=0x100829e11e40003 2023-07-12 19:17:36,040 DEBUG [RS:2;jenkins-hbase20:42773] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-12 19:17:36,040 DEBUG [RS:2;jenkins-hbase20:42773] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase20.apache.org,42773,1689189455534 2023-07-12 19:17:36,041 DEBUG [RS:2;jenkins-hbase20:42773] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase20.apache.org,42773,1689189455534' 2023-07-12 19:17:36,041 DEBUG [RS:2;jenkins-hbase20:42773] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-12 19:17:36,041 DEBUG [RS:2;jenkins-hbase20:42773] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-12 19:17:36,042 DEBUG [RS:2;jenkins-hbase20:42773] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-12 19:17:36,042 DEBUG [RS:2;jenkins-hbase20:42773] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-12 19:17:36,042 DEBUG [RS:2;jenkins-hbase20:42773] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase20.apache.org,42773,1689189455534 2023-07-12 19:17:36,042 DEBUG [RS:2;jenkins-hbase20:42773] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase20.apache.org,42773,1689189455534' 2023-07-12 19:17:36,042 DEBUG [RS:2;jenkins-hbase20:42773] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-12 19:17:36,042 DEBUG [RS:2;jenkins-hbase20:42773] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-12 19:17:36,043 DEBUG [RS:2;jenkins-hbase20:42773] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-12 19:17:36,043 INFO [RS:2;jenkins-hbase20:42773] quotas.RegionServerRpcQuotaManager(67): Initializing RPC quota support 2023-07-12 19:17:36,043 INFO [RS:2;jenkins-hbase20:42773] hbase.ChoreService(166): Chore ScheduledChore name=QuotaRefresherChore, period=300000, unit=MILLISECONDS is enabled. 
2023-07-12 19:17:36,043 DEBUG [RS:2;jenkins-hbase20:42773] zookeeper.ZKUtil(398): regionserver:42773-0x100829e11e40003, quorum=127.0.0.1:51847, baseZNode=/hbase Unable to get data of znode /hbase/rpc-throttle because node does not exist (not an error) 2023-07-12 19:17:36,043 INFO [RS:2;jenkins-hbase20:42773] quotas.RegionServerRpcQuotaManager(73): Start rpc quota manager and rpc throttle enabled is true 2023-07-12 19:17:36,043 INFO [RS:2;jenkins-hbase20:42773] hbase.ChoreService(166): Chore ScheduledChore name=SpaceQuotaRefresherChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-12 19:17:36,043 INFO [RS:2;jenkins-hbase20:42773] hbase.ChoreService(166): Chore ScheduledChore name=RegionSizeReportingChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-12 19:17:36,117 INFO [RS:0;jenkins-hbase20:36109] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase20.apache.org%2C36109%2C1689189455374, suffix=, logDir=hdfs://localhost.localdomain:38007/user/jenkins/test-data/171d5f8e-5f45-9a5a-0a3b-0ee09af95a3e/WALs/jenkins-hbase20.apache.org,36109,1689189455374, archiveDir=hdfs://localhost.localdomain:38007/user/jenkins/test-data/171d5f8e-5f45-9a5a-0a3b-0ee09af95a3e/oldWALs, maxLogs=32 2023-07-12 19:17:36,122 DEBUG [jenkins-hbase20:40539] assignment.AssignmentManager(2176): Processing assignQueue; systemServersCount=3, allServersCount=3 2023-07-12 19:17:36,122 DEBUG [jenkins-hbase20:40539] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase20.apache.org=0} racks are {/default-rack=0} 2023-07-12 19:17:36,122 DEBUG [jenkins-hbase20:40539] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-12 19:17:36,122 DEBUG [jenkins-hbase20:40539] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-12 19:17:36,122 DEBUG [jenkins-hbase20:40539] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-12 19:17:36,122 DEBUG [jenkins-hbase20:40539] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-12 19:17:36,124 INFO [PEWorker-3] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase20.apache.org,42773,1689189455534, state=OPENING 2023-07-12 19:17:36,125 DEBUG [PEWorker-3] zookeeper.MetaTableLocator(240): hbase:meta region location doesn't exist, create it 2023-07-12 19:17:36,125 DEBUG [Listener at localhost.localdomain/37875-EventThread] zookeeper.ZKWatcher(600): master:40539-0x100829e11e40000, quorum=127.0.0.1:51847, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-12 19:17:36,126 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-12 19:17:36,127 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=3, ppid=2, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase20.apache.org,42773,1689189455534}] 2023-07-12 19:17:36,135 INFO [RS:1;jenkins-hbase20:38905] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase20.apache.org%2C38905%2C1689189455481, suffix=, logDir=hdfs://localhost.localdomain:38007/user/jenkins/test-data/171d5f8e-5f45-9a5a-0a3b-0ee09af95a3e/WALs/jenkins-hbase20.apache.org,38905,1689189455481, archiveDir=hdfs://localhost.localdomain:38007/user/jenkins/test-data/171d5f8e-5f45-9a5a-0a3b-0ee09af95a3e/oldWALs, maxLogs=32 2023-07-12 19:17:36,146 INFO 
[RS:2;jenkins-hbase20:42773] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase20.apache.org%2C42773%2C1689189455534, suffix=, logDir=hdfs://localhost.localdomain:38007/user/jenkins/test-data/171d5f8e-5f45-9a5a-0a3b-0ee09af95a3e/WALs/jenkins-hbase20.apache.org,42773,1689189455534, archiveDir=hdfs://localhost.localdomain:38007/user/jenkins/test-data/171d5f8e-5f45-9a5a-0a3b-0ee09af95a3e/oldWALs, maxLogs=32 2023-07-12 19:17:36,156 DEBUG [RS-EventLoopGroup-11-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:39117,DS-46458d06-2591-4caf-9337-88a74171264e,DISK] 2023-07-12 19:17:36,156 DEBUG [RS-EventLoopGroup-11-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:39465,DS-99857824-e819-44ab-a75d-a9efdd44967c,DISK] 2023-07-12 19:17:36,157 DEBUG [RS-EventLoopGroup-11-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:34477,DS-fca02bc4-2d41-4686-aa59-e82f4591fab6,DISK] 2023-07-12 19:17:36,172 DEBUG [RS-EventLoopGroup-11-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:34477,DS-fca02bc4-2d41-4686-aa59-e82f4591fab6,DISK] 2023-07-12 19:17:36,172 DEBUG [RS-EventLoopGroup-11-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:39117,DS-46458d06-2591-4caf-9337-88a74171264e,DISK] 2023-07-12 19:17:36,172 DEBUG [RS-EventLoopGroup-11-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:39465,DS-99857824-e819-44ab-a75d-a9efdd44967c,DISK] 2023-07-12 19:17:36,173 INFO [RS:0;jenkins-hbase20:36109] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/171d5f8e-5f45-9a5a-0a3b-0ee09af95a3e/WALs/jenkins-hbase20.apache.org,36109,1689189455374/jenkins-hbase20.apache.org%2C36109%2C1689189455374.1689189456119 2023-07-12 19:17:36,194 DEBUG [RS:0;jenkins-hbase20:36109] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:34477,DS-fca02bc4-2d41-4686-aa59-e82f4591fab6,DISK], DatanodeInfoWithStorage[127.0.0.1:39117,DS-46458d06-2591-4caf-9337-88a74171264e,DISK], DatanodeInfoWithStorage[127.0.0.1:39465,DS-99857824-e819-44ab-a75d-a9efdd44967c,DISK]] 2023-07-12 19:17:36,198 INFO [RS:1;jenkins-hbase20:38905] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/171d5f8e-5f45-9a5a-0a3b-0ee09af95a3e/WALs/jenkins-hbase20.apache.org,38905,1689189455481/jenkins-hbase20.apache.org%2C38905%2C1689189455481.1689189456136 2023-07-12 19:17:36,202 DEBUG [RS:1;jenkins-hbase20:38905] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:34477,DS-fca02bc4-2d41-4686-aa59-e82f4591fab6,DISK], DatanodeInfoWithStorage[127.0.0.1:39117,DS-46458d06-2591-4caf-9337-88a74171264e,DISK], DatanodeInfoWithStorage[127.0.0.1:39465,DS-99857824-e819-44ab-a75d-a9efdd44967c,DISK]] 
2023-07-12 19:17:36,207 DEBUG [RS-EventLoopGroup-11-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:39465,DS-99857824-e819-44ab-a75d-a9efdd44967c,DISK] 2023-07-12 19:17:36,207 DEBUG [RS-EventLoopGroup-11-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:34477,DS-fca02bc4-2d41-4686-aa59-e82f4591fab6,DISK] 2023-07-12 19:17:36,217 DEBUG [RS-EventLoopGroup-11-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:39117,DS-46458d06-2591-4caf-9337-88a74171264e,DISK] 2023-07-12 19:17:36,224 INFO [RS:2;jenkins-hbase20:42773] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/171d5f8e-5f45-9a5a-0a3b-0ee09af95a3e/WALs/jenkins-hbase20.apache.org,42773,1689189455534/jenkins-hbase20.apache.org%2C42773%2C1689189455534.1689189456147 2023-07-12 19:17:36,225 DEBUG [RS:2;jenkins-hbase20:42773] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:34477,DS-fca02bc4-2d41-4686-aa59-e82f4591fab6,DISK], DatanodeInfoWithStorage[127.0.0.1:39465,DS-99857824-e819-44ab-a75d-a9efdd44967c,DISK], DatanodeInfoWithStorage[127.0.0.1:39117,DS-46458d06-2591-4caf-9337-88a74171264e,DISK]] 2023-07-12 19:17:36,285 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase20.apache.org,42773,1689189455534 2023-07-12 19:17:36,285 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-12 19:17:36,289 INFO [RS-EventLoopGroup-11-2] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:48956, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-12 19:17:36,299 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740 2023-07-12 19:17:36,299 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-12 19:17:36,301 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase20.apache.org%2C42773%2C1689189455534.meta, suffix=.meta, logDir=hdfs://localhost.localdomain:38007/user/jenkins/test-data/171d5f8e-5f45-9a5a-0a3b-0ee09af95a3e/WALs/jenkins-hbase20.apache.org,42773,1689189455534, archiveDir=hdfs://localhost.localdomain:38007/user/jenkins/test-data/171d5f8e-5f45-9a5a-0a3b-0ee09af95a3e/oldWALs, maxLogs=32 2023-07-12 19:17:36,328 DEBUG [RS-EventLoopGroup-11-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:39117,DS-46458d06-2591-4caf-9337-88a74171264e,DISK] 2023-07-12 19:17:36,335 DEBUG [RS-EventLoopGroup-11-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:39465,DS-99857824-e819-44ab-a75d-a9efdd44967c,DISK] 2023-07-12 19:17:36,335 DEBUG [RS-EventLoopGroup-11-2] 
asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:34477,DS-fca02bc4-2d41-4686-aa59-e82f4591fab6,DISK] 2023-07-12 19:17:36,341 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/171d5f8e-5f45-9a5a-0a3b-0ee09af95a3e/WALs/jenkins-hbase20.apache.org,42773,1689189455534/jenkins-hbase20.apache.org%2C42773%2C1689189455534.meta.1689189456302.meta 2023-07-12 19:17:36,342 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:39117,DS-46458d06-2591-4caf-9337-88a74171264e,DISK], DatanodeInfoWithStorage[127.0.0.1:39465,DS-99857824-e819-44ab-a75d-a9efdd44967c,DISK], DatanodeInfoWithStorage[127.0.0.1:34477,DS-fca02bc4-2d41-4686-aa59-e82f4591fab6,DISK]] 2023-07-12 19:17:36,343 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-07-12 19:17:36,343 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-12 19:17:36,344 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2023-07-12 19:17:36,344 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 
2023-07-12 19:17:36,344 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-07-12 19:17:36,345 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 19:17:36,345 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7894): checking encryption for 1588230740 2023-07-12 19:17:36,345 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7897): checking classloading for 1588230740 2023-07-12 19:17:36,355 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-12 19:17:36,357 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:38007/user/jenkins/test-data/171d5f8e-5f45-9a5a-0a3b-0ee09af95a3e/data/hbase/meta/1588230740/info 2023-07-12 19:17:36,357 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:38007/user/jenkins/test-data/171d5f8e-5f45-9a5a-0a3b-0ee09af95a3e/data/hbase/meta/1588230740/info 2023-07-12 19:17:36,358 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-12 19:17:36,362 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 19:17:36,362 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-12 19:17:36,363 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:38007/user/jenkins/test-data/171d5f8e-5f45-9a5a-0a3b-0ee09af95a3e/data/hbase/meta/1588230740/rep_barrier 2023-07-12 19:17:36,364 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:38007/user/jenkins/test-data/171d5f8e-5f45-9a5a-0a3b-0ee09af95a3e/data/hbase/meta/1588230740/rep_barrier 2023-07-12 19:17:36,365 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files 
[minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-12 19:17:36,366 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 19:17:36,366 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-12 19:17:36,368 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:38007/user/jenkins/test-data/171d5f8e-5f45-9a5a-0a3b-0ee09af95a3e/data/hbase/meta/1588230740/table 2023-07-12 19:17:36,368 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:38007/user/jenkins/test-data/171d5f8e-5f45-9a5a-0a3b-0ee09af95a3e/data/hbase/meta/1588230740/table 2023-07-12 19:17:36,369 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-12 19:17:36,370 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 19:17:36,377 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:38007/user/jenkins/test-data/171d5f8e-5f45-9a5a-0a3b-0ee09af95a3e/data/hbase/meta/1588230740 2023-07-12 19:17:36,379 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:38007/user/jenkins/test-data/171d5f8e-5f45-9a5a-0a3b-0ee09af95a3e/data/hbase/meta/1588230740 2023-07-12 19:17:36,384 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 
2023-07-12 19:17:36,390 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-12 19:17:36,394 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9724631840, jitterRate=-0.09432308375835419}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-12 19:17:36,394 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-12 19:17:36,396 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:meta,,1.1588230740, pid=3, masterSystemTime=1689189456285 2023-07-12 19:17:36,401 WARN [ReadOnlyZKClient-127.0.0.1:51847@0x5a503f68] client.ZKConnectionRegistry(168): Meta region is in state OPENING 2023-07-12 19:17:36,404 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase20.apache.org,40539,1689189455229] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-12 19:17:36,407 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:meta,,1.1588230740 2023-07-12 19:17:36,410 INFO [RS-EventLoopGroup-11-3] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:48964, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-12 19:17:36,410 INFO [PEWorker-5] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase20.apache.org,42773,1689189455534, state=OPEN 2023-07-12 19:17:36,411 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 2023-07-12 19:17:36,413 DEBUG [Listener at localhost.localdomain/37875-EventThread] zookeeper.ZKWatcher(600): master:40539-0x100829e11e40000, quorum=127.0.0.1:51847, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-12 19:17:36,413 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase20.apache.org,40539,1689189455229] master.HMaster(2148): Client=null/null create 'hbase:rsgroup', {TABLE_ATTRIBUTES => {coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|', METADATA => {'SPLIT_POLICY' => 'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy'}}}, {NAME => 'm', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-12 19:17:36,413 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-12 19:17:36,415 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase20.apache.org,40539,1689189455229] procedure2.ProcedureExecutor(1029): Stored pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:rsgroup 2023-07-12 19:17:36,417 DEBUG [PEWorker-1] procedure2.ProcedureExecutor(1400): LOCK_EVENT_WAIT pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; 
CreateTableProcedure table=hbase:rsgroup 2023-07-12 19:17:36,421 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=3, resume processing ppid=2 2023-07-12 19:17:36,421 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=3, ppid=2, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase20.apache.org,42773,1689189455534 in 286 msec 2023-07-12 19:17:36,423 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=2, resume processing ppid=1 2023-07-12 19:17:36,424 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=2, ppid=1, state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN in 458 msec 2023-07-12 19:17:36,427 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=1, state=SUCCESS; InitMetaProcedure table=hbase:meta in 639 msec 2023-07-12 19:17:36,428 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(953): Wait for region servers to report in: status=null, state=RUNNING, startTime=1689189456428, completionTime=-1 2023-07-12 19:17:36,428 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.ServerManager(821): Finished waiting on RegionServer count=3; waited=0ms, expected min=3 server(s), max=3 server(s), master is running 2023-07-12 19:17:36,428 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] assignment.AssignmentManager(1517): Joining cluster... 2023-07-12 19:17:36,431 INFO [master/jenkins-hbase20:0:becomeActiveMaster] assignment.AssignmentManager(1529): Number of RegionServers=3 2023-07-12 19:17:36,431 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$RegionInTransitionChore; timeout=60000, timestamp=1689189516431 2023-07-12 19:17:36,431 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$DeadServerMetricRegionChore; timeout=120000, timestamp=1689189576431 2023-07-12 19:17:36,431 INFO [master/jenkins-hbase20:0:becomeActiveMaster] assignment.AssignmentManager(1536): Joined the cluster in 3 msec 2023-07-12 19:17:36,439 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,40539,1689189455229-ClusterStatusChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-12 19:17:36,440 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,40539,1689189455229-BalancerChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-12 19:17:36,440 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,40539,1689189455229-RegionNormalizerChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-12 19:17:36,440 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=CatalogJanitor-jenkins-hbase20:40539, period=300000, unit=MILLISECONDS is enabled. 2023-07-12 19:17:36,440 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HbckChore-, period=3600000, unit=MILLISECONDS is enabled. 2023-07-12 19:17:36,440 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.TableNamespaceManager(92): Namespace table not found. Creating... 
2023-07-12 19:17:36,440 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(2148): Client=null/null create 'hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-07-12 19:17:36,440 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_PRE_OPERATION 2023-07-12 19:17:36,445 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=5, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:namespace 2023-07-12 19:17:36,446 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-12 19:17:36,448 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:38007/user/jenkins/test-data/171d5f8e-5f45-9a5a-0a3b-0ee09af95a3e/.tmp/data/hbase/rsgroup/a998cdea7295a266c95a5cb722f0c6bc 2023-07-12 19:17:36,449 DEBUG [HFileArchiver-2] backup.HFileArchiver(153): Directory hdfs://localhost.localdomain:38007/user/jenkins/test-data/171d5f8e-5f45-9a5a-0a3b-0ee09af95a3e/.tmp/data/hbase/rsgroup/a998cdea7295a266c95a5cb722f0c6bc empty. 2023-07-12 19:17:36,450 DEBUG [HFileArchiver-2] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:38007/user/jenkins/test-data/171d5f8e-5f45-9a5a-0a3b-0ee09af95a3e/.tmp/data/hbase/rsgroup/a998cdea7295a266c95a5cb722f0c6bc 2023-07-12 19:17:36,450 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived hbase:rsgroup regions 2023-07-12 19:17:36,464 DEBUG [master/jenkins-hbase20:0.Chore.1] janitor.CatalogJanitor(175): 2023-07-12 19:17:36,469 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_PRE_OPERATION 2023-07-12 19:17:36,470 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-12 19:17:36,472 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:38007/user/jenkins/test-data/171d5f8e-5f45-9a5a-0a3b-0ee09af95a3e/.tmp/data/hbase/namespace/21d409e1c713baa8e90655fe26d7ba8b 2023-07-12 19:17:36,473 DEBUG [HFileArchiver-3] backup.HFileArchiver(153): Directory hdfs://localhost.localdomain:38007/user/jenkins/test-data/171d5f8e-5f45-9a5a-0a3b-0ee09af95a3e/.tmp/data/hbase/namespace/21d409e1c713baa8e90655fe26d7ba8b empty. 
2023-07-12 19:17:36,473 DEBUG [HFileArchiver-3] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:38007/user/jenkins/test-data/171d5f8e-5f45-9a5a-0a3b-0ee09af95a3e/.tmp/data/hbase/namespace/21d409e1c713baa8e90655fe26d7ba8b 2023-07-12 19:17:36,473 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(328): Archived hbase:namespace regions 2023-07-12 19:17:36,512 DEBUG [PEWorker-4] util.FSTableDescriptors(570): Wrote into hdfs://localhost.localdomain:38007/user/jenkins/test-data/171d5f8e-5f45-9a5a-0a3b-0ee09af95a3e/.tmp/data/hbase/rsgroup/.tabledesc/.tableinfo.0000000001 2023-07-12 19:17:36,513 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(7675): creating {ENCODED => a998cdea7295a266c95a5cb722f0c6bc, NAME => 'hbase:rsgroup,,1689189456413.a998cdea7295a266c95a5cb722f0c6bc.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:rsgroup', {TABLE_ATTRIBUTES => {coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|', METADATA => {'SPLIT_POLICY' => 'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy'}}}, {NAME => 'm', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:38007/user/jenkins/test-data/171d5f8e-5f45-9a5a-0a3b-0ee09af95a3e/.tmp 2023-07-12 19:17:36,524 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost.localdomain:38007/user/jenkins/test-data/171d5f8e-5f45-9a5a-0a3b-0ee09af95a3e/.tmp/data/hbase/namespace/.tabledesc/.tableinfo.0000000001 2023-07-12 19:17:36,529 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(7675): creating {ENCODED => 21d409e1c713baa8e90655fe26d7ba8b, NAME => 'hbase:namespace,,1689189456440.21d409e1c713baa8e90655fe26d7ba8b.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:38007/user/jenkins/test-data/171d5f8e-5f45-9a5a-0a3b-0ee09af95a3e/.tmp 2023-07-12 19:17:36,547 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1689189456413.a998cdea7295a266c95a5cb722f0c6bc.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 19:17:36,547 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1604): Closing a998cdea7295a266c95a5cb722f0c6bc, disabling compactions & flushes 2023-07-12 19:17:36,547 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1689189456413.a998cdea7295a266c95a5cb722f0c6bc. 2023-07-12 19:17:36,547 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1689189456413.a998cdea7295a266c95a5cb722f0c6bc. 2023-07-12 19:17:36,547 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1689189456413.a998cdea7295a266c95a5cb722f0c6bc. 
after waiting 0 ms 2023-07-12 19:17:36,547 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1689189456413.a998cdea7295a266c95a5cb722f0c6bc. 2023-07-12 19:17:36,547 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1689189456413.a998cdea7295a266c95a5cb722f0c6bc. 2023-07-12 19:17:36,547 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1558): Region close journal for a998cdea7295a266c95a5cb722f0c6bc: 2023-07-12 19:17:36,552 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_ADD_TO_META 2023-07-12 19:17:36,553 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:rsgroup,,1689189456413.a998cdea7295a266c95a5cb722f0c6bc.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689189456553"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689189456553"}]},"ts":"1689189456553"} 2023-07-12 19:17:36,553 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1689189456440.21d409e1c713baa8e90655fe26d7ba8b.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 19:17:36,553 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1604): Closing 21d409e1c713baa8e90655fe26d7ba8b, disabling compactions & flushes 2023-07-12 19:17:36,553 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1689189456440.21d409e1c713baa8e90655fe26d7ba8b. 2023-07-12 19:17:36,553 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1689189456440.21d409e1c713baa8e90655fe26d7ba8b. 2023-07-12 19:17:36,553 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1689189456440.21d409e1c713baa8e90655fe26d7ba8b. after waiting 0 ms 2023-07-12 19:17:36,553 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1689189456440.21d409e1c713baa8e90655fe26d7ba8b. 2023-07-12 19:17:36,553 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1838): Closed hbase:namespace,,1689189456440.21d409e1c713baa8e90655fe26d7ba8b. 2023-07-12 19:17:36,553 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1558): Region close journal for 21d409e1c713baa8e90655fe26d7ba8b: 2023-07-12 19:17:36,555 INFO [PEWorker-4] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 
2023-07-12 19:17:36,556 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-12 19:17:36,556 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ADD_TO_META 2023-07-12 19:17:36,556 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:rsgroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689189456556"}]},"ts":"1689189456556"} 2023-07-12 19:17:36,557 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:namespace,,1689189456440.21d409e1c713baa8e90655fe26d7ba8b.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689189456557"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689189456557"}]},"ts":"1689189456557"} 2023-07-12 19:17:36,563 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=hbase:rsgroup, state=ENABLING in hbase:meta 2023-07-12 19:17:36,564 INFO [PEWorker-1] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-12 19:17:36,565 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-12 19:17:36,565 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689189456565"}]},"ts":"1689189456565"} 2023-07-12 19:17:36,567 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLING in hbase:meta 2023-07-12 19:17:36,573 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase20.apache.org=0} racks are {/default-rack=0} 2023-07-12 19:17:36,573 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-12 19:17:36,573 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase20.apache.org=0} racks are {/default-rack=0} 2023-07-12 19:17:36,573 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-12 19:17:36,574 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-12 19:17:36,574 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-12 19:17:36,574 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-12 19:17:36,574 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-12 19:17:36,574 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-12 19:17:36,574 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-12 19:17:36,574 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=6, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:rsgroup, region=a998cdea7295a266c95a5cb722f0c6bc, ASSIGN}] 2023-07-12 19:17:36,574 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=7, ppid=5, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; 
TransitRegionStateProcedure table=hbase:namespace, region=21d409e1c713baa8e90655fe26d7ba8b, ASSIGN}] 2023-07-12 19:17:36,576 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=6, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:rsgroup, region=a998cdea7295a266c95a5cb722f0c6bc, ASSIGN 2023-07-12 19:17:36,577 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=7, ppid=5, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=21d409e1c713baa8e90655fe26d7ba8b, ASSIGN 2023-07-12 19:17:36,578 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=7, ppid=5, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:namespace, region=21d409e1c713baa8e90655fe26d7ba8b, ASSIGN; state=OFFLINE, location=jenkins-hbase20.apache.org,36109,1689189455374; forceNewPlan=false, retain=false 2023-07-12 19:17:36,578 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=6, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:rsgroup, region=a998cdea7295a266c95a5cb722f0c6bc, ASSIGN; state=OFFLINE, location=jenkins-hbase20.apache.org,42773,1689189455534; forceNewPlan=false, retain=false 2023-07-12 19:17:36,578 INFO [jenkins-hbase20:40539] balancer.BaseLoadBalancer(1545): Reassigned 2 regions. 2 retained the pre-restart assignment. 2023-07-12 19:17:36,581 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=7 updating hbase:meta row=21d409e1c713baa8e90655fe26d7ba8b, regionState=OPENING, regionLocation=jenkins-hbase20.apache.org,36109,1689189455374 2023-07-12 19:17:36,581 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=6 updating hbase:meta row=a998cdea7295a266c95a5cb722f0c6bc, regionState=OPENING, regionLocation=jenkins-hbase20.apache.org,42773,1689189455534 2023-07-12 19:17:36,581 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:namespace,,1689189456440.21d409e1c713baa8e90655fe26d7ba8b.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689189456581"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689189456581"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689189456581"}]},"ts":"1689189456581"} 2023-07-12 19:17:36,581 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:rsgroup,,1689189456413.a998cdea7295a266c95a5cb722f0c6bc.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689189456581"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689189456581"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689189456581"}]},"ts":"1689189456581"} 2023-07-12 19:17:36,586 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=8, ppid=7, state=RUNNABLE; OpenRegionProcedure 21d409e1c713baa8e90655fe26d7ba8b, server=jenkins-hbase20.apache.org,36109,1689189455374}] 2023-07-12 19:17:36,587 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=9, ppid=6, state=RUNNABLE; OpenRegionProcedure a998cdea7295a266c95a5cb722f0c6bc, server=jenkins-hbase20.apache.org,42773,1689189455534}] 2023-07-12 19:17:36,739 DEBUG [RSProcedureDispatcher-pool-1] master.ServerManager(712): New admin connection to jenkins-hbase20.apache.org,36109,1689189455374 2023-07-12 19:17:36,739 DEBUG 
[RSProcedureDispatcher-pool-1] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-12 19:17:36,743 INFO [RS-EventLoopGroup-9-2] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:34116, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-12 19:17:36,747 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(130): Open hbase:rsgroup,,1689189456413.a998cdea7295a266c95a5cb722f0c6bc. 2023-07-12 19:17:36,747 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => a998cdea7295a266c95a5cb722f0c6bc, NAME => 'hbase:rsgroup,,1689189456413.a998cdea7295a266c95a5cb722f0c6bc.', STARTKEY => '', ENDKEY => ''} 2023-07-12 19:17:36,747 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-12 19:17:36,747 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:rsgroup,,1689189456413.a998cdea7295a266c95a5cb722f0c6bc. service=MultiRowMutationService 2023-07-12 19:17:36,748 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:rsgroup successfully. 2023-07-12 19:17:36,748 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table rsgroup a998cdea7295a266c95a5cb722f0c6bc 2023-07-12 19:17:36,748 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1689189456413.a998cdea7295a266c95a5cb722f0c6bc.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 19:17:36,748 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7894): checking encryption for a998cdea7295a266c95a5cb722f0c6bc 2023-07-12 19:17:36,748 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7897): checking classloading for a998cdea7295a266c95a5cb722f0c6bc 2023-07-12 19:17:36,750 INFO [StoreOpener-a998cdea7295a266c95a5cb722f0c6bc-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family m of region a998cdea7295a266c95a5cb722f0c6bc 2023-07-12 19:17:36,750 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1689189456440.21d409e1c713baa8e90655fe26d7ba8b. 
2023-07-12 19:17:36,751 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 21d409e1c713baa8e90655fe26d7ba8b, NAME => 'hbase:namespace,,1689189456440.21d409e1c713baa8e90655fe26d7ba8b.', STARTKEY => '', ENDKEY => ''} 2023-07-12 19:17:36,751 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace 21d409e1c713baa8e90655fe26d7ba8b 2023-07-12 19:17:36,751 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1689189456440.21d409e1c713baa8e90655fe26d7ba8b.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 19:17:36,751 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7894): checking encryption for 21d409e1c713baa8e90655fe26d7ba8b 2023-07-12 19:17:36,751 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7897): checking classloading for 21d409e1c713baa8e90655fe26d7ba8b 2023-07-12 19:17:36,752 DEBUG [StoreOpener-a998cdea7295a266c95a5cb722f0c6bc-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:38007/user/jenkins/test-data/171d5f8e-5f45-9a5a-0a3b-0ee09af95a3e/data/hbase/rsgroup/a998cdea7295a266c95a5cb722f0c6bc/m 2023-07-12 19:17:36,752 DEBUG [StoreOpener-a998cdea7295a266c95a5cb722f0c6bc-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:38007/user/jenkins/test-data/171d5f8e-5f45-9a5a-0a3b-0ee09af95a3e/data/hbase/rsgroup/a998cdea7295a266c95a5cb722f0c6bc/m 2023-07-12 19:17:36,752 INFO [StoreOpener-a998cdea7295a266c95a5cb722f0c6bc-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region a998cdea7295a266c95a5cb722f0c6bc columnFamilyName m 2023-07-12 19:17:36,753 INFO [StoreOpener-21d409e1c713baa8e90655fe26d7ba8b-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 21d409e1c713baa8e90655fe26d7ba8b 2023-07-12 19:17:36,753 INFO [StoreOpener-a998cdea7295a266c95a5cb722f0c6bc-1] regionserver.HStore(310): Store=a998cdea7295a266c95a5cb722f0c6bc/m, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 19:17:36,754 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:38007/user/jenkins/test-data/171d5f8e-5f45-9a5a-0a3b-0ee09af95a3e/data/hbase/rsgroup/a998cdea7295a266c95a5cb722f0c6bc 2023-07-12 19:17:36,754 DEBUG 
[RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:38007/user/jenkins/test-data/171d5f8e-5f45-9a5a-0a3b-0ee09af95a3e/data/hbase/rsgroup/a998cdea7295a266c95a5cb722f0c6bc 2023-07-12 19:17:36,755 DEBUG [StoreOpener-21d409e1c713baa8e90655fe26d7ba8b-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:38007/user/jenkins/test-data/171d5f8e-5f45-9a5a-0a3b-0ee09af95a3e/data/hbase/namespace/21d409e1c713baa8e90655fe26d7ba8b/info 2023-07-12 19:17:36,755 DEBUG [StoreOpener-21d409e1c713baa8e90655fe26d7ba8b-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:38007/user/jenkins/test-data/171d5f8e-5f45-9a5a-0a3b-0ee09af95a3e/data/hbase/namespace/21d409e1c713baa8e90655fe26d7ba8b/info 2023-07-12 19:17:36,755 INFO [StoreOpener-21d409e1c713baa8e90655fe26d7ba8b-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 21d409e1c713baa8e90655fe26d7ba8b columnFamilyName info 2023-07-12 19:17:36,756 INFO [StoreOpener-21d409e1c713baa8e90655fe26d7ba8b-1] regionserver.HStore(310): Store=21d409e1c713baa8e90655fe26d7ba8b/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 19:17:36,771 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:38007/user/jenkins/test-data/171d5f8e-5f45-9a5a-0a3b-0ee09af95a3e/data/hbase/namespace/21d409e1c713baa8e90655fe26d7ba8b 2023-07-12 19:17:36,771 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:38007/user/jenkins/test-data/171d5f8e-5f45-9a5a-0a3b-0ee09af95a3e/data/hbase/namespace/21d409e1c713baa8e90655fe26d7ba8b 2023-07-12 19:17:36,772 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1055): writing seq id for a998cdea7295a266c95a5cb722f0c6bc 2023-07-12 19:17:36,775 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:38007/user/jenkins/test-data/171d5f8e-5f45-9a5a-0a3b-0ee09af95a3e/data/hbase/rsgroup/a998cdea7295a266c95a5cb722f0c6bc/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-12 19:17:36,775 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1072): Opened a998cdea7295a266c95a5cb722f0c6bc; next sequenceid=2; org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy@1267071, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-12 19:17:36,776 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(965): Region open journal for a998cdea7295a266c95a5cb722f0c6bc: 2023-07-12 19:17:36,777 INFO 
[RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:rsgroup,,1689189456413.a998cdea7295a266c95a5cb722f0c6bc., pid=9, masterSystemTime=1689189456739 2023-07-12 19:17:36,778 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1055): writing seq id for 21d409e1c713baa8e90655fe26d7ba8b 2023-07-12 19:17:36,780 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:rsgroup,,1689189456413.a998cdea7295a266c95a5cb722f0c6bc. 2023-07-12 19:17:36,780 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(158): Opened hbase:rsgroup,,1689189456413.a998cdea7295a266c95a5cb722f0c6bc. 2023-07-12 19:17:36,780 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=6 updating hbase:meta row=a998cdea7295a266c95a5cb722f0c6bc, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase20.apache.org,42773,1689189455534 2023-07-12 19:17:36,781 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:rsgroup,,1689189456413.a998cdea7295a266c95a5cb722f0c6bc.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689189456780"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689189456780"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689189456780"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689189456780"}]},"ts":"1689189456780"} 2023-07-12 19:17:36,786 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:38007/user/jenkins/test-data/171d5f8e-5f45-9a5a-0a3b-0ee09af95a3e/data/hbase/namespace/21d409e1c713baa8e90655fe26d7ba8b/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-12 19:17:36,787 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1072): Opened 21d409e1c713baa8e90655fe26d7ba8b; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=12004050400, jitterRate=0.11796431243419647}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-12 19:17:36,787 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(965): Region open journal for 21d409e1c713baa8e90655fe26d7ba8b: 2023-07-12 19:17:36,788 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=9, resume processing ppid=6 2023-07-12 19:17:36,788 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=9, ppid=6, state=SUCCESS; OpenRegionProcedure a998cdea7295a266c95a5cb722f0c6bc, server=jenkins-hbase20.apache.org,42773,1689189455534 in 198 msec 2023-07-12 19:17:36,788 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:namespace,,1689189456440.21d409e1c713baa8e90655fe26d7ba8b., pid=8, masterSystemTime=1689189456739 2023-07-12 19:17:36,792 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=6, resume processing ppid=4 2023-07-12 19:17:36,792 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=6, ppid=4, state=SUCCESS; TransitRegionStateProcedure table=hbase:rsgroup, region=a998cdea7295a266c95a5cb722f0c6bc, ASSIGN in 214 msec 2023-07-12 19:17:36,793 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, 
state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-12 19:17:36,793 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:rsgroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689189456793"}]},"ts":"1689189456793"} 2023-07-12 19:17:36,796 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=hbase:rsgroup, state=ENABLED in hbase:meta 2023-07-12 19:17:36,798 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:namespace,,1689189456440.21d409e1c713baa8e90655fe26d7ba8b. 2023-07-12 19:17:36,799 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1689189456440.21d409e1c713baa8e90655fe26d7ba8b. 2023-07-12 19:17:36,800 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=7 updating hbase:meta row=21d409e1c713baa8e90655fe26d7ba8b, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase20.apache.org,36109,1689189455374 2023-07-12 19:17:36,800 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1689189456440.21d409e1c713baa8e90655fe26d7ba8b.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689189456799"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689189456799"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689189456799"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689189456799"}]},"ts":"1689189456799"} 2023-07-12 19:17:36,801 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_POST_OPERATION 2023-07-12 19:17:36,804 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=4, state=SUCCESS; CreateTableProcedure table=hbase:rsgroup in 388 msec 2023-07-12 19:17:36,804 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=8, resume processing ppid=7 2023-07-12 19:17:36,804 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=8, ppid=7, state=SUCCESS; OpenRegionProcedure 21d409e1c713baa8e90655fe26d7ba8b, server=jenkins-hbase20.apache.org,36109,1689189455374 in 218 msec 2023-07-12 19:17:36,806 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=7, resume processing ppid=5 2023-07-12 19:17:36,806 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=7, ppid=5, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=21d409e1c713baa8e90655fe26d7ba8b, ASSIGN in 230 msec 2023-07-12 19:17:36,808 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-12 19:17:36,808 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689189456808"}]},"ts":"1689189456808"} 2023-07-12 19:17:36,810 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLED in hbase:meta 2023-07-12 19:17:36,812 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; 
CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_POST_OPERATION 2023-07-12 19:17:36,814 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=5, state=SUCCESS; CreateTableProcedure table=hbase:namespace in 372 msec 2023-07-12 19:17:36,819 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase20.apache.org,40539,1689189455229] rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(840): RSGroup table=hbase:rsgroup is online, refreshing cached information 2023-07-12 19:17:36,819 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase20.apache.org,40539,1689189455229] rsgroup.RSGroupInfoManagerImpl(534): Refreshing in Online mode. 2023-07-12 19:17:36,828 DEBUG [Listener at localhost.localdomain/37875-EventThread] zookeeper.ZKWatcher(600): master:40539-0x100829e11e40000, quorum=127.0.0.1:51847, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-12 19:17:36,828 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase20.apache.org,40539,1689189455229] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 19:17:36,829 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase20.apache.org,40539,1689189455229] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-12 19:17:36,830 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase20.apache.org,40539,1689189455229] rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(823): GroupBasedLoadBalancer is now online 2023-07-12 19:17:36,847 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:40539-0x100829e11e40000, quorum=127.0.0.1:51847, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/namespace 2023-07-12 19:17:36,848 DEBUG [Listener at localhost.localdomain/37875-EventThread] zookeeper.ZKWatcher(600): master:40539-0x100829e11e40000, quorum=127.0.0.1:51847, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/namespace 2023-07-12 19:17:36,849 DEBUG [Listener at localhost.localdomain/37875-EventThread] zookeeper.ZKWatcher(600): master:40539-0x100829e11e40000, quorum=127.0.0.1:51847, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-12 19:17:36,853 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-12 19:17:36,857 INFO [RS-EventLoopGroup-9-3] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:34128, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-12 19:17:36,862 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=10, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=default 2023-07-12 19:17:36,872 DEBUG [Listener at localhost.localdomain/37875-EventThread] zookeeper.ZKWatcher(600): master:40539-0x100829e11e40000, quorum=127.0.0.1:51847, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-12 19:17:36,875 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=10, state=SUCCESS; CreateNamespaceProcedure, namespace=default in 13 msec 2023-07-12 19:17:36,884 DEBUG 
[master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=11, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=hbase 2023-07-12 19:17:36,898 DEBUG [Listener at localhost.localdomain/37875-EventThread] zookeeper.ZKWatcher(600): master:40539-0x100829e11e40000, quorum=127.0.0.1:51847, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-12 19:17:36,902 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=11, state=SUCCESS; CreateNamespaceProcedure, namespace=hbase in 18 msec 2023-07-12 19:17:36,908 DEBUG [Listener at localhost.localdomain/37875-EventThread] zookeeper.ZKWatcher(600): master:40539-0x100829e11e40000, quorum=127.0.0.1:51847, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/default 2023-07-12 19:17:36,909 DEBUG [Listener at localhost.localdomain/37875-EventThread] zookeeper.ZKWatcher(600): master:40539-0x100829e11e40000, quorum=127.0.0.1:51847, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/hbase 2023-07-12 19:17:36,909 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(1083): Master has completed initialization 1.290sec 2023-07-12 19:17:36,914 INFO [master/jenkins-hbase20:0:becomeActiveMaster] quotas.MasterQuotaManager(103): Quota table not found. Creating... 2023-07-12 19:17:36,914 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(2148): Client=null/null create 'hbase:quota', {NAME => 'q', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'u', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-12 19:17:36,915 DEBUG [Listener at localhost.localdomain/37875] zookeeper.ReadOnlyZKClient(139): Connect 0x28cba82d to 127.0.0.1:51847 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-12 19:17:36,915 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=12, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:quota 2023-07-12 19:17:36,915 INFO [master/jenkins-hbase20:0:becomeActiveMaster] quotas.MasterQuotaManager(107): Initializing quota support 2023-07-12 19:17:36,918 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_PRE_OPERATION 2023-07-12 19:17:36,919 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-12 19:17:36,920 INFO [master/jenkins-hbase20:0:becomeActiveMaster] namespace.NamespaceStateManager(59): Namespace State Manager started. 
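The MasterQuotaManager entries above ("Quota table not found. Creating...") only appear when quota support is switched on for the cluster; the test's actual configuration is not part of this excerpt. A minimal sketch of that switch, assuming a stock HBase 2.4 client Configuration:

  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.hbase.HBaseConfiguration;

  public final class QuotaEnabledConf {
    // "hbase.quota.enabled" (QuotaUtil.QUOTA_CONF_KEY) defaults to false;
    // the master only creates hbase:quota, as logged above, when it is true.
    public static Configuration create() {
      Configuration conf = HBaseConfiguration.create();
      conf.setBoolean("hbase.quota.enabled", true);
      return conf;
    }
  }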
2023-07-12 19:17:36,922 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:38007/user/jenkins/test-data/171d5f8e-5f45-9a5a-0a3b-0ee09af95a3e/.tmp/data/hbase/quota/9448d6dfe9afb70578e5490ef8dbac89 2023-07-12 19:17:36,923 DEBUG [HFileArchiver-5] backup.HFileArchiver(153): Directory hdfs://localhost.localdomain:38007/user/jenkins/test-data/171d5f8e-5f45-9a5a-0a3b-0ee09af95a3e/.tmp/data/hbase/quota/9448d6dfe9afb70578e5490ef8dbac89 empty. 2023-07-12 19:17:36,923 DEBUG [HFileArchiver-5] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:38007/user/jenkins/test-data/171d5f8e-5f45-9a5a-0a3b-0ee09af95a3e/.tmp/data/hbase/quota/9448d6dfe9afb70578e5490ef8dbac89 2023-07-12 19:17:36,923 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(328): Archived hbase:quota regions 2023-07-12 19:17:36,925 DEBUG [Listener at localhost.localdomain/37875] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@3cb013c8, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-12 19:17:36,927 INFO [master/jenkins-hbase20:0:becomeActiveMaster] namespace.NamespaceStateManager(222): Finished updating state of 2 namespaces. 2023-07-12 19:17:36,927 INFO [master/jenkins-hbase20:0:becomeActiveMaster] namespace.NamespaceAuditor(50): NamespaceAuditor started. 2023-07-12 19:17:36,929 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=QuotaObserverChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-12 19:17:36,929 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=QuotaObserverChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-12 19:17:36,929 INFO [master/jenkins-hbase20:0:becomeActiveMaster] slowlog.SlowLogMasterService(57): Slow/Large requests logging to system table hbase:slowlog is disabled. Quitting. 2023-07-12 19:17:36,929 INFO [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKWatcher(269): not a secure deployment, proceeding 2023-07-12 19:17:36,930 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,40539,1689189455229-ExpiredMobFileCleanerChore, period=86400, unit=SECONDS is enabled. 2023-07-12 19:17:36,930 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,40539,1689189455229-MobCompactionChore, period=604800, unit=SECONDS is enabled. 
2023-07-12 19:17:36,931 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(1175): Balancer post startup initialization complete, took 0 seconds 2023-07-12 19:17:36,932 DEBUG [hconnection-0x37ba173c-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-12 19:17:36,937 INFO [RS-EventLoopGroup-11-3] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:48976, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-12 19:17:36,939 INFO [Listener at localhost.localdomain/37875] hbase.HBaseTestingUtility(1145): Minicluster is up; activeMaster=jenkins-hbase20.apache.org,40539,1689189455229 2023-07-12 19:17:36,939 INFO [Listener at localhost.localdomain/37875] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 19:17:36,943 DEBUG [Listener at localhost.localdomain/37875] ipc.RpcConnection(124): Using SIMPLE authentication for service=MasterService, sasl=false 2023-07-12 19:17:36,944 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost.localdomain:38007/user/jenkins/test-data/171d5f8e-5f45-9a5a-0a3b-0ee09af95a3e/.tmp/data/hbase/quota/.tabledesc/.tableinfo.0000000001 2023-07-12 19:17:36,955 INFO [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(7675): creating {ENCODED => 9448d6dfe9afb70578e5490ef8dbac89, NAME => 'hbase:quota,,1689189456914.9448d6dfe9afb70578e5490ef8dbac89.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:quota', {NAME => 'q', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'u', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:38007/user/jenkins/test-data/171d5f8e-5f45-9a5a-0a3b-0ee09af95a3e/.tmp 2023-07-12 19:17:36,961 INFO [RS-EventLoopGroup-8-2] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:38186, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=MasterService 2023-07-12 19:17:36,965 DEBUG [Listener at localhost.localdomain/37875-EventThread] zookeeper.ZKWatcher(600): master:40539-0x100829e11e40000, quorum=127.0.0.1:51847, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/balancer 2023-07-12 19:17:36,965 DEBUG [Listener at localhost.localdomain/37875-EventThread] zookeeper.ZKWatcher(600): master:40539-0x100829e11e40000, quorum=127.0.0.1:51847, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-12 19:17:36,966 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40539] master.MasterRpcServices(492): Client=jenkins//148.251.75.209 set balanceSwitch=false 2023-07-12 19:17:36,967 DEBUG [Listener at localhost.localdomain/37875] zookeeper.ReadOnlyZKClient(139): Connect 0x79950ffb to 127.0.0.1:51847 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-12 19:17:36,991 DEBUG [Listener at localhost.localdomain/37875] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@1e71f402, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, 
connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-12 19:17:36,992 INFO [Listener at localhost.localdomain/37875] zookeeper.RecoverableZooKeeper(93): Process identifier=VerifyingRSGroupAdminClient connecting to ZooKeeper ensemble=127.0.0.1:51847 2023-07-12 19:17:37,000 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(866): Instantiated hbase:quota,,1689189456914.9448d6dfe9afb70578e5490ef8dbac89.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 19:17:37,000 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1604): Closing 9448d6dfe9afb70578e5490ef8dbac89, disabling compactions & flushes 2023-07-12 19:17:37,000 INFO [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1626): Closing region hbase:quota,,1689189456914.9448d6dfe9afb70578e5490ef8dbac89. 2023-07-12 19:17:37,000 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:quota,,1689189456914.9448d6dfe9afb70578e5490ef8dbac89. 2023-07-12 19:17:37,000 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:quota,,1689189456914.9448d6dfe9afb70578e5490ef8dbac89. after waiting 0 ms 2023-07-12 19:17:37,000 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:quota,,1689189456914.9448d6dfe9afb70578e5490ef8dbac89. 2023-07-12 19:17:37,000 INFO [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1838): Closed hbase:quota,,1689189456914.9448d6dfe9afb70578e5490ef8dbac89. 2023-07-12 19:17:37,001 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1558): Region close journal for 9448d6dfe9afb70578e5490ef8dbac89: 2023-07-12 19:17:37,001 DEBUG [Listener at localhost.localdomain/37875-EventThread] zookeeper.ZKWatcher(600): VerifyingRSGroupAdminClient0x0, quorum=127.0.0.1:51847, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-12 19:17:37,003 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): VerifyingRSGroupAdminClient-0x100829e11e4000a connected 2023-07-12 19:17:37,005 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_ADD_TO_META 2023-07-12 19:17:37,006 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:quota,,1689189456914.9448d6dfe9afb70578e5490ef8dbac89.","families":{"info":[{"qualifier":"regioninfo","vlen":37,"tag":[],"timestamp":"1689189457006"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689189457006"}]},"ts":"1689189457006"} 2023-07-12 19:17:37,008 INFO [PEWorker-1] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 
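The "set balanceSwitch=false" request and the VerifyingRSGroupAdminClient handshake in the entries above are plain client calls issued by the test harness. A hedged sketch of the balancer toggle, assuming an already-open Connection named conn (the connection setup is not shown in this excerpt):

  import java.io.IOException;
  import org.apache.hadoop.hbase.client.Admin;
  import org.apache.hadoop.hbase.client.Connection;

  public final class BalancerToggle {
    // Matches the "Client=jenkins//... set balanceSwitch=false" entry above:
    // turn the balancer off; the second argument means "do not wait for an
    // in-flight balance run to finish".
    static boolean disableBalancer(Connection conn) throws IOException {
      try (Admin admin = conn.getAdmin()) {
        return admin.balancerSwitch(false, false); // returns the previous state
      }
    }
  }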
2023-07-12 19:17:37,008 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40539] master.HMaster$15(3014): Client=jenkins//148.251.75.209 creating {NAME => 'np1', hbase.namespace.quota.maxregions => '5', hbase.namespace.quota.maxtables => '2'} 2023-07-12 19:17:37,010 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-12 19:17:37,011 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:quota","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689189457010"}]},"ts":"1689189457010"} 2023-07-12 19:17:37,011 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40539] procedure2.ProcedureExecutor(1029): Stored pid=13, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=np1 2023-07-12 19:17:37,013 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=hbase:quota, state=ENABLING in hbase:meta 2023-07-12 19:17:37,016 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase20.apache.org=0} racks are {/default-rack=0} 2023-07-12 19:17:37,016 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-12 19:17:37,016 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-12 19:17:37,016 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-12 19:17:37,016 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-12 19:17:37,016 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=14, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:quota, region=9448d6dfe9afb70578e5490ef8dbac89, ASSIGN}] 2023-07-12 19:17:37,018 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=14, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:quota, region=9448d6dfe9afb70578e5490ef8dbac89, ASSIGN 2023-07-12 19:17:37,019 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=14, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:quota, region=9448d6dfe9afb70578e5490ef8dbac89, ASSIGN; state=OFFLINE, location=jenkins-hbase20.apache.org,38905,1689189455481; forceNewPlan=false, retain=false 2023-07-12 19:17:37,020 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40539] master.MasterRpcServices(1230): Checking to see if procedure is done pid=13 2023-07-12 19:17:37,025 DEBUG [Listener at localhost.localdomain/37875-EventThread] zookeeper.ZKWatcher(600): master:40539-0x100829e11e40000, quorum=127.0.0.1:51847, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-12 19:17:37,038 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=13, state=SUCCESS; CreateNamespaceProcedure, namespace=np1 in 27 msec 2023-07-12 19:17:37,121 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40539] master.MasterRpcServices(1230): Checking to see if procedure is done pid=13 2023-07-12 19:17:37,126 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40539] master.HMaster$4(2112): Client=jenkins//148.251.75.209 create 'np1:table1', {NAME => 'fam1', 
BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-12 19:17:37,128 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40539] procedure2.ProcedureExecutor(1029): Stored pid=15, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=np1:table1 2023-07-12 19:17:37,131 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=15, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=np1:table1 execute state=CREATE_TABLE_PRE_OPERATION 2023-07-12 19:17:37,131 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40539] master.MasterRpcServices(700): Client=jenkins//148.251.75.209 procedure request for creating table: namespace: "np1" qualifier: "table1" procId is: 15 2023-07-12 19:17:37,132 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40539] master.MasterRpcServices(1230): Checking to see if procedure is done pid=15 2023-07-12 19:17:37,133 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 19:17:37,133 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-12 19:17:37,135 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=15, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=np1:table1 execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-12 19:17:37,136 DEBUG [HFileArchiver-7] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:38007/user/jenkins/test-data/171d5f8e-5f45-9a5a-0a3b-0ee09af95a3e/.tmp/data/np1/table1/b017aedf5b1c91d6d896a4ea258e27a7 2023-07-12 19:17:37,136 DEBUG [HFileArchiver-7] backup.HFileArchiver(153): Directory hdfs://localhost.localdomain:38007/user/jenkins/test-data/171d5f8e-5f45-9a5a-0a3b-0ee09af95a3e/.tmp/data/np1/table1/b017aedf5b1c91d6d896a4ea258e27a7 empty. 
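The "creating {NAME => 'np1', hbase.namespace.quota.maxregions => '5', hbase.namespace.quota.maxtables => '2'}" entry above creates a namespace with per-namespace quota limits, which the NamespaceAuditor started earlier will enforce. A minimal client-side sketch of the same request, assuming an existing Admin handle named admin:

  import java.io.IOException;
  import org.apache.hadoop.hbase.NamespaceDescriptor;
  import org.apache.hadoop.hbase.client.Admin;

  public final class CreateNp1Namespace {
    // Creates np1 with the limits seen in the log: at most 5 regions and
    // at most 2 tables across the whole namespace.
    static void create(Admin admin) throws IOException {
      NamespaceDescriptor np1 = NamespaceDescriptor.create("np1")
          .addConfiguration("hbase.namespace.quota.maxregions", "5")
          .addConfiguration("hbase.namespace.quota.maxtables", "2")
          .build();
      admin.createNamespace(np1); // runs the CreateNamespaceProcedure (pid=13) above
    }
  }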
2023-07-12 19:17:37,137 DEBUG [HFileArchiver-7] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:38007/user/jenkins/test-data/171d5f8e-5f45-9a5a-0a3b-0ee09af95a3e/.tmp/data/np1/table1/b017aedf5b1c91d6d896a4ea258e27a7 2023-07-12 19:17:37,137 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(328): Archived np1:table1 regions 2023-07-12 19:17:37,149 DEBUG [PEWorker-5] util.FSTableDescriptors(570): Wrote into hdfs://localhost.localdomain:38007/user/jenkins/test-data/171d5f8e-5f45-9a5a-0a3b-0ee09af95a3e/.tmp/data/np1/table1/.tabledesc/.tableinfo.0000000001 2023-07-12 19:17:37,150 INFO [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(7675): creating {ENCODED => b017aedf5b1c91d6d896a4ea258e27a7, NAME => 'np1:table1,,1689189457126.b017aedf5b1c91d6d896a4ea258e27a7.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='np1:table1', {NAME => 'fam1', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:38007/user/jenkins/test-data/171d5f8e-5f45-9a5a-0a3b-0ee09af95a3e/.tmp 2023-07-12 19:17:37,160 DEBUG [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(866): Instantiated np1:table1,,1689189457126.b017aedf5b1c91d6d896a4ea258e27a7.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 19:17:37,160 DEBUG [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1604): Closing b017aedf5b1c91d6d896a4ea258e27a7, disabling compactions & flushes 2023-07-12 19:17:37,160 INFO [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1626): Closing region np1:table1,,1689189457126.b017aedf5b1c91d6d896a4ea258e27a7. 2023-07-12 19:17:37,160 DEBUG [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on np1:table1,,1689189457126.b017aedf5b1c91d6d896a4ea258e27a7. 2023-07-12 19:17:37,160 DEBUG [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1714): Acquired close lock on np1:table1,,1689189457126.b017aedf5b1c91d6d896a4ea258e27a7. after waiting 0 ms 2023-07-12 19:17:37,160 DEBUG [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1724): Updates disabled for region np1:table1,,1689189457126.b017aedf5b1c91d6d896a4ea258e27a7. 2023-07-12 19:17:37,160 INFO [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1838): Closed np1:table1,,1689189457126.b017aedf5b1c91d6d896a4ea258e27a7. 2023-07-12 19:17:37,160 DEBUG [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1558): Region close journal for b017aedf5b1c91d6d896a4ea258e27a7: 2023-07-12 19:17:37,162 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=15, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=np1:table1 execute state=CREATE_TABLE_ADD_TO_META 2023-07-12 19:17:37,164 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"np1:table1,,1689189457126.b017aedf5b1c91d6d896a4ea258e27a7.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689189457163"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689189457163"}]},"ts":"1689189457163"} 2023-07-12 19:17:37,165 INFO [PEWorker-5] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 
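The create 'np1:table1' request above names only the family 'fam1'; the remaining attributes printed with it (BLOOMFILTER => 'ROW', VERSIONS => '1', BLOCKSIZE => '65536', and so on) appear to be the column-family defaults filled in on the client side. A hedged Java sketch of an equivalent request, assuming an Admin handle named admin:

  import java.io.IOException;
  import org.apache.hadoop.hbase.TableName;
  import org.apache.hadoop.hbase.client.Admin;
  import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
  import org.apache.hadoop.hbase.client.TableDescriptor;
  import org.apache.hadoop.hbase.client.TableDescriptorBuilder;

  public final class CreateNp1Table1 {
    // Builds np1:table1 with a single family "fam1", leaving every other
    // attribute at its default, and submits it to the master, which drives
    // the CreateTableProcedure (pid=15) traced in these entries.
    static void create(Admin admin) throws IOException {
      TableDescriptor table1 = TableDescriptorBuilder
          .newBuilder(TableName.valueOf("np1", "table1"))
          .setColumnFamily(ColumnFamilyDescriptorBuilder.of("fam1"))
          .build();
      admin.createTable(table1); // blocks until the procedure reaches SUCCESS
    }
  }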
2023-07-12 19:17:37,166 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=15, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=np1:table1 execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-12 19:17:37,166 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"np1:table1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689189457166"}]},"ts":"1689189457166"} 2023-07-12 19:17:37,167 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=np1:table1, state=ENABLING in hbase:meta 2023-07-12 19:17:37,169 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase20.apache.org=0} racks are {/default-rack=0} 2023-07-12 19:17:37,169 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-12 19:17:37,169 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-12 19:17:37,169 INFO [jenkins-hbase20:40539] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 2023-07-12 19:17:37,169 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-12 19:17:37,170 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-12 19:17:37,170 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=14 updating hbase:meta row=9448d6dfe9afb70578e5490ef8dbac89, regionState=OPENING, regionLocation=jenkins-hbase20.apache.org,38905,1689189455481 2023-07-12 19:17:37,170 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:quota,,1689189456914.9448d6dfe9afb70578e5490ef8dbac89.","families":{"info":[{"qualifier":"regioninfo","vlen":37,"tag":[],"timestamp":"1689189457170"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689189457170"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689189457170"}]},"ts":"1689189457170"} 2023-07-12 19:17:37,170 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=16, ppid=15, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=np1:table1, region=b017aedf5b1c91d6d896a4ea258e27a7, ASSIGN}] 2023-07-12 19:17:37,174 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=16, ppid=15, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=np1:table1, region=b017aedf5b1c91d6d896a4ea258e27a7, ASSIGN 2023-07-12 19:17:37,174 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=17, ppid=14, state=RUNNABLE; OpenRegionProcedure 9448d6dfe9afb70578e5490ef8dbac89, server=jenkins-hbase20.apache.org,38905,1689189455481}] 2023-07-12 19:17:37,175 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=16, ppid=15, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=np1:table1, region=b017aedf5b1c91d6d896a4ea258e27a7, ASSIGN; state=OFFLINE, location=jenkins-hbase20.apache.org,38905,1689189455481; forceNewPlan=false, retain=false 2023-07-12 19:17:37,233 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40539] master.MasterRpcServices(1230): Checking to see if procedure is done pid=15 2023-07-12 19:17:37,325 INFO [jenkins-hbase20:40539] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
2023-07-12 19:17:37,326 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=16 updating hbase:meta row=b017aedf5b1c91d6d896a4ea258e27a7, regionState=OPENING, regionLocation=jenkins-hbase20.apache.org,38905,1689189455481 2023-07-12 19:17:37,326 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"np1:table1,,1689189457126.b017aedf5b1c91d6d896a4ea258e27a7.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689189457326"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689189457326"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689189457326"}]},"ts":"1689189457326"} 2023-07-12 19:17:37,328 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase20.apache.org,38905,1689189455481 2023-07-12 19:17:37,328 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-12 19:17:37,329 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=18, ppid=16, state=RUNNABLE; OpenRegionProcedure b017aedf5b1c91d6d896a4ea258e27a7, server=jenkins-hbase20.apache.org,38905,1689189455481}] 2023-07-12 19:17:37,332 INFO [RS-EventLoopGroup-10-2] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:56396, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-12 19:17:37,336 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(130): Open hbase:quota,,1689189456914.9448d6dfe9afb70578e5490ef8dbac89. 2023-07-12 19:17:37,336 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 9448d6dfe9afb70578e5490ef8dbac89, NAME => 'hbase:quota,,1689189456914.9448d6dfe9afb70578e5490ef8dbac89.', STARTKEY => '', ENDKEY => ''} 2023-07-12 19:17:37,336 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table quota 9448d6dfe9afb70578e5490ef8dbac89 2023-07-12 19:17:37,336 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(866): Instantiated hbase:quota,,1689189456914.9448d6dfe9afb70578e5490ef8dbac89.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 19:17:37,336 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7894): checking encryption for 9448d6dfe9afb70578e5490ef8dbac89 2023-07-12 19:17:37,336 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7897): checking classloading for 9448d6dfe9afb70578e5490ef8dbac89 2023-07-12 19:17:37,338 INFO [StoreOpener-9448d6dfe9afb70578e5490ef8dbac89-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family q of region 9448d6dfe9afb70578e5490ef8dbac89 2023-07-12 19:17:37,339 DEBUG [StoreOpener-9448d6dfe9afb70578e5490ef8dbac89-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:38007/user/jenkins/test-data/171d5f8e-5f45-9a5a-0a3b-0ee09af95a3e/data/hbase/quota/9448d6dfe9afb70578e5490ef8dbac89/q 2023-07-12 19:17:37,339 DEBUG [StoreOpener-9448d6dfe9afb70578e5490ef8dbac89-1] util.CommonFSUtils(522): Set storagePolicy=HOT for 
path=hdfs://localhost.localdomain:38007/user/jenkins/test-data/171d5f8e-5f45-9a5a-0a3b-0ee09af95a3e/data/hbase/quota/9448d6dfe9afb70578e5490ef8dbac89/q 2023-07-12 19:17:37,339 INFO [StoreOpener-9448d6dfe9afb70578e5490ef8dbac89-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 9448d6dfe9afb70578e5490ef8dbac89 columnFamilyName q 2023-07-12 19:17:37,340 INFO [StoreOpener-9448d6dfe9afb70578e5490ef8dbac89-1] regionserver.HStore(310): Store=9448d6dfe9afb70578e5490ef8dbac89/q, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 19:17:37,340 INFO [StoreOpener-9448d6dfe9afb70578e5490ef8dbac89-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family u of region 9448d6dfe9afb70578e5490ef8dbac89 2023-07-12 19:17:37,341 DEBUG [StoreOpener-9448d6dfe9afb70578e5490ef8dbac89-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:38007/user/jenkins/test-data/171d5f8e-5f45-9a5a-0a3b-0ee09af95a3e/data/hbase/quota/9448d6dfe9afb70578e5490ef8dbac89/u 2023-07-12 19:17:37,341 DEBUG [StoreOpener-9448d6dfe9afb70578e5490ef8dbac89-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:38007/user/jenkins/test-data/171d5f8e-5f45-9a5a-0a3b-0ee09af95a3e/data/hbase/quota/9448d6dfe9afb70578e5490ef8dbac89/u 2023-07-12 19:17:37,342 INFO [StoreOpener-9448d6dfe9afb70578e5490ef8dbac89-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 9448d6dfe9afb70578e5490ef8dbac89 columnFamilyName u 2023-07-12 19:17:37,342 INFO [StoreOpener-9448d6dfe9afb70578e5490ef8dbac89-1] regionserver.HStore(310): Store=9448d6dfe9afb70578e5490ef8dbac89/u, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 19:17:37,343 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:38007/user/jenkins/test-data/171d5f8e-5f45-9a5a-0a3b-0ee09af95a3e/data/hbase/quota/9448d6dfe9afb70578e5490ef8dbac89 2023-07-12 19:17:37,344 DEBUG 
[RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:38007/user/jenkins/test-data/171d5f8e-5f45-9a5a-0a3b-0ee09af95a3e/data/hbase/quota/9448d6dfe9afb70578e5490ef8dbac89 2023-07-12 19:17:37,345 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:quota descriptor;using region.getMemStoreFlushHeapSize/# of families (64.0 M)) instead. 2023-07-12 19:17:37,346 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1055): writing seq id for 9448d6dfe9afb70578e5490ef8dbac89 2023-07-12 19:17:37,349 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:38007/user/jenkins/test-data/171d5f8e-5f45-9a5a-0a3b-0ee09af95a3e/data/hbase/quota/9448d6dfe9afb70578e5490ef8dbac89/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-12 19:17:37,350 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1072): Opened 9448d6dfe9afb70578e5490ef8dbac89; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9749821600, jitterRate=-0.09197710454463959}}}, FlushLargeStoresPolicy{flushSizeLowerBound=67108864} 2023-07-12 19:17:37,350 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(965): Region open journal for 9448d6dfe9afb70578e5490ef8dbac89: 2023-07-12 19:17:37,351 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:quota,,1689189456914.9448d6dfe9afb70578e5490ef8dbac89., pid=17, masterSystemTime=1689189457328 2023-07-12 19:17:37,354 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:quota,,1689189456914.9448d6dfe9afb70578e5490ef8dbac89. 2023-07-12 19:17:37,355 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(158): Opened hbase:quota,,1689189456914.9448d6dfe9afb70578e5490ef8dbac89. 
2023-07-12 19:17:37,356 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=14 updating hbase:meta row=9448d6dfe9afb70578e5490ef8dbac89, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase20.apache.org,38905,1689189455481 2023-07-12 19:17:37,356 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:quota,,1689189456914.9448d6dfe9afb70578e5490ef8dbac89.","families":{"info":[{"qualifier":"regioninfo","vlen":37,"tag":[],"timestamp":"1689189457355"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689189457355"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689189457355"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689189457355"}]},"ts":"1689189457355"} 2023-07-12 19:17:37,358 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=17, resume processing ppid=14 2023-07-12 19:17:37,358 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=17, ppid=14, state=SUCCESS; OpenRegionProcedure 9448d6dfe9afb70578e5490ef8dbac89, server=jenkins-hbase20.apache.org,38905,1689189455481 in 183 msec 2023-07-12 19:17:37,359 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=14, resume processing ppid=12 2023-07-12 19:17:37,359 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=14, ppid=12, state=SUCCESS; TransitRegionStateProcedure table=hbase:quota, region=9448d6dfe9afb70578e5490ef8dbac89, ASSIGN in 342 msec 2023-07-12 19:17:37,360 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-12 19:17:37,360 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:quota","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689189457360"}]},"ts":"1689189457360"} 2023-07-12 19:17:37,361 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=hbase:quota, state=ENABLED in hbase:meta 2023-07-12 19:17:37,362 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_POST_OPERATION 2023-07-12 19:17:37,363 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=12, state=SUCCESS; CreateTableProcedure table=hbase:quota in 448 msec 2023-07-12 19:17:37,434 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40539] master.MasterRpcServices(1230): Checking to see if procedure is done pid=15 2023-07-12 19:17:37,486 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(130): Open np1:table1,,1689189457126.b017aedf5b1c91d6d896a4ea258e27a7. 
2023-07-12 19:17:37,486 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => b017aedf5b1c91d6d896a4ea258e27a7, NAME => 'np1:table1,,1689189457126.b017aedf5b1c91d6d896a4ea258e27a7.', STARTKEY => '', ENDKEY => ''} 2023-07-12 19:17:37,487 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table table1 b017aedf5b1c91d6d896a4ea258e27a7 2023-07-12 19:17:37,487 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(866): Instantiated np1:table1,,1689189457126.b017aedf5b1c91d6d896a4ea258e27a7.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 19:17:37,487 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7894): checking encryption for b017aedf5b1c91d6d896a4ea258e27a7 2023-07-12 19:17:37,487 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7897): checking classloading for b017aedf5b1c91d6d896a4ea258e27a7 2023-07-12 19:17:37,488 INFO [StoreOpener-b017aedf5b1c91d6d896a4ea258e27a7-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family fam1 of region b017aedf5b1c91d6d896a4ea258e27a7 2023-07-12 19:17:37,489 DEBUG [StoreOpener-b017aedf5b1c91d6d896a4ea258e27a7-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:38007/user/jenkins/test-data/171d5f8e-5f45-9a5a-0a3b-0ee09af95a3e/data/np1/table1/b017aedf5b1c91d6d896a4ea258e27a7/fam1 2023-07-12 19:17:37,490 DEBUG [StoreOpener-b017aedf5b1c91d6d896a4ea258e27a7-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:38007/user/jenkins/test-data/171d5f8e-5f45-9a5a-0a3b-0ee09af95a3e/data/np1/table1/b017aedf5b1c91d6d896a4ea258e27a7/fam1 2023-07-12 19:17:37,490 INFO [StoreOpener-b017aedf5b1c91d6d896a4ea258e27a7-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region b017aedf5b1c91d6d896a4ea258e27a7 columnFamilyName fam1 2023-07-12 19:17:37,490 INFO [StoreOpener-b017aedf5b1c91d6d896a4ea258e27a7-1] regionserver.HStore(310): Store=b017aedf5b1c91d6d896a4ea258e27a7/fam1, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 19:17:37,491 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:38007/user/jenkins/test-data/171d5f8e-5f45-9a5a-0a3b-0ee09af95a3e/data/np1/table1/b017aedf5b1c91d6d896a4ea258e27a7 2023-07-12 19:17:37,491 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits 
file(s) under hdfs://localhost.localdomain:38007/user/jenkins/test-data/171d5f8e-5f45-9a5a-0a3b-0ee09af95a3e/data/np1/table1/b017aedf5b1c91d6d896a4ea258e27a7 2023-07-12 19:17:37,493 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1055): writing seq id for b017aedf5b1c91d6d896a4ea258e27a7 2023-07-12 19:17:37,495 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:38007/user/jenkins/test-data/171d5f8e-5f45-9a5a-0a3b-0ee09af95a3e/data/np1/table1/b017aedf5b1c91d6d896a4ea258e27a7/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-12 19:17:37,496 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1072): Opened b017aedf5b1c91d6d896a4ea258e27a7; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11568464960, jitterRate=0.07739725708961487}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-12 19:17:37,496 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(965): Region open journal for b017aedf5b1c91d6d896a4ea258e27a7: 2023-07-12 19:17:37,497 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for np1:table1,,1689189457126.b017aedf5b1c91d6d896a4ea258e27a7., pid=18, masterSystemTime=1689189457482 2023-07-12 19:17:37,498 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for np1:table1,,1689189457126.b017aedf5b1c91d6d896a4ea258e27a7. 2023-07-12 19:17:37,498 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(158): Opened np1:table1,,1689189457126.b017aedf5b1c91d6d896a4ea258e27a7. 
2023-07-12 19:17:37,499 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=16 updating hbase:meta row=b017aedf5b1c91d6d896a4ea258e27a7, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase20.apache.org,38905,1689189455481 2023-07-12 19:17:37,499 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"np1:table1,,1689189457126.b017aedf5b1c91d6d896a4ea258e27a7.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689189457499"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689189457499"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689189457499"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689189457499"}]},"ts":"1689189457499"} 2023-07-12 19:17:37,501 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=18, resume processing ppid=16 2023-07-12 19:17:37,501 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=18, ppid=16, state=SUCCESS; OpenRegionProcedure b017aedf5b1c91d6d896a4ea258e27a7, server=jenkins-hbase20.apache.org,38905,1689189455481 in 171 msec 2023-07-12 19:17:37,502 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=16, resume processing ppid=15 2023-07-12 19:17:37,502 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=16, ppid=15, state=SUCCESS; TransitRegionStateProcedure table=np1:table1, region=b017aedf5b1c91d6d896a4ea258e27a7, ASSIGN in 331 msec 2023-07-12 19:17:37,503 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=15, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=np1:table1 execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-12 19:17:37,503 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"np1:table1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689189457503"}]},"ts":"1689189457503"} 2023-07-12 19:17:37,504 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=np1:table1, state=ENABLED in hbase:meta 2023-07-12 19:17:37,505 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=15, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=np1:table1 execute state=CREATE_TABLE_POST_OPERATION 2023-07-12 19:17:37,506 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=15, state=SUCCESS; CreateTableProcedure table=np1:table1 in 379 msec 2023-07-12 19:17:37,735 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40539] master.MasterRpcServices(1230): Checking to see if procedure is done pid=15 2023-07-12 19:17:37,735 INFO [Listener at localhost.localdomain/37875] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: np1:table1, procId: 15 completed 2023-07-12 19:17:37,737 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40539] master.HMaster$4(2112): Client=jenkins//148.251.75.209 create 'np1:table2', {NAME => 'fam1', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-12 19:17:37,737 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40539] procedure2.ProcedureExecutor(1029): Stored pid=19, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=np1:table2 2023-07-12 19:17:37,740 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=19, 
state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=np1:table2 execute state=CREATE_TABLE_PRE_OPERATION 2023-07-12 19:17:37,742 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40539] master.MasterRpcServices(700): Client=jenkins//148.251.75.209 procedure request for creating table: namespace: "np1" qualifier: "table2" procId is: 19 2023-07-12 19:17:37,743 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40539] master.MasterRpcServices(1230): Checking to see if procedure is done pid=19 2023-07-12 19:17:37,757 DEBUG [PEWorker-1] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-12 19:17:37,760 INFO [RS-EventLoopGroup-10-3] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:56412, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-12 19:17:37,763 INFO [PEWorker-1] procedure2.ProcedureExecutor(1528): Rolled back pid=19, state=ROLLEDBACK, exception=org.apache.hadoop.hbase.quotas.QuotaExceededException via master-create-table:org.apache.hadoop.hbase.quotas.QuotaExceededException: The table np1:table2 is not allowed to have 6 regions. The total number of regions permitted is only 5, while current region count is 1. This may be transient, please retry later if there are any ongoing split operations in the namespace.; CreateTableProcedure table=np1:table2 exec-time=25 msec 2023-07-12 19:17:37,844 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40539] master.MasterRpcServices(1230): Checking to see if procedure is done pid=19 2023-07-12 19:17:37,847 INFO [Listener at localhost.localdomain/37875] client.HBaseAdmin$TableFuture(3548): Operation: CREATE, Table Name: np1:table2, procId: 19 failed with The table np1:table2 is not allowed to have 6 regions. The total number of regions permitted is only 5, while current region count is 1. This may be transient, please retry later if there are any ongoing split operations in the namespace. 
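The rolled-back pid=19 above is the namespace region quota doing its job: np1 already holds one region (np1:table1) and is capped at hbase.namespace.quota.maxregions=5, so a create request that would raise the count to 6 is rejected with QuotaExceededException before any region is written. A sketch of how a client sees that failure, assuming an Admin handle named admin and a hypothetical splitKeys array large enough to exceed the cap:

  import java.io.IOException;
  import org.apache.hadoop.hbase.TableName;
  import org.apache.hadoop.hbase.client.Admin;
  import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
  import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
  import org.apache.hadoop.hbase.quotas.QuotaExceededException;

  public final class QuotaRejection {
    // Attempts np1:table2 with enough split points to exceed the 5-region
    // quota; the master rolls the CreateTableProcedure back, as logged above.
    static void tryCreateTable2(Admin admin, byte[][] splitKeys) throws IOException {
      try {
        admin.createTable(
            TableDescriptorBuilder.newBuilder(TableName.valueOf("np1", "table2"))
                .setColumnFamily(ColumnFamilyDescriptorBuilder.of("fam1"))
                .build(),
            splitKeys);
      } catch (QuotaExceededException expected) {
        // the namespace quota rejected the request; depending on the client
        // path the error may instead arrive wrapped in another IOException
      }
    }
  }

The entries that follow show the test cleaning up np1:table1 with the ordinary disable-then-delete sequence (DisableTableProcedure pid=20, then DeleteTableProcedure pid=23), i.e. Admin.disableTable followed by Admin.deleteTable.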
2023-07-12 19:17:37,848 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40539] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-12 19:17:37,850 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40539] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 19:17:37,850 INFO [Listener at localhost.localdomain/37875] client.HBaseAdmin$15(890): Started disable of np1:table1 2023-07-12 19:17:37,851 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40539] master.HMaster$11(2418): Client=jenkins//148.251.75.209 disable np1:table1 2023-07-12 19:17:37,852 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40539] procedure2.ProcedureExecutor(1029): Stored pid=20, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=np1:table1 2023-07-12 19:17:37,854 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40539] master.MasterRpcServices(1230): Checking to see if procedure is done pid=20 2023-07-12 19:17:37,854 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"np1:table1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689189457854"}]},"ts":"1689189457854"} 2023-07-12 19:17:37,855 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=np1:table1, state=DISABLING in hbase:meta 2023-07-12 19:17:37,856 INFO [PEWorker-4] procedure.DisableTableProcedure(293): Set np1:table1 to state=DISABLING 2023-07-12 19:17:37,857 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=21, ppid=20, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=np1:table1, region=b017aedf5b1c91d6d896a4ea258e27a7, UNASSIGN}] 2023-07-12 19:17:37,857 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=21, ppid=20, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=np1:table1, region=b017aedf5b1c91d6d896a4ea258e27a7, UNASSIGN 2023-07-12 19:17:37,858 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=21 updating hbase:meta row=b017aedf5b1c91d6d896a4ea258e27a7, regionState=CLOSING, regionLocation=jenkins-hbase20.apache.org,38905,1689189455481 2023-07-12 19:17:37,858 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"np1:table1,,1689189457126.b017aedf5b1c91d6d896a4ea258e27a7.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689189457858"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689189457858"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689189457858"}]},"ts":"1689189457858"} 2023-07-12 19:17:37,859 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=22, ppid=21, state=RUNNABLE; CloseRegionProcedure b017aedf5b1c91d6d896a4ea258e27a7, server=jenkins-hbase20.apache.org,38905,1689189455481}] 2023-07-12 19:17:37,955 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40539] master.MasterRpcServices(1230): Checking to see if procedure is done pid=20 2023-07-12 19:17:38,013 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.UnassignRegionHandler(111): Close b017aedf5b1c91d6d896a4ea258e27a7 2023-07-12 19:17:38,014 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing b017aedf5b1c91d6d896a4ea258e27a7, disabling compactions & flushes 2023-07-12 19:17:38,014 INFO 
[RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region np1:table1,,1689189457126.b017aedf5b1c91d6d896a4ea258e27a7. 2023-07-12 19:17:38,014 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on np1:table1,,1689189457126.b017aedf5b1c91d6d896a4ea258e27a7. 2023-07-12 19:17:38,014 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on np1:table1,,1689189457126.b017aedf5b1c91d6d896a4ea258e27a7. after waiting 0 ms 2023-07-12 19:17:38,014 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region np1:table1,,1689189457126.b017aedf5b1c91d6d896a4ea258e27a7. 2023-07-12 19:17:38,018 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:38007/user/jenkins/test-data/171d5f8e-5f45-9a5a-0a3b-0ee09af95a3e/data/np1/table1/b017aedf5b1c91d6d896a4ea258e27a7/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-12 19:17:38,019 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1838): Closed np1:table1,,1689189457126.b017aedf5b1c91d6d896a4ea258e27a7. 2023-07-12 19:17:38,019 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for b017aedf5b1c91d6d896a4ea258e27a7: 2023-07-12 19:17:38,020 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.UnassignRegionHandler(149): Closed b017aedf5b1c91d6d896a4ea258e27a7 2023-07-12 19:17:38,021 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=21 updating hbase:meta row=b017aedf5b1c91d6d896a4ea258e27a7, regionState=CLOSED 2023-07-12 19:17:38,021 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"np1:table1,,1689189457126.b017aedf5b1c91d6d896a4ea258e27a7.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689189458021"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689189458021"}]},"ts":"1689189458021"} 2023-07-12 19:17:38,024 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=22, resume processing ppid=21 2023-07-12 19:17:38,024 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=22, ppid=21, state=SUCCESS; CloseRegionProcedure b017aedf5b1c91d6d896a4ea258e27a7, server=jenkins-hbase20.apache.org,38905,1689189455481 in 163 msec 2023-07-12 19:17:38,026 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=21, resume processing ppid=20 2023-07-12 19:17:38,026 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=21, ppid=20, state=SUCCESS; TransitRegionStateProcedure table=np1:table1, region=b017aedf5b1c91d6d896a4ea258e27a7, UNASSIGN in 167 msec 2023-07-12 19:17:38,034 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"np1:table1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689189458033"}]},"ts":"1689189458033"} 2023-07-12 19:17:38,035 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=np1:table1, state=DISABLED in hbase:meta 2023-07-12 19:17:38,036 INFO [PEWorker-4] procedure.DisableTableProcedure(305): Set np1:table1 to state=DISABLED 2023-07-12 19:17:38,038 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=20, state=SUCCESS; DisableTableProcedure table=np1:table1 in 186 msec 2023-07-12 19:17:38,156 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40539] master.MasterRpcServices(1230): Checking to see if procedure is done pid=20 2023-07-12 19:17:38,157 INFO [Listener at localhost.localdomain/37875] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: np1:table1, procId: 20 completed 2023-07-12 19:17:38,157 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40539] master.HMaster$5(2228): Client=jenkins//148.251.75.209 delete np1:table1 2023-07-12 19:17:38,158 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40539] procedure2.ProcedureExecutor(1029): Stored pid=23, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=np1:table1 2023-07-12 19:17:38,160 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=23, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=np1:table1 2023-07-12 19:17:38,160 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40539] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'np1:table1' from rsgroup 'default' 2023-07-12 19:17:38,161 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=23, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=np1:table1 2023-07-12 19:17:38,167 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40539] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 19:17:38,168 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40539] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-12 19:17:38,172 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40539] master.MasterRpcServices(1230): Checking to see if procedure is done pid=23 2023-07-12 19:17:38,173 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:38007/user/jenkins/test-data/171d5f8e-5f45-9a5a-0a3b-0ee09af95a3e/.tmp/data/np1/table1/b017aedf5b1c91d6d896a4ea258e27a7 2023-07-12 19:17:38,175 DEBUG [HFileArchiver-4] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost.localdomain:38007/user/jenkins/test-data/171d5f8e-5f45-9a5a-0a3b-0ee09af95a3e/.tmp/data/np1/table1/b017aedf5b1c91d6d896a4ea258e27a7/fam1, FileablePath, hdfs://localhost.localdomain:38007/user/jenkins/test-data/171d5f8e-5f45-9a5a-0a3b-0ee09af95a3e/.tmp/data/np1/table1/b017aedf5b1c91d6d896a4ea258e27a7/recovered.edits] 2023-07-12 19:17:38,180 DEBUG [HFileArchiver-4] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost.localdomain:38007/user/jenkins/test-data/171d5f8e-5f45-9a5a-0a3b-0ee09af95a3e/.tmp/data/np1/table1/b017aedf5b1c91d6d896a4ea258e27a7/recovered.edits/4.seqid to hdfs://localhost.localdomain:38007/user/jenkins/test-data/171d5f8e-5f45-9a5a-0a3b-0ee09af95a3e/archive/data/np1/table1/b017aedf5b1c91d6d896a4ea258e27a7/recovered.edits/4.seqid 2023-07-12 19:17:38,181 DEBUG [HFileArchiver-4] backup.HFileArchiver(596): Deleted hdfs://localhost.localdomain:38007/user/jenkins/test-data/171d5f8e-5f45-9a5a-0a3b-0ee09af95a3e/.tmp/data/np1/table1/b017aedf5b1c91d6d896a4ea258e27a7 2023-07-12 19:17:38,181 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived np1:table1 regions 2023-07-12 19:17:38,183 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=23, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=np1:table1 2023-07-12 19:17:38,185 WARN [PEWorker-2] procedure.DeleteTableProcedure(384): Deleting some 
vestigial 1 rows of np1:table1 from hbase:meta 2023-07-12 19:17:38,187 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(421): Removing 'np1:table1' descriptor. 2023-07-12 19:17:38,188 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=23, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=np1:table1 2023-07-12 19:17:38,188 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(411): Removing 'np1:table1' from region states. 2023-07-12 19:17:38,188 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"np1:table1,,1689189457126.b017aedf5b1c91d6d896a4ea258e27a7.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689189458188"}]},"ts":"9223372036854775807"} 2023-07-12 19:17:38,190 INFO [PEWorker-2] hbase.MetaTableAccessor(1788): Deleted 1 regions from META 2023-07-12 19:17:38,190 DEBUG [PEWorker-2] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => b017aedf5b1c91d6d896a4ea258e27a7, NAME => 'np1:table1,,1689189457126.b017aedf5b1c91d6d896a4ea258e27a7.', STARTKEY => '', ENDKEY => ''}] 2023-07-12 19:17:38,190 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(415): Marking 'np1:table1' as deleted. 2023-07-12 19:17:38,190 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"np1:table1","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689189458190"}]},"ts":"9223372036854775807"} 2023-07-12 19:17:38,192 INFO [PEWorker-2] hbase.MetaTableAccessor(1658): Deleted table np1:table1 state from META 2023-07-12 19:17:38,194 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(130): Finished pid=23, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=np1:table1 2023-07-12 19:17:38,195 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=23, state=SUCCESS; DeleteTableProcedure table=np1:table1 in 37 msec 2023-07-12 19:17:38,274 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40539] master.MasterRpcServices(1230): Checking to see if procedure is done pid=23 2023-07-12 19:17:38,275 INFO [Listener at localhost.localdomain/37875] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: np1:table1, procId: 23 completed 2023-07-12 19:17:38,281 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40539] master.HMaster$17(3086): Client=jenkins//148.251.75.209 delete np1 2023-07-12 19:17:38,290 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40539] procedure2.ProcedureExecutor(1029): Stored pid=24, state=RUNNABLE:DELETE_NAMESPACE_PREPARE; DeleteNamespaceProcedure, namespace=np1 2023-07-12 19:17:38,292 INFO [PEWorker-5] procedure.DeleteNamespaceProcedure(73): pid=24, state=RUNNABLE:DELETE_NAMESPACE_PREPARE, locked=true; DeleteNamespaceProcedure, namespace=np1 2023-07-12 19:17:38,295 INFO [PEWorker-5] procedure.DeleteNamespaceProcedure(73): pid=24, state=RUNNABLE:DELETE_NAMESPACE_DELETE_FROM_NS_TABLE, locked=true; DeleteNamespaceProcedure, namespace=np1 2023-07-12 19:17:38,298 INFO [PEWorker-5] procedure.DeleteNamespaceProcedure(73): pid=24, state=RUNNABLE:DELETE_NAMESPACE_REMOVE_FROM_ZK, locked=true; DeleteNamespaceProcedure, namespace=np1 2023-07-12 19:17:38,298 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40539] master.MasterRpcServices(1230): Checking to see if procedure is done pid=24 2023-07-12 19:17:38,299 DEBUG [Listener at localhost.localdomain/37875-EventThread] zookeeper.ZKWatcher(600): master:40539-0x100829e11e40000, quorum=127.0.0.1:51847, 
baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/namespace/np1 2023-07-12 19:17:38,299 DEBUG [Listener at localhost.localdomain/37875-EventThread] zookeeper.ZKWatcher(600): master:40539-0x100829e11e40000, quorum=127.0.0.1:51847, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-12 19:17:38,300 INFO [PEWorker-5] procedure.DeleteNamespaceProcedure(73): pid=24, state=RUNNABLE:DELETE_NAMESPACE_DELETE_DIRECTORIES, locked=true; DeleteNamespaceProcedure, namespace=np1 2023-07-12 19:17:38,302 INFO [PEWorker-5] procedure.DeleteNamespaceProcedure(73): pid=24, state=RUNNABLE:DELETE_NAMESPACE_REMOVE_NAMESPACE_QUOTA, locked=true; DeleteNamespaceProcedure, namespace=np1 2023-07-12 19:17:38,303 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=24, state=SUCCESS; DeleteNamespaceProcedure, namespace=np1 in 20 msec 2023-07-12 19:17:38,400 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40539] master.MasterRpcServices(1230): Checking to see if procedure is done pid=24 2023-07-12 19:17:38,400 INFO [Listener at localhost.localdomain/37875] hbase.HBaseTestingUtility(1286): Shutting down minicluster 2023-07-12 19:17:38,400 INFO [Listener at localhost.localdomain/37875] client.ConnectionImplementation(1979): Closing master protocol: MasterService 2023-07-12 19:17:38,400 DEBUG [Listener at localhost.localdomain/37875] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x28cba82d to 127.0.0.1:51847 2023-07-12 19:17:38,401 DEBUG [Listener at localhost.localdomain/37875] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-12 19:17:38,401 DEBUG [Listener at localhost.localdomain/37875] util.JVMClusterUtil(237): Shutting down HBase Cluster 2023-07-12 19:17:38,401 DEBUG [Listener at localhost.localdomain/37875] util.JVMClusterUtil(257): Found active master hash=807901536, stopped=false 2023-07-12 19:17:38,401 DEBUG [Listener at localhost.localdomain/37875] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint 2023-07-12 19:17:38,401 DEBUG [Listener at localhost.localdomain/37875] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver 2023-07-12 19:17:38,401 DEBUG [Listener at localhost.localdomain/37875] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.quotas.MasterQuotasObserver 2023-07-12 19:17:38,401 INFO [Listener at localhost.localdomain/37875] master.ServerManager(901): Cluster shutdown requested of master=jenkins-hbase20.apache.org,40539,1689189455229 2023-07-12 19:17:38,402 DEBUG [Listener at localhost.localdomain/37875-EventThread] zookeeper.ZKWatcher(600): regionserver:42773-0x100829e11e40003, quorum=127.0.0.1:51847, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-12 19:17:38,402 DEBUG [Listener at localhost.localdomain/37875-EventThread] zookeeper.ZKWatcher(600): regionserver:36109-0x100829e11e40001, quorum=127.0.0.1:51847, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-12 19:17:38,402 DEBUG [Listener at localhost.localdomain/37875-EventThread] zookeeper.ZKWatcher(600): regionserver:38905-0x100829e11e40002, quorum=127.0.0.1:51847, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-12 19:17:38,402 INFO [Listener at 
localhost.localdomain/37875] procedure2.ProcedureExecutor(629): Stopping 2023-07-12 19:17:38,402 DEBUG [Listener at localhost.localdomain/37875-EventThread] zookeeper.ZKWatcher(600): master:40539-0x100829e11e40000, quorum=127.0.0.1:51847, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-12 19:17:38,403 DEBUG [Listener at localhost.localdomain/37875-EventThread] zookeeper.ZKWatcher(600): master:40539-0x100829e11e40000, quorum=127.0.0.1:51847, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-12 19:17:38,405 DEBUG [Listener at localhost.localdomain/37875] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x5a503f68 to 127.0.0.1:51847 2023-07-12 19:17:38,406 DEBUG [Listener at localhost.localdomain/37875] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-12 19:17:38,406 INFO [Listener at localhost.localdomain/37875] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase20.apache.org,36109,1689189455374' ***** 2023-07-12 19:17:38,406 INFO [Listener at localhost.localdomain/37875] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-12 19:17:38,406 INFO [Listener at localhost.localdomain/37875] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase20.apache.org,38905,1689189455481' ***** 2023-07-12 19:17:38,406 INFO [Listener at localhost.localdomain/37875] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-12 19:17:38,406 INFO [Listener at localhost.localdomain/37875] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase20.apache.org,42773,1689189455534' ***** 2023-07-12 19:17:38,406 INFO [RS:1;jenkins-hbase20:38905] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-12 19:17:38,407 INFO [Listener at localhost.localdomain/37875] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-12 19:17:38,407 INFO [RS:0;jenkins-hbase20:36109] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-12 19:17:38,407 INFO [RS:2;jenkins-hbase20:42773] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-12 19:17:38,423 INFO [RS:1;jenkins-hbase20:38905] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@58cf4c89{regionserver,/,null,STOPPED}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver} 2023-07-12 19:17:38,423 INFO [RS:2;jenkins-hbase20:42773] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@2e153a1b{regionserver,/,null,STOPPED}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver} 2023-07-12 19:17:38,423 INFO [RS:0;jenkins-hbase20:36109] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@5b2a6fdb{regionserver,/,null,STOPPED}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver} 2023-07-12 19:17:38,423 INFO [RS:0;jenkins-hbase20:36109] server.AbstractConnector(383): Stopped ServerConnector@477f3f82{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-12 19:17:38,423 INFO [RS:0;jenkins-hbase20:36109] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-12 19:17:38,423 INFO [RS:2;jenkins-hbase20:42773] server.AbstractConnector(383): Stopped ServerConnector@2fe84205{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-12 19:17:38,423 INFO [RS:1;jenkins-hbase20:38905] 
server.AbstractConnector(383): Stopped ServerConnector@52ffdf29{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-12 19:17:38,424 INFO [RS:2;jenkins-hbase20:42773] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-12 19:17:38,424 INFO [RS:1;jenkins-hbase20:38905] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-12 19:17:38,427 INFO [RS:0;jenkins-hbase20:36109] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@1ae4a868{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,STOPPED} 2023-07-12 19:17:38,427 INFO [RS:1;jenkins-hbase20:38905] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@54fa8dac{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,STOPPED} 2023-07-12 19:17:38,427 INFO [RS:0;jenkins-hbase20:36109] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@69a66844{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d9099bf2-01da-4522-5f69-2ec5a570bb0d/hadoop.log.dir/,STOPPED} 2023-07-12 19:17:38,427 INFO [RS:1;jenkins-hbase20:38905] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@5b2db004{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d9099bf2-01da-4522-5f69-2ec5a570bb0d/hadoop.log.dir/,STOPPED} 2023-07-12 19:17:38,427 INFO [RS:2;jenkins-hbase20:42773] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@2dd89fa0{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,STOPPED} 2023-07-12 19:17:38,428 INFO [RS:2;jenkins-hbase20:42773] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@2135cae7{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d9099bf2-01da-4522-5f69-2ec5a570bb0d/hadoop.log.dir/,STOPPED} 2023-07-12 19:17:38,428 INFO [RS:1;jenkins-hbase20:38905] regionserver.HeapMemoryManager(220): Stopping 2023-07-12 19:17:38,428 INFO [RS:0;jenkins-hbase20:36109] regionserver.HeapMemoryManager(220): Stopping 2023-07-12 19:17:38,428 INFO [RS:1;jenkins-hbase20:38905] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-12 19:17:38,428 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-12 19:17:38,428 INFO [RS:1;jenkins-hbase20:38905] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-12 19:17:38,429 INFO [RS:1;jenkins-hbase20:38905] regionserver.HRegionServer(3305): Received CLOSE for 9448d6dfe9afb70578e5490ef8dbac89 2023-07-12 19:17:38,429 INFO [RS:1;jenkins-hbase20:38905] regionserver.HRegionServer(1144): stopping server jenkins-hbase20.apache.org,38905,1689189455481 2023-07-12 19:17:38,429 DEBUG [RS:1;jenkins-hbase20:38905] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x583cb745 to 127.0.0.1:51847 2023-07-12 19:17:38,428 INFO [RS:0;jenkins-hbase20:36109] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 
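
The entries from 19:17:37,848 through 19:17:38,303 above record the cleanup path before shutdown: a ListRSGroupInfos call, then DisableTableProcedure (pid=20), DeleteTableProcedure (pid=23) and DeleteNamespaceProcedure (pid=24) for np1. A hedged sketch of the corresponding client calls follows, assuming an open Connection named conn and the optional hbase-rsgroup module's RSGroupAdminClient on the classpath.

import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

public class Np1TeardownSketch {
  // Sketch of the client side of pids 20-24 above: list rsgroups, then disable and
  // delete np1:table1, then drop the now-empty np1 namespace.
  static void tearDownNp1(Connection conn) throws Exception {
    try (Admin admin = conn.getAdmin()) {
      for (RSGroupInfo group : new RSGroupAdminClient(conn).listRSGroups()) {
        System.out.println("rsgroup: " + group.getName());  // ListRSGroupInfos, as logged
      }
      TableName table = TableName.valueOf("np1", "table1");
      admin.disableTable(table);     // DisableTableProcedure (pid=20)
      admin.deleteTable(table);      // DeleteTableProcedure  (pid=23)
      admin.deleteNamespace("np1");  // DeleteNamespaceProcedure (pid=24)
    }
  }
}
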
2023-07-12 19:17:38,434 DEBUG [RS:1;jenkins-hbase20:38905] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-12 19:17:38,434 INFO [RS:1;jenkins-hbase20:38905] regionserver.HRegionServer(1474): Waiting on 1 regions to close 2023-07-12 19:17:38,434 DEBUG [RS:1;jenkins-hbase20:38905] regionserver.HRegionServer(1478): Online Regions={9448d6dfe9afb70578e5490ef8dbac89=hbase:quota,,1689189456914.9448d6dfe9afb70578e5490ef8dbac89.} 2023-07-12 19:17:38,435 DEBUG [RS:1;jenkins-hbase20:38905] regionserver.HRegionServer(1504): Waiting on 9448d6dfe9afb70578e5490ef8dbac89 2023-07-12 19:17:38,434 INFO [regionserver/jenkins-hbase20:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-12 19:17:38,434 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-12 19:17:38,430 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing 9448d6dfe9afb70578e5490ef8dbac89, disabling compactions & flushes 2023-07-12 19:17:38,434 INFO [RS:2;jenkins-hbase20:42773] regionserver.HeapMemoryManager(220): Stopping 2023-07-12 19:17:38,434 INFO [RS:0;jenkins-hbase20:36109] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-12 19:17:38,435 INFO [RS:2;jenkins-hbase20:42773] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-12 19:17:38,435 INFO [RS:0;jenkins-hbase20:36109] regionserver.HRegionServer(3305): Received CLOSE for 21d409e1c713baa8e90655fe26d7ba8b 2023-07-12 19:17:38,435 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-12 19:17:38,435 INFO [RS:2;jenkins-hbase20:42773] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-12 19:17:38,435 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region hbase:quota,,1689189456914.9448d6dfe9afb70578e5490ef8dbac89. 2023-07-12 19:17:38,435 INFO [RS:2;jenkins-hbase20:42773] regionserver.HRegionServer(3305): Received CLOSE for a998cdea7295a266c95a5cb722f0c6bc 2023-07-12 19:17:38,435 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:quota,,1689189456914.9448d6dfe9afb70578e5490ef8dbac89. 2023-07-12 19:17:38,435 INFO [RS:0;jenkins-hbase20:36109] regionserver.HRegionServer(1144): stopping server jenkins-hbase20.apache.org,36109,1689189455374 2023-07-12 19:17:38,435 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:quota,,1689189456914.9448d6dfe9afb70578e5490ef8dbac89. after waiting 0 ms 2023-07-12 19:17:38,435 DEBUG [RS:0;jenkins-hbase20:36109] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x57ad86ce to 127.0.0.1:51847 2023-07-12 19:17:38,435 INFO [RS:2;jenkins-hbase20:42773] regionserver.HRegionServer(1144): stopping server jenkins-hbase20.apache.org,42773,1689189455534 2023-07-12 19:17:38,435 DEBUG [RS:0;jenkins-hbase20:36109] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-12 19:17:38,435 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:quota,,1689189456914.9448d6dfe9afb70578e5490ef8dbac89. 
2023-07-12 19:17:38,435 INFO [RS:0;jenkins-hbase20:36109] regionserver.HRegionServer(1474): Waiting on 1 regions to close 2023-07-12 19:17:38,435 DEBUG [RS:2;jenkins-hbase20:42773] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x33fa7b88 to 127.0.0.1:51847 2023-07-12 19:17:38,435 DEBUG [RS:0;jenkins-hbase20:36109] regionserver.HRegionServer(1478): Online Regions={21d409e1c713baa8e90655fe26d7ba8b=hbase:namespace,,1689189456440.21d409e1c713baa8e90655fe26d7ba8b.} 2023-07-12 19:17:38,436 DEBUG [RS:2;jenkins-hbase20:42773] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-12 19:17:38,436 INFO [RS:2;jenkins-hbase20:42773] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-12 19:17:38,436 DEBUG [RS:0;jenkins-hbase20:36109] regionserver.HRegionServer(1504): Waiting on 21d409e1c713baa8e90655fe26d7ba8b 2023-07-12 19:17:38,436 INFO [RS:2;jenkins-hbase20:42773] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-12 19:17:38,438 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing 21d409e1c713baa8e90655fe26d7ba8b, disabling compactions & flushes 2023-07-12 19:17:38,438 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing a998cdea7295a266c95a5cb722f0c6bc, disabling compactions & flushes 2023-07-12 19:17:38,438 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1689189456440.21d409e1c713baa8e90655fe26d7ba8b. 2023-07-12 19:17:38,438 INFO [RS:2;jenkins-hbase20:42773] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-12 19:17:38,438 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1689189456440.21d409e1c713baa8e90655fe26d7ba8b. 2023-07-12 19:17:38,438 INFO [RS:2;jenkins-hbase20:42773] regionserver.HRegionServer(3305): Received CLOSE for 1588230740 2023-07-12 19:17:38,438 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1689189456413.a998cdea7295a266c95a5cb722f0c6bc. 2023-07-12 19:17:38,438 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1689189456440.21d409e1c713baa8e90655fe26d7ba8b. after waiting 0 ms 2023-07-12 19:17:38,438 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1689189456413.a998cdea7295a266c95a5cb722f0c6bc. 2023-07-12 19:17:38,438 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1689189456440.21d409e1c713baa8e90655fe26d7ba8b. 2023-07-12 19:17:38,439 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1689189456413.a998cdea7295a266c95a5cb722f0c6bc. after waiting 0 ms 2023-07-12 19:17:38,439 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(2745): Flushing 21d409e1c713baa8e90655fe26d7ba8b 1/1 column families, dataSize=215 B heapSize=776 B 2023-07-12 19:17:38,439 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1689189456413.a998cdea7295a266c95a5cb722f0c6bc. 
2023-07-12 19:17:38,439 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(2745): Flushing a998cdea7295a266c95a5cb722f0c6bc 1/1 column families, dataSize=594 B heapSize=1.05 KB 2023-07-12 19:17:38,439 INFO [RS:2;jenkins-hbase20:42773] regionserver.HRegionServer(1474): Waiting on 2 regions to close 2023-07-12 19:17:38,439 DEBUG [RS:2;jenkins-hbase20:42773] regionserver.HRegionServer(1478): Online Regions={a998cdea7295a266c95a5cb722f0c6bc=hbase:rsgroup,,1689189456413.a998cdea7295a266c95a5cb722f0c6bc., 1588230740=hbase:meta,,1.1588230740} 2023-07-12 19:17:38,439 DEBUG [RS:2;jenkins-hbase20:42773] regionserver.HRegionServer(1504): Waiting on 1588230740, a998cdea7295a266c95a5cb722f0c6bc 2023-07-12 19:17:38,442 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-12 19:17:38,442 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-12 19:17:38,442 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-12 19:17:38,442 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-12 19:17:38,442 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-12 19:17:38,442 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=5.90 KB heapSize=11.10 KB 2023-07-12 19:17:38,495 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:38007/user/jenkins/test-data/171d5f8e-5f45-9a5a-0a3b-0ee09af95a3e/data/hbase/quota/9448d6dfe9afb70578e5490ef8dbac89/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-12 19:17:38,497 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1838): Closed hbase:quota,,1689189456914.9448d6dfe9afb70578e5490ef8dbac89. 2023-07-12 19:17:38,497 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for 9448d6dfe9afb70578e5490ef8dbac89: 2023-07-12 19:17:38,497 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.CloseRegionHandler(117): Closed hbase:quota,,1689189456914.9448d6dfe9afb70578e5490ef8dbac89. 
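
Each close above flushes the region's memstore to an HFile before the recovered.edits seqid marker is written ("Flushing ... 1/1 column families"). The same flush path can also be driven explicitly from a client; the tiny sketch below is illustrative only and assumes an Admin handle plus the quota and rsgroup system tables seen in this log.

import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;

public class FlushSketch {
  // An on-demand flush exercises the same MemStore -> HFile write that the close
  // handlers above perform implicitly during shutdown.
  static void flushQuotaAndRsgroup(Admin admin) throws Exception {
    admin.flush(TableName.valueOf("hbase:quota"));
    admin.flush(TableName.valueOf("hbase:rsgroup"));
  }
}
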
2023-07-12 19:17:38,502 INFO [regionserver/jenkins-hbase20:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-12 19:17:38,504 INFO [regionserver/jenkins-hbase20:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-12 19:17:38,506 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:42773-0x100829e11e40003, quorum=127.0.0.1:51847, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-12 19:17:38,507 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:36109-0x100829e11e40001, quorum=127.0.0.1:51847, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-12 19:17:38,507 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:38905-0x100829e11e40002, quorum=127.0.0.1:51847, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-12 19:17:38,507 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:40539-0x100829e11e40000, quorum=127.0.0.1:51847, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-12 19:17:38,563 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=215 B at sequenceid=8 (bloomFilter=true), to=hdfs://localhost.localdomain:38007/user/jenkins/test-data/171d5f8e-5f45-9a5a-0a3b-0ee09af95a3e/data/hbase/namespace/21d409e1c713baa8e90655fe26d7ba8b/.tmp/info/0f5a107551144283af7f40005b3d3dda 2023-07-12 19:17:38,568 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=5.27 KB at sequenceid=31 (bloomFilter=false), to=hdfs://localhost.localdomain:38007/user/jenkins/test-data/171d5f8e-5f45-9a5a-0a3b-0ee09af95a3e/data/hbase/meta/1588230740/.tmp/info/c6333a21e59e4b6a99a63a57e122154d 2023-07-12 19:17:38,569 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=594 B at sequenceid=7 (bloomFilter=true), to=hdfs://localhost.localdomain:38007/user/jenkins/test-data/171d5f8e-5f45-9a5a-0a3b-0ee09af95a3e/data/hbase/rsgroup/a998cdea7295a266c95a5cb722f0c6bc/.tmp/m/8393d3dd39dc4b93bd6831fe43c9cb12 2023-07-12 19:17:38,573 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 0f5a107551144283af7f40005b3d3dda 2023-07-12 19:17:38,574 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:38007/user/jenkins/test-data/171d5f8e-5f45-9a5a-0a3b-0ee09af95a3e/data/hbase/namespace/21d409e1c713baa8e90655fe26d7ba8b/.tmp/info/0f5a107551144283af7f40005b3d3dda as hdfs://localhost.localdomain:38007/user/jenkins/test-data/171d5f8e-5f45-9a5a-0a3b-0ee09af95a3e/data/hbase/namespace/21d409e1c713baa8e90655fe26d7ba8b/info/0f5a107551144283af7f40005b3d3dda 2023-07-12 19:17:38,577 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for c6333a21e59e4b6a99a63a57e122154d 2023-07-12 19:17:38,586 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:38007/user/jenkins/test-data/171d5f8e-5f45-9a5a-0a3b-0ee09af95a3e/data/hbase/rsgroup/a998cdea7295a266c95a5cb722f0c6bc/.tmp/m/8393d3dd39dc4b93bd6831fe43c9cb12 as 
hdfs://localhost.localdomain:38007/user/jenkins/test-data/171d5f8e-5f45-9a5a-0a3b-0ee09af95a3e/data/hbase/rsgroup/a998cdea7295a266c95a5cb722f0c6bc/m/8393d3dd39dc4b93bd6831fe43c9cb12 2023-07-12 19:17:38,587 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 0f5a107551144283af7f40005b3d3dda 2023-07-12 19:17:38,587 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:38007/user/jenkins/test-data/171d5f8e-5f45-9a5a-0a3b-0ee09af95a3e/data/hbase/namespace/21d409e1c713baa8e90655fe26d7ba8b/info/0f5a107551144283af7f40005b3d3dda, entries=3, sequenceid=8, filesize=5.0 K 2023-07-12 19:17:38,590 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~215 B/215, heapSize ~760 B/760, currentSize=0 B/0 for 21d409e1c713baa8e90655fe26d7ba8b in 151ms, sequenceid=8, compaction requested=false 2023-07-12 19:17:38,602 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:38007/user/jenkins/test-data/171d5f8e-5f45-9a5a-0a3b-0ee09af95a3e/data/hbase/rsgroup/a998cdea7295a266c95a5cb722f0c6bc/m/8393d3dd39dc4b93bd6831fe43c9cb12, entries=1, sequenceid=7, filesize=4.9 K 2023-07-12 19:17:38,604 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~594 B/594, heapSize ~1.04 KB/1064, currentSize=0 B/0 for a998cdea7295a266c95a5cb722f0c6bc in 165ms, sequenceid=7, compaction requested=false 2023-07-12 19:17:38,613 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:38007/user/jenkins/test-data/171d5f8e-5f45-9a5a-0a3b-0ee09af95a3e/data/hbase/namespace/21d409e1c713baa8e90655fe26d7ba8b/recovered.edits/11.seqid, newMaxSeqId=11, maxSeqId=1 2023-07-12 19:17:38,615 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1838): Closed hbase:namespace,,1689189456440.21d409e1c713baa8e90655fe26d7ba8b. 2023-07-12 19:17:38,615 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for 21d409e1c713baa8e90655fe26d7ba8b: 2023-07-12 19:17:38,615 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.CloseRegionHandler(117): Closed hbase:namespace,,1689189456440.21d409e1c713baa8e90655fe26d7ba8b. 2023-07-12 19:17:38,617 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:38007/user/jenkins/test-data/171d5f8e-5f45-9a5a-0a3b-0ee09af95a3e/data/hbase/rsgroup/a998cdea7295a266c95a5cb722f0c6bc/recovered.edits/10.seqid, newMaxSeqId=10, maxSeqId=1 2023-07-12 19:17:38,617 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-12 19:17:38,618 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1689189456413.a998cdea7295a266c95a5cb722f0c6bc. 2023-07-12 19:17:38,618 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for a998cdea7295a266c95a5cb722f0c6bc: 2023-07-12 19:17:38,618 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.CloseRegionHandler(117): Closed hbase:rsgroup,,1689189456413.a998cdea7295a266c95a5cb722f0c6bc. 
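
The hbase:meta flush in progress here ("Flushing 1588230740 3/3 column families") persists the catalog's info, rep_barrier and table families. For orientation only, a small sketch that reads those rows back through the client API; the Connection named conn is assumed.

import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class MetaPeekSketch {
  // Dump the catalog row keys whose edits the meta flush above is persisting.
  static void dumpMeta(Connection conn) throws Exception {
    Scan scan = new Scan()
        .addFamily(Bytes.toBytes("info"))
        .addFamily(Bytes.toBytes("table"));  // "rep_barrier" is the third family in the flush
    try (Table meta = conn.getTable(TableName.META_TABLE_NAME);
         ResultScanner scanner = meta.getScanner(scan)) {
      for (Result r : scanner) {
        System.out.println(Bytes.toStringBinary(r.getRow()));
      }
    }
  }
}
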
2023-07-12 19:17:38,618 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=90 B at sequenceid=31 (bloomFilter=false), to=hdfs://localhost.localdomain:38007/user/jenkins/test-data/171d5f8e-5f45-9a5a-0a3b-0ee09af95a3e/data/hbase/meta/1588230740/.tmp/rep_barrier/0798fbbabb1f46b7a165e6cd9d2ddfd1 2023-07-12 19:17:38,625 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 0798fbbabb1f46b7a165e6cd9d2ddfd1 2023-07-12 19:17:38,635 INFO [RS:1;jenkins-hbase20:38905] regionserver.HRegionServer(1170): stopping server jenkins-hbase20.apache.org,38905,1689189455481; all regions closed. 2023-07-12 19:17:38,635 DEBUG [RS:1;jenkins-hbase20:38905] quotas.QuotaCache(100): Stopping QuotaRefresherChore chore. 2023-07-12 19:17:38,638 INFO [RS:0;jenkins-hbase20:36109] regionserver.HRegionServer(1170): stopping server jenkins-hbase20.apache.org,36109,1689189455374; all regions closed. 2023-07-12 19:17:38,638 DEBUG [RS:0;jenkins-hbase20:36109] quotas.QuotaCache(100): Stopping QuotaRefresherChore chore. 2023-07-12 19:17:38,643 DEBUG [RS:2;jenkins-hbase20:42773] regionserver.HRegionServer(1504): Waiting on 1588230740 2023-07-12 19:17:38,643 WARN [Close-WAL-Writer-0] asyncfs.FanOutOneBlockAsyncDFSOutputHelper(641): complete file /user/jenkins/test-data/171d5f8e-5f45-9a5a-0a3b-0ee09af95a3e/WALs/jenkins-hbase20.apache.org,38905,1689189455481/jenkins-hbase20.apache.org%2C38905%2C1689189455481.1689189456136 not finished, retry = 0 2023-07-12 19:17:38,651 WARN [Close-WAL-Writer-0] asyncfs.FanOutOneBlockAsyncDFSOutputHelper(641): complete file /user/jenkins/test-data/171d5f8e-5f45-9a5a-0a3b-0ee09af95a3e/WALs/jenkins-hbase20.apache.org,36109,1689189455374/jenkins-hbase20.apache.org%2C36109%2C1689189455374.1689189456119 not finished, retry = 0 2023-07-12 19:17:38,664 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=562 B at sequenceid=31 (bloomFilter=false), to=hdfs://localhost.localdomain:38007/user/jenkins/test-data/171d5f8e-5f45-9a5a-0a3b-0ee09af95a3e/data/hbase/meta/1588230740/.tmp/table/76b046ba09ca447cb44e6e5b0542d9b7 2023-07-12 19:17:38,670 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 76b046ba09ca447cb44e6e5b0542d9b7 2023-07-12 19:17:38,671 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:38007/user/jenkins/test-data/171d5f8e-5f45-9a5a-0a3b-0ee09af95a3e/data/hbase/meta/1588230740/.tmp/info/c6333a21e59e4b6a99a63a57e122154d as hdfs://localhost.localdomain:38007/user/jenkins/test-data/171d5f8e-5f45-9a5a-0a3b-0ee09af95a3e/data/hbase/meta/1588230740/info/c6333a21e59e4b6a99a63a57e122154d 2023-07-12 19:17:38,679 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for c6333a21e59e4b6a99a63a57e122154d 2023-07-12 19:17:38,679 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:38007/user/jenkins/test-data/171d5f8e-5f45-9a5a-0a3b-0ee09af95a3e/data/hbase/meta/1588230740/info/c6333a21e59e4b6a99a63a57e122154d, entries=32, sequenceid=31, filesize=8.5 K 2023-07-12 19:17:38,682 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] 
regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:38007/user/jenkins/test-data/171d5f8e-5f45-9a5a-0a3b-0ee09af95a3e/data/hbase/meta/1588230740/.tmp/rep_barrier/0798fbbabb1f46b7a165e6cd9d2ddfd1 as hdfs://localhost.localdomain:38007/user/jenkins/test-data/171d5f8e-5f45-9a5a-0a3b-0ee09af95a3e/data/hbase/meta/1588230740/rep_barrier/0798fbbabb1f46b7a165e6cd9d2ddfd1 2023-07-12 19:17:38,690 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 0798fbbabb1f46b7a165e6cd9d2ddfd1 2023-07-12 19:17:38,690 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:38007/user/jenkins/test-data/171d5f8e-5f45-9a5a-0a3b-0ee09af95a3e/data/hbase/meta/1588230740/rep_barrier/0798fbbabb1f46b7a165e6cd9d2ddfd1, entries=1, sequenceid=31, filesize=4.9 K 2023-07-12 19:17:38,691 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:38007/user/jenkins/test-data/171d5f8e-5f45-9a5a-0a3b-0ee09af95a3e/data/hbase/meta/1588230740/.tmp/table/76b046ba09ca447cb44e6e5b0542d9b7 as hdfs://localhost.localdomain:38007/user/jenkins/test-data/171d5f8e-5f45-9a5a-0a3b-0ee09af95a3e/data/hbase/meta/1588230740/table/76b046ba09ca447cb44e6e5b0542d9b7 2023-07-12 19:17:38,698 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 76b046ba09ca447cb44e6e5b0542d9b7 2023-07-12 19:17:38,698 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:38007/user/jenkins/test-data/171d5f8e-5f45-9a5a-0a3b-0ee09af95a3e/data/hbase/meta/1588230740/table/76b046ba09ca447cb44e6e5b0542d9b7, entries=8, sequenceid=31, filesize=5.2 K 2023-07-12 19:17:38,699 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~5.90 KB/6045, heapSize ~11.05 KB/11320, currentSize=0 B/0 for 1588230740 in 257ms, sequenceid=31, compaction requested=false 2023-07-12 19:17:38,715 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:38007/user/jenkins/test-data/171d5f8e-5f45-9a5a-0a3b-0ee09af95a3e/data/hbase/meta/1588230740/recovered.edits/34.seqid, newMaxSeqId=34, maxSeqId=1 2023-07-12 19:17:38,716 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-12 19:17:38,716 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-12 19:17:38,716 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-12 19:17:38,716 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] handler.CloseRegionHandler(117): Closed hbase:meta,,1.1588230740 2023-07-12 19:17:38,747 DEBUG [RS:1;jenkins-hbase20:38905] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/171d5f8e-5f45-9a5a-0a3b-0ee09af95a3e/oldWALs 2023-07-12 19:17:38,747 INFO [RS:1;jenkins-hbase20:38905] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase20.apache.org%2C38905%2C1689189455481:(num 1689189456136) 2023-07-12 19:17:38,747 DEBUG [RS:1;jenkins-hbase20:38905] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-12 19:17:38,747 INFO 
[RS:1;jenkins-hbase20:38905] regionserver.LeaseManager(133): Closed leases 2023-07-12 19:17:38,748 INFO [RS:1;jenkins-hbase20:38905] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase20:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-07-12 19:17:38,748 INFO [RS:1;jenkins-hbase20:38905] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-12 19:17:38,748 INFO [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-12 19:17:38,748 INFO [RS:1;jenkins-hbase20:38905] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-12 19:17:38,748 INFO [RS:1;jenkins-hbase20:38905] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-12 19:17:38,749 INFO [RS:1;jenkins-hbase20:38905] ipc.NettyRpcServer(158): Stopping server on /148.251.75.209:38905 2023-07-12 19:17:38,752 DEBUG [Listener at localhost.localdomain/37875-EventThread] zookeeper.ZKWatcher(600): regionserver:42773-0x100829e11e40003, quorum=127.0.0.1:51847, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase20.apache.org,38905,1689189455481 2023-07-12 19:17:38,752 DEBUG [Listener at localhost.localdomain/37875-EventThread] zookeeper.ZKWatcher(600): regionserver:36109-0x100829e11e40001, quorum=127.0.0.1:51847, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase20.apache.org,38905,1689189455481 2023-07-12 19:17:38,752 DEBUG [Listener at localhost.localdomain/37875-EventThread] zookeeper.ZKWatcher(600): regionserver:42773-0x100829e11e40003, quorum=127.0.0.1:51847, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-12 19:17:38,752 DEBUG [Listener at localhost.localdomain/37875-EventThread] zookeeper.ZKWatcher(600): master:40539-0x100829e11e40000, quorum=127.0.0.1:51847, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-12 19:17:38,752 DEBUG [Listener at localhost.localdomain/37875-EventThread] zookeeper.ZKWatcher(600): regionserver:38905-0x100829e11e40002, quorum=127.0.0.1:51847, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase20.apache.org,38905,1689189455481 2023-07-12 19:17:38,752 DEBUG [Listener at localhost.localdomain/37875-EventThread] zookeeper.ZKWatcher(600): regionserver:38905-0x100829e11e40002, quorum=127.0.0.1:51847, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-12 19:17:38,752 DEBUG [Listener at localhost.localdomain/37875-EventThread] zookeeper.ZKWatcher(600): regionserver:36109-0x100829e11e40001, quorum=127.0.0.1:51847, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-12 19:17:38,756 DEBUG [RS:0;jenkins-hbase20:36109] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/171d5f8e-5f45-9a5a-0a3b-0ee09af95a3e/oldWALs 2023-07-12 19:17:38,756 INFO [RS:0;jenkins-hbase20:36109] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase20.apache.org%2C36109%2C1689189455374:(num 1689189456119) 2023-07-12 19:17:38,756 DEBUG [RS:0;jenkins-hbase20:36109] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-12 
19:17:38,756 INFO [RS:0;jenkins-hbase20:36109] regionserver.LeaseManager(133): Closed leases 2023-07-12 19:17:38,756 INFO [RS:0;jenkins-hbase20:36109] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase20:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-07-12 19:17:38,756 INFO [RS:0;jenkins-hbase20:36109] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-12 19:17:38,756 INFO [RS:0;jenkins-hbase20:36109] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-12 19:17:38,757 INFO [RS:0;jenkins-hbase20:36109] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-12 19:17:38,756 INFO [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-12 19:17:38,758 INFO [RS:0;jenkins-hbase20:36109] ipc.NettyRpcServer(158): Stopping server on /148.251.75.209:36109 2023-07-12 19:17:38,762 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase20.apache.org,38905,1689189455481] 2023-07-12 19:17:38,762 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase20.apache.org,38905,1689189455481; numProcessing=1 2023-07-12 19:17:38,766 DEBUG [Listener at localhost.localdomain/37875-EventThread] zookeeper.ZKWatcher(600): regionserver:36109-0x100829e11e40001, quorum=127.0.0.1:51847, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase20.apache.org,36109,1689189455374 2023-07-12 19:17:38,766 DEBUG [Listener at localhost.localdomain/37875-EventThread] zookeeper.ZKWatcher(600): regionserver:42773-0x100829e11e40003, quorum=127.0.0.1:51847, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase20.apache.org,36109,1689189455374 2023-07-12 19:17:38,766 DEBUG [Listener at localhost.localdomain/37875-EventThread] zookeeper.ZKWatcher(600): master:40539-0x100829e11e40000, quorum=127.0.0.1:51847, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-12 19:17:38,766 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase20.apache.org,38905,1689189455481 already deleted, retry=false 2023-07-12 19:17:38,767 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase20.apache.org,38905,1689189455481 expired; onlineServers=2 2023-07-12 19:17:38,767 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase20.apache.org,36109,1689189455374] 2023-07-12 19:17:38,767 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase20.apache.org,36109,1689189455374; numProcessing=2 2023-07-12 19:17:38,768 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase20.apache.org,36109,1689189455374 already deleted, retry=false 2023-07-12 19:17:38,768 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase20.apache.org,36109,1689189455374 expired; onlineServers=1 2023-07-12 19:17:38,843 INFO [RS:2;jenkins-hbase20:42773] regionserver.HRegionServer(1170): stopping server jenkins-hbase20.apache.org,42773,1689189455534; all regions closed. 
2023-07-12 19:17:38,843 DEBUG [RS:2;jenkins-hbase20:42773] quotas.QuotaCache(100): Stopping QuotaRefresherChore chore. 2023-07-12 19:17:38,849 DEBUG [RS:2;jenkins-hbase20:42773] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/171d5f8e-5f45-9a5a-0a3b-0ee09af95a3e/oldWALs 2023-07-12 19:17:38,850 INFO [RS:2;jenkins-hbase20:42773] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase20.apache.org%2C42773%2C1689189455534.meta:.meta(num 1689189456302) 2023-07-12 19:17:38,853 DEBUG [Listener at localhost.localdomain/37875-EventThread] zookeeper.ZKWatcher(600): regionserver:38905-0x100829e11e40002, quorum=127.0.0.1:51847, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-12 19:17:38,853 INFO [RS:1;jenkins-hbase20:38905] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase20.apache.org,38905,1689189455481; zookeeper connection closed. 2023-07-12 19:17:38,853 DEBUG [Listener at localhost.localdomain/37875-EventThread] zookeeper.ZKWatcher(600): regionserver:38905-0x100829e11e40002, quorum=127.0.0.1:51847, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-12 19:17:38,854 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@8ca8ec1] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@8ca8ec1 2023-07-12 19:17:38,856 DEBUG [RS:2;jenkins-hbase20:42773] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/171d5f8e-5f45-9a5a-0a3b-0ee09af95a3e/oldWALs 2023-07-12 19:17:38,856 INFO [RS:2;jenkins-hbase20:42773] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase20.apache.org%2C42773%2C1689189455534:(num 1689189456147) 2023-07-12 19:17:38,856 DEBUG [RS:2;jenkins-hbase20:42773] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-12 19:17:38,856 INFO [RS:2;jenkins-hbase20:42773] regionserver.LeaseManager(133): Closed leases 2023-07-12 19:17:38,856 INFO [RS:2;jenkins-hbase20:42773] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase20:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-07-12 19:17:38,856 INFO [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-12 19:17:38,857 INFO [RS:2;jenkins-hbase20:42773] ipc.NettyRpcServer(158): Stopping server on /148.251.75.209:42773 2023-07-12 19:17:38,871 DEBUG [Listener at localhost.localdomain/37875-EventThread] zookeeper.ZKWatcher(600): regionserver:36109-0x100829e11e40001, quorum=127.0.0.1:51847, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-12 19:17:38,871 INFO [RS:0;jenkins-hbase20:36109] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase20.apache.org,36109,1689189455374; zookeeper connection closed. 
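
The NodeDeleted events on /hbase/rs/jenkins-hbase20.apache.org,... above are how the master notices a stopping region server: each RS holds an ephemeral znode under /hbase/rs, and RegionServerTracker treats its disappearance as an expiration. The following is a stripped-down illustration of that watch pattern with the plain ZooKeeper client; the quorum address is taken from the log, everything else is assumed, and this is not HBase's actual tracker code.

import java.util.List;
import org.apache.zookeeper.WatchedEvent;
import org.apache.zookeeper.Watcher;
import org.apache.zookeeper.ZooKeeper;

public class RsTrackerSketch {
  public static void main(String[] args) throws Exception {
    ZooKeeper zk = new ZooKeeper("127.0.0.1:51847", 30000, event -> { });
    Watcher watch = new Watcher() {
      @Override
      public void process(WatchedEvent event) {
        // NodeChildrenChanged fires when an RS ephemeral znode appears or is deleted,
        // mirroring the ZKWatcher events in the log above. ZooKeeper watches are
        // one-shot, so a real tracker re-registers after each notification.
        System.out.println(event.getType() + " on " + event.getPath());
      }
    };
    List<String> servers = zk.getChildren("/hbase/rs", watch);
    System.out.println("online region servers: " + servers);
    zk.close();
  }
}
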
2023-07-12 19:17:38,872 DEBUG [Listener at localhost.localdomain/37875-EventThread] zookeeper.ZKWatcher(600): regionserver:36109-0x100829e11e40001, quorum=127.0.0.1:51847, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-12 19:17:38,873 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@56034285] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@56034285 2023-07-12 19:17:38,874 DEBUG [Listener at localhost.localdomain/37875-EventThread] zookeeper.ZKWatcher(600): regionserver:42773-0x100829e11e40003, quorum=127.0.0.1:51847, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase20.apache.org,42773,1689189455534 2023-07-12 19:17:38,874 DEBUG [Listener at localhost.localdomain/37875-EventThread] zookeeper.ZKWatcher(600): master:40539-0x100829e11e40000, quorum=127.0.0.1:51847, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-12 19:17:38,875 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase20.apache.org,42773,1689189455534] 2023-07-12 19:17:38,875 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase20.apache.org,42773,1689189455534; numProcessing=3 2023-07-12 19:17:38,875 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase20.apache.org,42773,1689189455534 already deleted, retry=false 2023-07-12 19:17:38,875 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase20.apache.org,42773,1689189455534 expired; onlineServers=0 2023-07-12 19:17:38,875 INFO [RegionServerTracker-0] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase20.apache.org,40539,1689189455229' ***** 2023-07-12 19:17:38,875 INFO [RegionServerTracker-0] regionserver.HRegionServer(2311): STOPPED: Cluster shutdown set; onlineServer=0 2023-07-12 19:17:38,876 DEBUG [M:0;jenkins-hbase20:40539] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@34181bbc, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase20.apache.org/148.251.75.209:0 2023-07-12 19:17:38,876 INFO [M:0;jenkins-hbase20:40539] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-12 19:17:38,878 DEBUG [Listener at localhost.localdomain/37875-EventThread] zookeeper.ZKWatcher(600): master:40539-0x100829e11e40000, quorum=127.0.0.1:51847, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/master 2023-07-12 19:17:38,878 DEBUG [Listener at localhost.localdomain/37875-EventThread] zookeeper.ZKWatcher(600): master:40539-0x100829e11e40000, quorum=127.0.0.1:51847, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-12 19:17:38,878 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:40539-0x100829e11e40000, quorum=127.0.0.1:51847, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-12 19:17:38,878 INFO [M:0;jenkins-hbase20:40539] handler.ContextHandler(1159): Stopped 
o.a.h.t.o.e.j.w.WebAppContext@1c9fa2d2{master,/,null,STOPPED}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/master} 2023-07-12 19:17:38,879 INFO [M:0;jenkins-hbase20:40539] server.AbstractConnector(383): Stopped ServerConnector@4c68eabb{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-12 19:17:38,879 INFO [M:0;jenkins-hbase20:40539] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-12 19:17:38,879 INFO [M:0;jenkins-hbase20:40539] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@5c44c63b{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,STOPPED} 2023-07-12 19:17:38,879 INFO [M:0;jenkins-hbase20:40539] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@7ea17ba6{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d9099bf2-01da-4522-5f69-2ec5a570bb0d/hadoop.log.dir/,STOPPED} 2023-07-12 19:17:38,880 INFO [M:0;jenkins-hbase20:40539] regionserver.HRegionServer(1144): stopping server jenkins-hbase20.apache.org,40539,1689189455229 2023-07-12 19:17:38,880 INFO [M:0;jenkins-hbase20:40539] regionserver.HRegionServer(1170): stopping server jenkins-hbase20.apache.org,40539,1689189455229; all regions closed. 2023-07-12 19:17:38,880 DEBUG [M:0;jenkins-hbase20:40539] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-12 19:17:38,880 INFO [M:0;jenkins-hbase20:40539] master.HMaster(1491): Stopping master jetty server 2023-07-12 19:17:38,880 INFO [M:0;jenkins-hbase20:40539] server.AbstractConnector(383): Stopped ServerConnector@37e1d235{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-12 19:17:38,881 DEBUG [M:0;jenkins-hbase20:40539] cleaner.LogCleaner(198): Cancelling LogCleaner 2023-07-12 19:17:38,881 WARN [OldWALsCleaner-0] cleaner.LogCleaner(186): Interrupted while cleaning old WALs, will try to clean it next round. Exiting. 2023-07-12 19:17:38,881 DEBUG [M:0;jenkins-hbase20:40539] cleaner.HFileCleaner(317): Stopping file delete threads 2023-07-12 19:17:38,881 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster-HFileCleaner.small.0-1689189455839] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase20:0:becomeActiveMaster-HFileCleaner.small.0-1689189455839,5,FailOnTimeoutGroup] 2023-07-12 19:17:38,881 INFO [M:0;jenkins-hbase20:40539] master.MasterMobCompactionThread(168): Waiting for Mob Compaction Thread to finish... 2023-07-12 19:17:38,882 INFO [M:0;jenkins-hbase20:40539] master.MasterMobCompactionThread(168): Waiting for Region Server Mob Compaction Thread to finish... 
2023-07-12 19:17:38,881 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster-HFileCleaner.large.0-1689189455835] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase20:0:becomeActiveMaster-HFileCleaner.large.0-1689189455835,5,FailOnTimeoutGroup] 2023-07-12 19:17:38,882 INFO [M:0;jenkins-hbase20:40539] hbase.ChoreService(369): Chore service for: master/jenkins-hbase20:0 had [ScheduledChore name=QuotaObserverChore, period=60000, unit=MILLISECONDS] on shutdown 2023-07-12 19:17:38,882 DEBUG [M:0;jenkins-hbase20:40539] master.HMaster(1512): Stopping service threads 2023-07-12 19:17:38,883 INFO [M:0;jenkins-hbase20:40539] procedure2.RemoteProcedureDispatcher(119): Stopping procedure remote dispatcher 2023-07-12 19:17:38,883 ERROR [M:0;jenkins-hbase20:40539] procedure2.ProcedureExecutor(653): ThreadGroup java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] contains running threads; null: See STDOUT java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] 2023-07-12 19:17:38,883 INFO [M:0;jenkins-hbase20:40539] region.RegionProcedureStore(113): Stopping the Region Procedure Store, isAbort=false 2023-07-12 19:17:38,883 DEBUG [normalizer-worker-0] normalizer.RegionNormalizerWorker(174): interrupt detected. terminating. 2023-07-12 19:17:38,884 DEBUG [M:0;jenkins-hbase20:40539] zookeeper.ZKUtil(398): master:40539-0x100829e11e40000, quorum=127.0.0.1:51847, baseZNode=/hbase Unable to get data of znode /hbase/master because node does not exist (not an error) 2023-07-12 19:17:38,884 WARN [M:0;jenkins-hbase20:40539] master.ActiveMasterManager(326): Failed get of master address: java.io.IOException: Can't get master address from ZooKeeper; znode data == null 2023-07-12 19:17:38,884 INFO [M:0;jenkins-hbase20:40539] assignment.AssignmentManager(315): Stopping assignment manager 2023-07-12 19:17:38,884 INFO [M:0;jenkins-hbase20:40539] region.MasterRegion(167): Closing local region {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, isAbort=false 2023-07-12 19:17:38,884 DEBUG [M:0;jenkins-hbase20:40539] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-07-12 19:17:38,884 INFO [M:0;jenkins-hbase20:40539] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-12 19:17:38,884 DEBUG [M:0;jenkins-hbase20:40539] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-12 19:17:38,884 DEBUG [M:0;jenkins-hbase20:40539] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-07-12 19:17:38,884 DEBUG [M:0;jenkins-hbase20:40539] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
2023-07-12 19:17:38,884 INFO [M:0;jenkins-hbase20:40539] regionserver.HRegion(2745): Flushing 1595e783b53d99cd5eef43b6debb2682 1/1 column families, dataSize=93.05 KB heapSize=109.20 KB 2023-07-12 19:17:38,898 INFO [M:0;jenkins-hbase20:40539] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=93.05 KB at sequenceid=194 (bloomFilter=true), to=hdfs://localhost.localdomain:38007/user/jenkins/test-data/171d5f8e-5f45-9a5a-0a3b-0ee09af95a3e/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/338b6f244fd143f592404ad518aaac9d 2023-07-12 19:17:38,904 DEBUG [M:0;jenkins-hbase20:40539] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:38007/user/jenkins/test-data/171d5f8e-5f45-9a5a-0a3b-0ee09af95a3e/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/338b6f244fd143f592404ad518aaac9d as hdfs://localhost.localdomain:38007/user/jenkins/test-data/171d5f8e-5f45-9a5a-0a3b-0ee09af95a3e/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/338b6f244fd143f592404ad518aaac9d 2023-07-12 19:17:38,910 INFO [M:0;jenkins-hbase20:40539] regionserver.HStore(1080): Added hdfs://localhost.localdomain:38007/user/jenkins/test-data/171d5f8e-5f45-9a5a-0a3b-0ee09af95a3e/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/338b6f244fd143f592404ad518aaac9d, entries=24, sequenceid=194, filesize=12.4 K 2023-07-12 19:17:38,910 INFO [M:0;jenkins-hbase20:40539] regionserver.HRegion(2948): Finished flush of dataSize ~93.05 KB/95284, heapSize ~109.19 KB/111808, currentSize=0 B/0 for 1595e783b53d99cd5eef43b6debb2682 in 26ms, sequenceid=194, compaction requested=false 2023-07-12 19:17:38,912 INFO [M:0;jenkins-hbase20:40539] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-12 19:17:38,912 DEBUG [M:0;jenkins-hbase20:40539] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-12 19:17:38,917 INFO [M:0;jenkins-hbase20:40539] flush.MasterFlushTableProcedureManager(83): stop: server shutting down. 2023-07-12 19:17:38,917 INFO [master:store-WAL-Roller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-12 19:17:38,918 INFO [M:0;jenkins-hbase20:40539] ipc.NettyRpcServer(158): Stopping server on /148.251.75.209:40539 2023-07-12 19:17:38,919 DEBUG [M:0;jenkins-hbase20:40539] zookeeper.RecoverableZooKeeper(172): Node /hbase/rs/jenkins-hbase20.apache.org,40539,1689189455229 already deleted, retry=false 2023-07-12 19:17:38,975 DEBUG [Listener at localhost.localdomain/37875-EventThread] zookeeper.ZKWatcher(600): regionserver:42773-0x100829e11e40003, quorum=127.0.0.1:51847, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-12 19:17:38,975 INFO [RS:2;jenkins-hbase20:42773] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase20.apache.org,42773,1689189455534; zookeeper connection closed. 
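The flush above traces the full memstore-to-HFile path for the master's local store region: the memstore is written to a temporary file under .tmp/, committed into the store directory, and the region reports the flushed data size, sequence id, and resulting file size. The same path applies to ordinary user tables; as an illustration only (util and tableName are assumed to come from the surrounding test, not from this log), a test can trigger such a flush through the Admin API:

```java
import org.apache.hadoop.hbase.HBaseTestingUtility;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;

public final class FlushSketch {
  private FlushSketch() {}

  // util and tableName are assumed to be provided by the surrounding test.
  static void forceFlush(HBaseTestingUtility util, TableName tableName) throws Exception {
    try (Admin admin = util.getConnection().getAdmin()) {
      // Forces the memstore -> .tmp -> committed HFile sequence seen above,
      // but for a user table rather than the master's local store.
      admin.flush(tableName);
    }
  }
}
```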
2023-07-12 19:17:38,975 DEBUG [Listener at localhost.localdomain/37875-EventThread] zookeeper.ZKWatcher(600): regionserver:42773-0x100829e11e40003, quorum=127.0.0.1:51847, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-12 19:17:38,975 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@6df4bd70] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@6df4bd70 2023-07-12 19:17:38,975 INFO [Listener at localhost.localdomain/37875] util.JVMClusterUtil(335): Shutdown of 1 master(s) and 3 regionserver(s) complete 2023-07-12 19:17:39,075 DEBUG [Listener at localhost.localdomain/37875-EventThread] zookeeper.ZKWatcher(600): master:40539-0x100829e11e40000, quorum=127.0.0.1:51847, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-12 19:17:39,075 INFO [M:0;jenkins-hbase20:40539] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase20.apache.org,40539,1689189455229; zookeeper connection closed. 2023-07-12 19:17:39,075 DEBUG [Listener at localhost.localdomain/37875-EventThread] zookeeper.ZKWatcher(600): master:40539-0x100829e11e40000, quorum=127.0.0.1:51847, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-12 19:17:39,076 WARN [Listener at localhost.localdomain/37875] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-12 19:17:39,080 INFO [Listener at localhost.localdomain/37875] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-12 19:17:39,185 WARN [BP-1502862989-148.251.75.209-1689189454435 heartbeating to localhost.localdomain/127.0.0.1:38007] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-12 19:17:39,185 WARN [BP-1502862989-148.251.75.209-1689189454435 heartbeating to localhost.localdomain/127.0.0.1:38007] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1502862989-148.251.75.209-1689189454435 (Datanode Uuid 67822a5b-6b2d-4bcc-80f8-a196403b515b) service to localhost.localdomain/127.0.0.1:38007 2023-07-12 19:17:39,186 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d9099bf2-01da-4522-5f69-2ec5a570bb0d/cluster_0ac67c39-3f7a-a514-5ccd-24446b69702a/dfs/data/data5/current/BP-1502862989-148.251.75.209-1689189454435] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-12 19:17:39,186 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d9099bf2-01da-4522-5f69-2ec5a570bb0d/cluster_0ac67c39-3f7a-a514-5ccd-24446b69702a/dfs/data/data6/current/BP-1502862989-148.251.75.209-1689189454435] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-12 19:17:39,188 WARN [Listener at localhost.localdomain/37875] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-12 19:17:39,192 INFO [Listener at localhost.localdomain/37875] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-12 19:17:39,296 WARN [BP-1502862989-148.251.75.209-1689189454435 heartbeating to localhost.localdomain/127.0.0.1:38007] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-12 19:17:39,296 WARN 
[BP-1502862989-148.251.75.209-1689189454435 heartbeating to localhost.localdomain/127.0.0.1:38007] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1502862989-148.251.75.209-1689189454435 (Datanode Uuid 7d35d8fb-23cc-4979-b994-c57640d611f6) service to localhost.localdomain/127.0.0.1:38007 2023-07-12 19:17:39,297 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d9099bf2-01da-4522-5f69-2ec5a570bb0d/cluster_0ac67c39-3f7a-a514-5ccd-24446b69702a/dfs/data/data3/current/BP-1502862989-148.251.75.209-1689189454435] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-12 19:17:39,297 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d9099bf2-01da-4522-5f69-2ec5a570bb0d/cluster_0ac67c39-3f7a-a514-5ccd-24446b69702a/dfs/data/data4/current/BP-1502862989-148.251.75.209-1689189454435] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-12 19:17:39,300 WARN [Listener at localhost.localdomain/37875] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-12 19:17:39,414 INFO [Listener at localhost.localdomain/37875] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-12 19:17:39,519 WARN [BP-1502862989-148.251.75.209-1689189454435 heartbeating to localhost.localdomain/127.0.0.1:38007] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-12 19:17:39,519 WARN [BP-1502862989-148.251.75.209-1689189454435 heartbeating to localhost.localdomain/127.0.0.1:38007] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1502862989-148.251.75.209-1689189454435 (Datanode Uuid 4570a816-a826-4ebd-81f1-c9aa10d62f4f) service to localhost.localdomain/127.0.0.1:38007 2023-07-12 19:17:39,520 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d9099bf2-01da-4522-5f69-2ec5a570bb0d/cluster_0ac67c39-3f7a-a514-5ccd-24446b69702a/dfs/data/data1/current/BP-1502862989-148.251.75.209-1689189454435] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-12 19:17:39,521 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d9099bf2-01da-4522-5f69-2ec5a570bb0d/cluster_0ac67c39-3f7a-a514-5ccd-24446b69702a/dfs/data/data2/current/BP-1502862989-148.251.75.209-1689189454435] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-12 19:17:39,546 INFO [Listener at localhost.localdomain/37875] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost.localdomain:0 2023-07-12 19:17:39,546 WARN [24591313@qtp-2100103808-1 - Acceptor0 HttpServer2$SelectChannelConnectorWithSafeStartup@localhost.localdomain:36789] http.HttpServer2$SelectChannelConnectorWithSafeStartup(546): HttpServer Acceptor: isRunning is false. Rechecking. 
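With the datanodes going down, the lines that follow show the MiniZooKeeperCluster being stopped, the utility declaring "Minicluster is down", and a fresh minicluster being started with the same topology (1 master, 3 region servers, 3 datanodes, 1 ZK server). A minimal sketch of that tear-down/restart cycle, assuming the standard HBaseTestingUtility API on branch-2.4 and mirroring the StartMiniClusterOption values printed below:

```java
import org.apache.hadoop.hbase.HBaseTestingUtility;
import org.apache.hadoop.hbase.StartMiniClusterOption;

public class MiniClusterRestartSketch {
  public static void main(String[] args) throws Exception {
    HBaseTestingUtility util = new HBaseTestingUtility();

    // Values mirror StartMiniClusterOption{numMasters=1, numRegionServers=3,
    // numDataNodes=3, numZkServers=1, createRootDir=false, createWALDir=false}
    // as logged when the new cluster comes up.
    StartMiniClusterOption option = StartMiniClusterOption.builder()
        .numMasters(1)
        .numRegionServers(3)
        .numDataNodes(3)
        .numZkServers(1)
        .createRootDir(false)
        .createWALDir(false)
        .build();

    util.startMiniCluster(option);   // "Starting up minicluster with option: ..."
    try {
      // ... test body ...
    } finally {
      util.shutdownMiniCluster();    // produces the RS/master/datanode shutdown lines above
    }
  }
}
```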
2023-07-12 19:17:39,548 WARN [24591313@qtp-2100103808-1 - Acceptor0 HttpServer2$SelectChannelConnectorWithSafeStartup@localhost.localdomain:36789] http.HttpServer2$SelectChannelConnectorWithSafeStartup(555): HttpServer Acceptor: isRunning is false 2023-07-12 19:17:39,574 INFO [Listener at localhost.localdomain/37875] zookeeper.MiniZooKeeperCluster(344): Shutdown MiniZK cluster with all ZK servers 2023-07-12 19:17:39,619 INFO [Listener at localhost.localdomain/37875] hbase.HBaseTestingUtility(1293): Minicluster is down 2023-07-12 19:17:39,619 INFO [Listener at localhost.localdomain/37875] hbase.HBaseTestingUtility(1068): Starting up minicluster with option: StartMiniClusterOption{numMasters=1, masterClass=null, numRegionServers=3, rsPorts=, rsClass=null, numDataNodes=3, dataNodeHosts=null, numZkServers=1, createRootDir=false, createWALDir=false} 2023-07-12 19:17:39,619 INFO [Listener at localhost.localdomain/37875] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.log.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d9099bf2-01da-4522-5f69-2ec5a570bb0d/hadoop.log.dir so I do NOT create it in target/test-data/ca399a11-a6b6-9528-4aae-916e50d2e9df 2023-07-12 19:17:39,619 INFO [Listener at localhost.localdomain/37875] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.tmp.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d9099bf2-01da-4522-5f69-2ec5a570bb0d/hadoop.tmp.dir so I do NOT create it in target/test-data/ca399a11-a6b6-9528-4aae-916e50d2e9df 2023-07-12 19:17:39,619 INFO [Listener at localhost.localdomain/37875] hbase.HBaseZKTestingUtility(82): Created new mini-cluster data directory: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/ca399a11-a6b6-9528-4aae-916e50d2e9df/cluster_b74c3081-5cb1-0696-14e6-b0ce033fbceb, deleteOnExit=true 2023-07-12 19:17:39,619 INFO [Listener at localhost.localdomain/37875] hbase.HBaseTestingUtility(1082): STARTING DFS 2023-07-12 19:17:39,619 INFO [Listener at localhost.localdomain/37875] hbase.HBaseTestingUtility(772): Setting test.cache.data to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/ca399a11-a6b6-9528-4aae-916e50d2e9df/test.cache.data in system properties and HBase conf 2023-07-12 19:17:39,620 INFO [Listener at localhost.localdomain/37875] hbase.HBaseTestingUtility(772): Setting hadoop.tmp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/ca399a11-a6b6-9528-4aae-916e50d2e9df/hadoop.tmp.dir in system properties and HBase conf 2023-07-12 19:17:39,620 INFO [Listener at localhost.localdomain/37875] hbase.HBaseTestingUtility(772): Setting hadoop.log.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/ca399a11-a6b6-9528-4aae-916e50d2e9df/hadoop.log.dir in system properties and HBase conf 2023-07-12 19:17:39,620 INFO [Listener at localhost.localdomain/37875] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.local.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/ca399a11-a6b6-9528-4aae-916e50d2e9df/mapreduce.cluster.local.dir in system properties and HBase conf 2023-07-12 19:17:39,620 INFO [Listener at localhost.localdomain/37875] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.temp.dir to 
/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/ca399a11-a6b6-9528-4aae-916e50d2e9df/mapreduce.cluster.temp.dir in system properties and HBase conf 2023-07-12 19:17:39,620 INFO [Listener at localhost.localdomain/37875] hbase.HBaseTestingUtility(759): read short circuit is OFF 2023-07-12 19:17:39,620 DEBUG [Listener at localhost.localdomain/37875] fs.HFileSystem(308): The file system is not a DistributedFileSystem. Skipping on block location reordering 2023-07-12 19:17:39,621 INFO [Listener at localhost.localdomain/37875] hbase.HBaseTestingUtility(772): Setting yarn.node-labels.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/ca399a11-a6b6-9528-4aae-916e50d2e9df/yarn.node-labels.fs-store.root-dir in system properties and HBase conf 2023-07-12 19:17:39,621 INFO [Listener at localhost.localdomain/37875] hbase.HBaseTestingUtility(772): Setting yarn.node-attribute.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/ca399a11-a6b6-9528-4aae-916e50d2e9df/yarn.node-attribute.fs-store.root-dir in system properties and HBase conf 2023-07-12 19:17:39,621 INFO [Listener at localhost.localdomain/37875] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.log-dirs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/ca399a11-a6b6-9528-4aae-916e50d2e9df/yarn.nodemanager.log-dirs in system properties and HBase conf 2023-07-12 19:17:39,621 INFO [Listener at localhost.localdomain/37875] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/ca399a11-a6b6-9528-4aae-916e50d2e9df/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-07-12 19:17:39,621 INFO [Listener at localhost.localdomain/37875] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.active-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/ca399a11-a6b6-9528-4aae-916e50d2e9df/yarn.timeline-service.entity-group-fs-store.active-dir in system properties and HBase conf 2023-07-12 19:17:39,622 INFO [Listener at localhost.localdomain/37875] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.done-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/ca399a11-a6b6-9528-4aae-916e50d2e9df/yarn.timeline-service.entity-group-fs-store.done-dir in system properties and HBase conf 2023-07-12 19:17:39,622 INFO [Listener at localhost.localdomain/37875] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/ca399a11-a6b6-9528-4aae-916e50d2e9df/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-07-12 19:17:39,622 INFO [Listener at localhost.localdomain/37875] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/ca399a11-a6b6-9528-4aae-916e50d2e9df/dfs.journalnode.edits.dir in system properties and HBase conf 2023-07-12 19:17:39,622 INFO [Listener at localhost.localdomain/37875] hbase.HBaseTestingUtility(772): Setting dfs.datanode.shared.file.descriptor.paths to 
/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/ca399a11-a6b6-9528-4aae-916e50d2e9df/dfs.datanode.shared.file.descriptor.paths in system properties and HBase conf 2023-07-12 19:17:39,622 INFO [Listener at localhost.localdomain/37875] hbase.HBaseTestingUtility(772): Setting nfs.dump.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/ca399a11-a6b6-9528-4aae-916e50d2e9df/nfs.dump.dir in system properties and HBase conf 2023-07-12 19:17:39,622 INFO [Listener at localhost.localdomain/37875] hbase.HBaseTestingUtility(772): Setting java.io.tmpdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/ca399a11-a6b6-9528-4aae-916e50d2e9df/java.io.tmpdir in system properties and HBase conf 2023-07-12 19:17:39,622 INFO [Listener at localhost.localdomain/37875] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/ca399a11-a6b6-9528-4aae-916e50d2e9df/dfs.journalnode.edits.dir in system properties and HBase conf 2023-07-12 19:17:39,623 INFO [Listener at localhost.localdomain/37875] hbase.HBaseTestingUtility(772): Setting dfs.provided.aliasmap.inmemory.leveldb.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/ca399a11-a6b6-9528-4aae-916e50d2e9df/dfs.provided.aliasmap.inmemory.leveldb.dir in system properties and HBase conf 2023-07-12 19:17:39,623 INFO [Listener at localhost.localdomain/37875] hbase.HBaseTestingUtility(772): Setting fs.s3a.committer.staging.tmp.path to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/ca399a11-a6b6-9528-4aae-916e50d2e9df/fs.s3a.committer.staging.tmp.path in system properties and HBase conf Formatting using clusterid: testClusterID 2023-07-12 19:17:39,626 WARN [Listener at localhost.localdomain/37875] conf.Configuration(1701): No unit for dfs.heartbeat.interval(3) assuming SECONDS 2023-07-12 19:17:39,626 WARN [Listener at localhost.localdomain/37875] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-07-12 19:17:39,654 WARN [Listener at localhost.localdomain/37875] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-namenode.properties,hadoop-metrics2.properties 2023-07-12 19:17:39,672 DEBUG [Listener at localhost.localdomain/37875-EventThread] zookeeper.ZKWatcher(600): VerifyingRSGroupAdminClient-0x100829e11e4000a, quorum=127.0.0.1:51847, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Disconnected, path=null 2023-07-12 19:17:39,672 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(630): VerifyingRSGroupAdminClient-0x100829e11e4000a, quorum=127.0.0.1:51847, baseZNode=/hbase Received Disconnected from ZooKeeper, ignoring 2023-07-12 19:17:39,697 WARN [Listener at localhost.localdomain/37875] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-12 19:17:39,699 INFO [Listener at localhost.localdomain/37875] log.Slf4jLog(67): jetty-6.1.26 2023-07-12 19:17:39,714 INFO [Listener at localhost.localdomain/37875] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/hdfs to 
/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/ca399a11-a6b6-9528-4aae-916e50d2e9df/java.io.tmpdir/Jetty_localhost_localdomain_43799_hdfs____.rk5iem/webapp 2023-07-12 19:17:39,801 INFO [Listener at localhost.localdomain/37875] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost.localdomain:43799 2023-07-12 19:17:39,805 WARN [Listener at localhost.localdomain/37875] conf.Configuration(1701): No unit for dfs.heartbeat.interval(3) assuming SECONDS 2023-07-12 19:17:39,805 WARN [Listener at localhost.localdomain/37875] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-07-12 19:17:39,835 WARN [Listener at localhost.localdomain/33609] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-12 19:17:39,852 WARN [Listener at localhost.localdomain/33609] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-12 19:17:39,854 WARN [Listener at localhost.localdomain/33609] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-12 19:17:39,855 INFO [Listener at localhost.localdomain/33609] log.Slf4jLog(67): jetty-6.1.26 2023-07-12 19:17:39,865 INFO [Listener at localhost.localdomain/33609] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/ca399a11-a6b6-9528-4aae-916e50d2e9df/java.io.tmpdir/Jetty_localhost_44961_datanode____6obh23/webapp 2023-07-12 19:17:39,969 INFO [Listener at localhost.localdomain/33609] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:44961 2023-07-12 19:17:39,980 WARN [Listener at localhost.localdomain/36233] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-12 19:17:40,027 WARN [Listener at localhost.localdomain/36233] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-12 19:17:40,030 WARN [Listener at localhost.localdomain/36233] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-12 19:17:40,032 INFO [Listener at localhost.localdomain/36233] log.Slf4jLog(67): jetty-6.1.26 2023-07-12 19:17:40,036 INFO [Listener at localhost.localdomain/36233] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/ca399a11-a6b6-9528-4aae-916e50d2e9df/java.io.tmpdir/Jetty_localhost_35929_datanode____.tiutmj/webapp 2023-07-12 19:17:40,074 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x86b182ead9e57d33: Processing first storage report for DS-5ffdd0de-8eb1-4fb9-b872-73b0848bc5e2 from datanode 3b9e3151-5102-4a61-8446-70808c09da13 2023-07-12 19:17:40,074 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x86b182ead9e57d33: from storage DS-5ffdd0de-8eb1-4fb9-b872-73b0848bc5e2 node DatanodeRegistration(127.0.0.1:43191, datanodeUuid=3b9e3151-5102-4a61-8446-70808c09da13, infoPort=44977, 
infoSecurePort=0, ipcPort=36233, storageInfo=lv=-57;cid=testClusterID;nsid=2090582121;c=1689189459628), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-12 19:17:40,074 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x86b182ead9e57d33: Processing first storage report for DS-d1aff6bd-3dc7-4754-a285-9594d1541137 from datanode 3b9e3151-5102-4a61-8446-70808c09da13 2023-07-12 19:17:40,074 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x86b182ead9e57d33: from storage DS-d1aff6bd-3dc7-4754-a285-9594d1541137 node DatanodeRegistration(127.0.0.1:43191, datanodeUuid=3b9e3151-5102-4a61-8446-70808c09da13, infoPort=44977, infoSecurePort=0, ipcPort=36233, storageInfo=lv=-57;cid=testClusterID;nsid=2090582121;c=1689189459628), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-12 19:17:40,129 INFO [Listener at localhost.localdomain/36233] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:35929 2023-07-12 19:17:40,137 WARN [Listener at localhost.localdomain/35741] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-12 19:17:40,151 WARN [Listener at localhost.localdomain/35741] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-12 19:17:40,155 WARN [Listener at localhost.localdomain/35741] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-12 19:17:40,156 INFO [Listener at localhost.localdomain/35741] log.Slf4jLog(67): jetty-6.1.26 2023-07-12 19:17:40,165 INFO [Listener at localhost.localdomain/35741] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/ca399a11-a6b6-9528-4aae-916e50d2e9df/java.io.tmpdir/Jetty_localhost_36671_datanode____qb8cdw/webapp 2023-07-12 19:17:40,217 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xd9c6543b4bfee124: Processing first storage report for DS-b88c3966-5697-4dc1-92ea-862ca1b952ab from datanode 7e65f84b-b291-4657-bc00-13657b48b0d9 2023-07-12 19:17:40,217 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xd9c6543b4bfee124: from storage DS-b88c3966-5697-4dc1-92ea-862ca1b952ab node DatanodeRegistration(127.0.0.1:42583, datanodeUuid=7e65f84b-b291-4657-bc00-13657b48b0d9, infoPort=42873, infoSecurePort=0, ipcPort=35741, storageInfo=lv=-57;cid=testClusterID;nsid=2090582121;c=1689189459628), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-12 19:17:40,217 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xd9c6543b4bfee124: Processing first storage report for DS-910057b3-bcaf-46fb-8e6c-db5ed066c92e from datanode 7e65f84b-b291-4657-bc00-13657b48b0d9 2023-07-12 19:17:40,217 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xd9c6543b4bfee124: from storage DS-910057b3-bcaf-46fb-8e6c-db5ed066c92e node DatanodeRegistration(127.0.0.1:42583, datanodeUuid=7e65f84b-b291-4657-bc00-13657b48b0d9, infoPort=42873, infoSecurePort=0, ipcPort=35741, storageInfo=lv=-57;cid=testClusterID;nsid=2090582121;c=1689189459628), blocks: 0, 
hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-12 19:17:40,267 INFO [Listener at localhost.localdomain/35741] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:36671 2023-07-12 19:17:40,276 WARN [Listener at localhost.localdomain/40989] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-12 19:17:40,350 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xb4770b6a32bc0c5d: Processing first storage report for DS-de90d26e-4113-4189-9ccb-c295550dc9c5 from datanode d00c467b-11db-4a39-af19-bc889094389b 2023-07-12 19:17:40,350 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xb4770b6a32bc0c5d: from storage DS-de90d26e-4113-4189-9ccb-c295550dc9c5 node DatanodeRegistration(127.0.0.1:42401, datanodeUuid=d00c467b-11db-4a39-af19-bc889094389b, infoPort=40299, infoSecurePort=0, ipcPort=40989, storageInfo=lv=-57;cid=testClusterID;nsid=2090582121;c=1689189459628), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-12 19:17:40,350 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xb4770b6a32bc0c5d: Processing first storage report for DS-d75d8c7e-9454-42ac-b61b-7a4da1486656 from datanode d00c467b-11db-4a39-af19-bc889094389b 2023-07-12 19:17:40,350 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xb4770b6a32bc0c5d: from storage DS-d75d8c7e-9454-42ac-b61b-7a4da1486656 node DatanodeRegistration(127.0.0.1:42401, datanodeUuid=d00c467b-11db-4a39-af19-bc889094389b, infoPort=40299, infoSecurePort=0, ipcPort=40989, storageInfo=lv=-57;cid=testClusterID;nsid=2090582121;c=1689189459628), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-12 19:17:40,408 DEBUG [Listener at localhost.localdomain/40989] hbase.HBaseTestingUtility(649): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/ca399a11-a6b6-9528-4aae-916e50d2e9df 2023-07-12 19:17:40,413 INFO [Listener at localhost.localdomain/40989] zookeeper.MiniZooKeeperCluster(258): Started connectionTimeout=30000, dir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/ca399a11-a6b6-9528-4aae-916e50d2e9df/cluster_b74c3081-5cb1-0696-14e6-b0ce033fbceb/zookeeper_0, clientPort=50438, secureClientPort=-1, dataDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/ca399a11-a6b6-9528-4aae-916e50d2e9df/cluster_b74c3081-5cb1-0696-14e6-b0ce033fbceb/zookeeper_0/version-2, dataDirSize=424 dataLogDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/ca399a11-a6b6-9528-4aae-916e50d2e9df/cluster_b74c3081-5cb1-0696-14e6-b0ce033fbceb/zookeeper_0/version-2, dataLogSize=424 tickTime=2000, maxClientCnxns=300, minSessionTimeout=4000, maxSessionTimeout=40000, serverId=0 2023-07-12 19:17:40,414 INFO [Listener at localhost.localdomain/40989] zookeeper.MiniZooKeeperCluster(283): Started MiniZooKeeperCluster and ran 'stat' on client port=50438 2023-07-12 19:17:40,414 INFO [Listener at localhost.localdomain/40989] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-12 19:17:40,415 INFO [Listener at localhost.localdomain/40989] 
fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-12 19:17:40,435 INFO [Listener at localhost.localdomain/40989] util.FSUtils(471): Created version file at hdfs://localhost.localdomain:33609/user/jenkins/test-data/d6326029-fefc-c726-4805-b208dab29d96 with version=8 2023-07-12 19:17:40,435 INFO [Listener at localhost.localdomain/40989] hbase.HBaseTestingUtility(1408): The hbase.fs.tmp.dir is set to hdfs://localhost.localdomain:43233/user/jenkins/test-data/de343eff-f44d-2323-159b-9a08e2c45fb0/hbase-staging 2023-07-12 19:17:40,437 DEBUG [Listener at localhost.localdomain/40989] hbase.LocalHBaseCluster(134): Setting Master Port to random. 2023-07-12 19:17:40,437 DEBUG [Listener at localhost.localdomain/40989] hbase.LocalHBaseCluster(141): Setting RegionServer Port to random. 2023-07-12 19:17:40,437 DEBUG [Listener at localhost.localdomain/40989] hbase.LocalHBaseCluster(151): Setting RS InfoServer Port to random. 2023-07-12 19:17:40,437 DEBUG [Listener at localhost.localdomain/40989] hbase.LocalHBaseCluster(159): Setting Master InfoServer Port to random. 2023-07-12 19:17:40,438 INFO [Listener at localhost.localdomain/40989] client.ConnectionUtils(127): master/jenkins-hbase20:0 server-side Connection retries=45 2023-07-12 19:17:40,438 INFO [Listener at localhost.localdomain/40989] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-12 19:17:40,438 INFO [Listener at localhost.localdomain/40989] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-12 19:17:40,438 INFO [Listener at localhost.localdomain/40989] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-12 19:17:40,438 INFO [Listener at localhost.localdomain/40989] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-12 19:17:40,438 INFO [Listener at localhost.localdomain/40989] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-12 19:17:40,438 INFO [Listener at localhost.localdomain/40989] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.MasterService, hbase.pb.RegionServerStatusService, hbase.pb.LockService, hbase.pb.HbckService, hbase.pb.ClientMetaService, hbase.pb.ClientService, hbase.pb.AdminService 2023-07-12 19:17:40,440 INFO [Listener at localhost.localdomain/40989] ipc.NettyRpcServer(120): Bind to /148.251.75.209:33451 2023-07-12 19:17:40,441 INFO [Listener at localhost.localdomain/40989] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-12 19:17:40,442 INFO [Listener at localhost.localdomain/40989] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-12 19:17:40,443 INFO [Listener at localhost.localdomain/40989] 
zookeeper.RecoverableZooKeeper(93): Process identifier=master:33451 connecting to ZooKeeper ensemble=127.0.0.1:50438 2023-07-12 19:17:40,455 DEBUG [Listener at localhost.localdomain/40989-EventThread] zookeeper.ZKWatcher(600): master:334510x0, quorum=127.0.0.1:50438, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-12 19:17:40,459 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): master:33451-0x100829e263a0000 connected 2023-07-12 19:17:40,482 DEBUG [Listener at localhost.localdomain/40989] zookeeper.ZKUtil(164): master:33451-0x100829e263a0000, quorum=127.0.0.1:50438, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-12 19:17:40,482 DEBUG [Listener at localhost.localdomain/40989] zookeeper.ZKUtil(164): master:33451-0x100829e263a0000, quorum=127.0.0.1:50438, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-12 19:17:40,482 DEBUG [Listener at localhost.localdomain/40989] zookeeper.ZKUtil(164): master:33451-0x100829e263a0000, quorum=127.0.0.1:50438, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-12 19:17:40,489 DEBUG [Listener at localhost.localdomain/40989] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=33451 2023-07-12 19:17:40,489 DEBUG [Listener at localhost.localdomain/40989] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=33451 2023-07-12 19:17:40,489 DEBUG [Listener at localhost.localdomain/40989] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=33451 2023-07-12 19:17:40,491 DEBUG [Listener at localhost.localdomain/40989] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=33451 2023-07-12 19:17:40,492 DEBUG [Listener at localhost.localdomain/40989] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=33451 2023-07-12 19:17:40,495 INFO [Listener at localhost.localdomain/40989] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-12 19:17:40,495 INFO [Listener at localhost.localdomain/40989] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-12 19:17:40,495 INFO [Listener at localhost.localdomain/40989] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-12 19:17:40,495 INFO [Listener at localhost.localdomain/40989] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context master 2023-07-12 19:17:40,495 INFO [Listener at localhost.localdomain/40989] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-12 19:17:40,495 INFO [Listener at localhost.localdomain/40989] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-12 19:17:40,496 INFO [Listener at localhost.localdomain/40989] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
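Every process in the new cluster finds its peers through the MiniZooKeeperCluster started above on client port 50438 (ensemble 127.0.0.1:50438 in the master's connection line). A client-side sketch of reaching the restarted cluster follows; in the real test the configuration comes from HBaseTestingUtility#getConfiguration(), so the hard-coded quorum and port here are only illustrative for this particular run:

```java
import java.util.Arrays;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class MiniClusterClientSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    // Values taken from this run's log; a real test would reuse the
    // utility's Configuration instead of hard-coding them.
    conf.set("hbase.zookeeper.quorum", "127.0.0.1");
    conf.setInt("hbase.zookeeper.property.clientPort", 50438);

    try (Connection connection = ConnectionFactory.createConnection(conf);
         Admin admin = connection.getAdmin()) {
      // For example, list the tables the fresh cluster currently serves.
      System.out.println(Arrays.toString(admin.listTableNames()));
    }
  }
}
```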
2023-07-12 19:17:40,496 INFO [Listener at localhost.localdomain/40989] http.HttpServer(1146): Jetty bound to port 37363 2023-07-12 19:17:40,496 INFO [Listener at localhost.localdomain/40989] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-12 19:17:40,502 INFO [Listener at localhost.localdomain/40989] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-12 19:17:40,503 INFO [Listener at localhost.localdomain/40989] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@63f0285f{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/ca399a11-a6b6-9528-4aae-916e50d2e9df/hadoop.log.dir/,AVAILABLE} 2023-07-12 19:17:40,503 INFO [Listener at localhost.localdomain/40989] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-12 19:17:40,503 INFO [Listener at localhost.localdomain/40989] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@2d11b748{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,AVAILABLE} 2023-07-12 19:17:40,510 INFO [Listener at localhost.localdomain/40989] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-12 19:17:40,512 INFO [Listener at localhost.localdomain/40989] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-12 19:17:40,512 INFO [Listener at localhost.localdomain/40989] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-12 19:17:40,512 INFO [Listener at localhost.localdomain/40989] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-12 19:17:40,514 INFO [Listener at localhost.localdomain/40989] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-12 19:17:40,515 INFO [Listener at localhost.localdomain/40989] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@7bb54568{master,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/master/,AVAILABLE}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/master} 2023-07-12 19:17:40,516 INFO [Listener at localhost.localdomain/40989] server.AbstractConnector(333): Started ServerConnector@48b9cd26{HTTP/1.1, (http/1.1)}{0.0.0.0:37363} 2023-07-12 19:17:40,516 INFO [Listener at localhost.localdomain/40989] server.Server(415): Started @42489ms 2023-07-12 19:17:40,516 INFO [Listener at localhost.localdomain/40989] master.HMaster(444): hbase.rootdir=hdfs://localhost.localdomain:33609/user/jenkins/test-data/d6326029-fefc-c726-4805-b208dab29d96, hbase.cluster.distributed=false 2023-07-12 19:17:40,533 INFO [Listener at localhost.localdomain/40989] client.ConnectionUtils(127): regionserver/jenkins-hbase20:0 server-side Connection retries=45 2023-07-12 19:17:40,533 INFO [Listener at localhost.localdomain/40989] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-12 19:17:40,533 INFO [Listener at localhost.localdomain/40989] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, 
maxQueueLength=30, handlerCount=3 2023-07-12 19:17:40,533 INFO [Listener at localhost.localdomain/40989] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-12 19:17:40,533 INFO [Listener at localhost.localdomain/40989] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-12 19:17:40,534 INFO [Listener at localhost.localdomain/40989] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-12 19:17:40,534 INFO [Listener at localhost.localdomain/40989] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-12 19:17:40,536 INFO [Listener at localhost.localdomain/40989] ipc.NettyRpcServer(120): Bind to /148.251.75.209:38393 2023-07-12 19:17:40,537 INFO [Listener at localhost.localdomain/40989] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-12 19:17:40,541 DEBUG [Listener at localhost.localdomain/40989] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-12 19:17:40,542 INFO [Listener at localhost.localdomain/40989] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-12 19:17:40,544 INFO [Listener at localhost.localdomain/40989] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-12 19:17:40,545 INFO [Listener at localhost.localdomain/40989] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:38393 connecting to ZooKeeper ensemble=127.0.0.1:50438 2023-07-12 19:17:40,550 DEBUG [Listener at localhost.localdomain/40989-EventThread] zookeeper.ZKWatcher(600): regionserver:383930x0, quorum=127.0.0.1:50438, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-12 19:17:40,552 DEBUG [Listener at localhost.localdomain/40989] zookeeper.ZKUtil(164): regionserver:383930x0, quorum=127.0.0.1:50438, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-12 19:17:40,552 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:38393-0x100829e263a0001 connected 2023-07-12 19:17:40,553 DEBUG [Listener at localhost.localdomain/40989] zookeeper.ZKUtil(164): regionserver:38393-0x100829e263a0001, quorum=127.0.0.1:50438, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-12 19:17:40,553 DEBUG [Listener at localhost.localdomain/40989] zookeeper.ZKUtil(164): regionserver:38393-0x100829e263a0001, quorum=127.0.0.1:50438, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-12 19:17:40,557 DEBUG [Listener at localhost.localdomain/40989] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=38393 2023-07-12 19:17:40,562 DEBUG [Listener at localhost.localdomain/40989] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=38393 2023-07-12 19:17:40,563 DEBUG [Listener at 
localhost.localdomain/40989] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=38393 2023-07-12 19:17:40,564 DEBUG [Listener at localhost.localdomain/40989] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=38393 2023-07-12 19:17:40,564 DEBUG [Listener at localhost.localdomain/40989] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=38393 2023-07-12 19:17:40,566 INFO [Listener at localhost.localdomain/40989] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-12 19:17:40,567 INFO [Listener at localhost.localdomain/40989] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-12 19:17:40,567 INFO [Listener at localhost.localdomain/40989] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-12 19:17:40,568 INFO [Listener at localhost.localdomain/40989] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-12 19:17:40,568 INFO [Listener at localhost.localdomain/40989] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-12 19:17:40,568 INFO [Listener at localhost.localdomain/40989] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-12 19:17:40,568 INFO [Listener at localhost.localdomain/40989] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
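The remaining lines build the second and third region servers the same way: an RPC server bound to a random port, a ZooKeeper session, and a Jetty info server. Once HBaseTestingUtility#startMiniCluster returns, tests typically block until the master is active and hbase:meta is assigned before issuing admin calls. A sketch, assuming util is the utility instance that performed the restart above:

```java
import org.apache.hadoop.hbase.HBaseTestingUtility;
import org.apache.hadoop.hbase.MiniHBaseCluster;
import org.apache.hadoop.hbase.TableName;

public final class ClusterReadySketch {
  private ClusterReadySketch() {}

  // util is assumed to be the HBaseTestingUtility that started the cluster.
  static void awaitClusterReady(HBaseTestingUtility util) throws Exception {
    MiniHBaseCluster cluster = util.getMiniHBaseCluster();
    cluster.waitForActiveAndReadyMaster();                        // master finished initialization
    util.waitUntilAllRegionsAssigned(TableName.META_TABLE_NAME);  // hbase:meta is online
    // From here the test can safely create tables, move servers between
    // rsgroups, and so on, through util.getAdmin().
  }
}
```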
2023-07-12 19:17:40,569 INFO [Listener at localhost.localdomain/40989] http.HttpServer(1146): Jetty bound to port 45991 2023-07-12 19:17:40,569 INFO [Listener at localhost.localdomain/40989] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-12 19:17:40,573 INFO [Listener at localhost.localdomain/40989] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-12 19:17:40,574 INFO [Listener at localhost.localdomain/40989] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@6cf75b18{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/ca399a11-a6b6-9528-4aae-916e50d2e9df/hadoop.log.dir/,AVAILABLE} 2023-07-12 19:17:40,574 INFO [Listener at localhost.localdomain/40989] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-12 19:17:40,575 INFO [Listener at localhost.localdomain/40989] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@2f62e854{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,AVAILABLE} 2023-07-12 19:17:40,582 INFO [Listener at localhost.localdomain/40989] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-12 19:17:40,583 INFO [Listener at localhost.localdomain/40989] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-12 19:17:40,583 INFO [Listener at localhost.localdomain/40989] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-12 19:17:40,584 INFO [Listener at localhost.localdomain/40989] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-12 19:17:40,585 INFO [Listener at localhost.localdomain/40989] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-12 19:17:40,586 INFO [Listener at localhost.localdomain/40989] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@a6b2912{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver/,AVAILABLE}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver} 2023-07-12 19:17:40,589 INFO [Listener at localhost.localdomain/40989] server.AbstractConnector(333): Started ServerConnector@181666e{HTTP/1.1, (http/1.1)}{0.0.0.0:45991} 2023-07-12 19:17:40,589 INFO [Listener at localhost.localdomain/40989] server.Server(415): Started @42561ms 2023-07-12 19:17:40,612 INFO [Listener at localhost.localdomain/40989] client.ConnectionUtils(127): regionserver/jenkins-hbase20:0 server-side Connection retries=45 2023-07-12 19:17:40,612 INFO [Listener at localhost.localdomain/40989] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-12 19:17:40,612 INFO [Listener at localhost.localdomain/40989] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-12 19:17:40,613 INFO [Listener at localhost.localdomain/40989] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 
scanHandlers=0 2023-07-12 19:17:40,613 INFO [Listener at localhost.localdomain/40989] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-12 19:17:40,613 INFO [Listener at localhost.localdomain/40989] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-12 19:17:40,613 INFO [Listener at localhost.localdomain/40989] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-12 19:17:40,616 INFO [Listener at localhost.localdomain/40989] ipc.NettyRpcServer(120): Bind to /148.251.75.209:33397 2023-07-12 19:17:40,616 INFO [Listener at localhost.localdomain/40989] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-12 19:17:40,630 DEBUG [Listener at localhost.localdomain/40989] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-12 19:17:40,631 INFO [Listener at localhost.localdomain/40989] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-12 19:17:40,632 INFO [Listener at localhost.localdomain/40989] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-12 19:17:40,633 INFO [Listener at localhost.localdomain/40989] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:33397 connecting to ZooKeeper ensemble=127.0.0.1:50438 2023-07-12 19:17:40,636 DEBUG [Listener at localhost.localdomain/40989-EventThread] zookeeper.ZKWatcher(600): regionserver:333970x0, quorum=127.0.0.1:50438, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-12 19:17:40,638 DEBUG [Listener at localhost.localdomain/40989] zookeeper.ZKUtil(164): regionserver:333970x0, quorum=127.0.0.1:50438, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-12 19:17:40,638 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:33397-0x100829e263a0002 connected 2023-07-12 19:17:40,638 DEBUG [Listener at localhost.localdomain/40989] zookeeper.ZKUtil(164): regionserver:33397-0x100829e263a0002, quorum=127.0.0.1:50438, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-12 19:17:40,639 DEBUG [Listener at localhost.localdomain/40989] zookeeper.ZKUtil(164): regionserver:33397-0x100829e263a0002, quorum=127.0.0.1:50438, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-12 19:17:40,642 DEBUG [Listener at localhost.localdomain/40989] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=33397 2023-07-12 19:17:40,644 DEBUG [Listener at localhost.localdomain/40989] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=33397 2023-07-12 19:17:40,644 DEBUG [Listener at localhost.localdomain/40989] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=33397 2023-07-12 19:17:40,647 DEBUG [Listener at localhost.localdomain/40989] ipc.RpcExecutor(311): 
Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=33397 2023-07-12 19:17:40,647 DEBUG [Listener at localhost.localdomain/40989] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=33397 2023-07-12 19:17:40,649 INFO [Listener at localhost.localdomain/40989] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-12 19:17:40,649 INFO [Listener at localhost.localdomain/40989] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-12 19:17:40,649 INFO [Listener at localhost.localdomain/40989] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-12 19:17:40,650 INFO [Listener at localhost.localdomain/40989] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-12 19:17:40,650 INFO [Listener at localhost.localdomain/40989] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-12 19:17:40,650 INFO [Listener at localhost.localdomain/40989] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-12 19:17:40,650 INFO [Listener at localhost.localdomain/40989] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-12 19:17:40,650 INFO [Listener at localhost.localdomain/40989] http.HttpServer(1146): Jetty bound to port 42455 2023-07-12 19:17:40,650 INFO [Listener at localhost.localdomain/40989] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-12 19:17:40,656 INFO [Listener at localhost.localdomain/40989] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-12 19:17:40,656 INFO [Listener at localhost.localdomain/40989] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@3442f331{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/ca399a11-a6b6-9528-4aae-916e50d2e9df/hadoop.log.dir/,AVAILABLE} 2023-07-12 19:17:40,657 INFO [Listener at localhost.localdomain/40989] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-12 19:17:40,657 INFO [Listener at localhost.localdomain/40989] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@77f2188c{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,AVAILABLE} 2023-07-12 19:17:40,664 INFO [Listener at localhost.localdomain/40989] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-12 19:17:40,665 INFO [Listener at localhost.localdomain/40989] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-12 19:17:40,665 INFO [Listener at localhost.localdomain/40989] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-12 19:17:40,665 INFO [Listener at 
localhost.localdomain/40989] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-12 19:17:40,666 INFO [Listener at localhost.localdomain/40989] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-12 19:17:40,667 INFO [Listener at localhost.localdomain/40989] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@6db27505{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver/,AVAILABLE}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver} 2023-07-12 19:17:40,669 INFO [Listener at localhost.localdomain/40989] server.AbstractConnector(333): Started ServerConnector@4d47d773{HTTP/1.1, (http/1.1)}{0.0.0.0:42455} 2023-07-12 19:17:40,669 INFO [Listener at localhost.localdomain/40989] server.Server(415): Started @42642ms 2023-07-12 19:17:40,679 INFO [Listener at localhost.localdomain/40989] client.ConnectionUtils(127): regionserver/jenkins-hbase20:0 server-side Connection retries=45 2023-07-12 19:17:40,679 INFO [Listener at localhost.localdomain/40989] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-12 19:17:40,679 INFO [Listener at localhost.localdomain/40989] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-12 19:17:40,679 INFO [Listener at localhost.localdomain/40989] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-12 19:17:40,679 INFO [Listener at localhost.localdomain/40989] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-12 19:17:40,680 INFO [Listener at localhost.localdomain/40989] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-12 19:17:40,680 INFO [Listener at localhost.localdomain/40989] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-12 19:17:40,681 INFO [Listener at localhost.localdomain/40989] ipc.NettyRpcServer(120): Bind to /148.251.75.209:46241 2023-07-12 19:17:40,682 INFO [Listener at localhost.localdomain/40989] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-12 19:17:40,684 DEBUG [Listener at localhost.localdomain/40989] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-12 19:17:40,685 INFO [Listener at localhost.localdomain/40989] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-12 19:17:40,686 INFO [Listener at localhost.localdomain/40989] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-12 19:17:40,686 INFO [Listener at localhost.localdomain/40989] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:46241 
connecting to ZooKeeper ensemble=127.0.0.1:50438 2023-07-12 19:17:40,690 DEBUG [Listener at localhost.localdomain/40989-EventThread] zookeeper.ZKWatcher(600): regionserver:462410x0, quorum=127.0.0.1:50438, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-12 19:17:40,692 DEBUG [Listener at localhost.localdomain/40989] zookeeper.ZKUtil(164): regionserver:462410x0, quorum=127.0.0.1:50438, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-12 19:17:40,692 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:46241-0x100829e263a0003 connected 2023-07-12 19:17:40,692 DEBUG [Listener at localhost.localdomain/40989] zookeeper.ZKUtil(164): regionserver:46241-0x100829e263a0003, quorum=127.0.0.1:50438, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-12 19:17:40,693 DEBUG [Listener at localhost.localdomain/40989] zookeeper.ZKUtil(164): regionserver:46241-0x100829e263a0003, quorum=127.0.0.1:50438, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-12 19:17:40,698 DEBUG [Listener at localhost.localdomain/40989] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=46241 2023-07-12 19:17:40,699 DEBUG [Listener at localhost.localdomain/40989] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=46241 2023-07-12 19:17:40,699 DEBUG [Listener at localhost.localdomain/40989] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=46241 2023-07-12 19:17:40,701 DEBUG [Listener at localhost.localdomain/40989] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=46241 2023-07-12 19:17:40,701 DEBUG [Listener at localhost.localdomain/40989] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=46241 2023-07-12 19:17:40,703 INFO [Listener at localhost.localdomain/40989] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-12 19:17:40,704 INFO [Listener at localhost.localdomain/40989] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-12 19:17:40,704 INFO [Listener at localhost.localdomain/40989] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-12 19:17:40,704 INFO [Listener at localhost.localdomain/40989] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-12 19:17:40,704 INFO [Listener at localhost.localdomain/40989] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-12 19:17:40,705 INFO [Listener at localhost.localdomain/40989] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-12 19:17:40,705 INFO [Listener at localhost.localdomain/40989] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
2023-07-12 19:17:40,705 INFO [Listener at localhost.localdomain/40989] http.HttpServer(1146): Jetty bound to port 42095 2023-07-12 19:17:40,706 INFO [Listener at localhost.localdomain/40989] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-12 19:17:40,711 INFO [Listener at localhost.localdomain/40989] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-12 19:17:40,711 INFO [Listener at localhost.localdomain/40989] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@21511886{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/ca399a11-a6b6-9528-4aae-916e50d2e9df/hadoop.log.dir/,AVAILABLE} 2023-07-12 19:17:40,712 INFO [Listener at localhost.localdomain/40989] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-12 19:17:40,712 INFO [Listener at localhost.localdomain/40989] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@153040b5{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,AVAILABLE} 2023-07-12 19:17:40,719 INFO [Listener at localhost.localdomain/40989] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-12 19:17:40,720 INFO [Listener at localhost.localdomain/40989] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-12 19:17:40,720 INFO [Listener at localhost.localdomain/40989] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-12 19:17:40,721 INFO [Listener at localhost.localdomain/40989] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-12 19:17:40,722 INFO [Listener at localhost.localdomain/40989] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-12 19:17:40,723 INFO [Listener at localhost.localdomain/40989] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@17f73ce5{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver/,AVAILABLE}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver} 2023-07-12 19:17:40,724 INFO [Listener at localhost.localdomain/40989] server.AbstractConnector(333): Started ServerConnector@32e52fef{HTTP/1.1, (http/1.1)}{0.0.0.0:42095} 2023-07-12 19:17:40,725 INFO [Listener at localhost.localdomain/40989] server.Server(415): Started @42697ms 2023-07-12 19:17:40,727 INFO [master/jenkins-hbase20:0:becomeActiveMaster] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-12 19:17:40,733 INFO [master/jenkins-hbase20:0:becomeActiveMaster] server.AbstractConnector(333): Started ServerConnector@1063457c{HTTP/1.1, (http/1.1)}{0.0.0.0:44109} 2023-07-12 19:17:40,733 INFO [master/jenkins-hbase20:0:becomeActiveMaster] server.Server(415): Started @42706ms 2023-07-12 19:17:40,733 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(2168): Adding backup master ZNode /hbase/backup-masters/jenkins-hbase20.apache.org,33451,1689189460437 2023-07-12 19:17:40,734 DEBUG [Listener at localhost.localdomain/40989-EventThread] zookeeper.ZKWatcher(600): 
master:33451-0x100829e263a0000, quorum=127.0.0.1:50438, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-07-12 19:17:40,735 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:33451-0x100829e263a0000, quorum=127.0.0.1:50438, baseZNode=/hbase Set watcher on existing znode=/hbase/backup-masters/jenkins-hbase20.apache.org,33451,1689189460437 2023-07-12 19:17:40,735 DEBUG [Listener at localhost.localdomain/40989-EventThread] zookeeper.ZKWatcher(600): regionserver:33397-0x100829e263a0002, quorum=127.0.0.1:50438, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-12 19:17:40,735 DEBUG [Listener at localhost.localdomain/40989-EventThread] zookeeper.ZKWatcher(600): regionserver:46241-0x100829e263a0003, quorum=127.0.0.1:50438, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-12 19:17:40,735 DEBUG [Listener at localhost.localdomain/40989-EventThread] zookeeper.ZKWatcher(600): master:33451-0x100829e263a0000, quorum=127.0.0.1:50438, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-12 19:17:40,735 DEBUG [Listener at localhost.localdomain/40989-EventThread] zookeeper.ZKWatcher(600): regionserver:38393-0x100829e263a0001, quorum=127.0.0.1:50438, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-12 19:17:40,737 DEBUG [Listener at localhost.localdomain/40989-EventThread] zookeeper.ZKWatcher(600): master:33451-0x100829e263a0000, quorum=127.0.0.1:50438, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-12 19:17:40,738 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:33451-0x100829e263a0000, quorum=127.0.0.1:50438, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-12 19:17:40,740 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.ActiveMasterManager(227): Deleting ZNode for /hbase/backup-masters/jenkins-hbase20.apache.org,33451,1689189460437 from backup master directory 2023-07-12 19:17:40,741 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): master:33451-0x100829e263a0000, quorum=127.0.0.1:50438, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-12 19:17:40,749 DEBUG [Listener at localhost.localdomain/40989-EventThread] zookeeper.ZKWatcher(600): master:33451-0x100829e263a0000, quorum=127.0.0.1:50438, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/backup-masters/jenkins-hbase20.apache.org,33451,1689189460437 2023-07-12 19:17:40,750 DEBUG [Listener at localhost.localdomain/40989-EventThread] zookeeper.ZKWatcher(600): master:33451-0x100829e263a0000, quorum=127.0.0.1:50438, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-07-12 19:17:40,750 WARN [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
2023-07-12 19:17:40,750 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.ActiveMasterManager(237): Registered as active master=jenkins-hbase20.apache.org,33451,1689189460437 2023-07-12 19:17:40,816 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] util.FSUtils(620): Created cluster ID file at hdfs://localhost.localdomain:33609/user/jenkins/test-data/d6326029-fefc-c726-4805-b208dab29d96/hbase.id with ID: 757dbf3b-f9a9-42a7-9302-a31bb86cea2b 2023-07-12 19:17:40,862 INFO [master/jenkins-hbase20:0:becomeActiveMaster] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-12 19:17:40,865 DEBUG [Listener at localhost.localdomain/40989-EventThread] zookeeper.ZKWatcher(600): master:33451-0x100829e263a0000, quorum=127.0.0.1:50438, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-12 19:17:40,946 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ReadOnlyZKClient(139): Connect 0x58259eb7 to 127.0.0.1:50438 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-12 19:17:40,954 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@346b403f, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-12 19:17:40,954 INFO [master/jenkins-hbase20:0:becomeActiveMaster] region.MasterRegion(309): Create or load local region for table 'master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-12 19:17:40,955 INFO [master/jenkins-hbase20:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(132): Injected flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000 2023-07-12 19:17:40,958 INFO [master/jenkins-hbase20:0:becomeActiveMaster] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-12 19:17:40,961 INFO [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(7693): Creating {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, under table dir hdfs://localhost.localdomain:33609/user/jenkins/test-data/d6326029-fefc-c726-4805-b208dab29d96/MasterData/data/master/store-tmp 2023-07-12 19:17:40,978 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 19:17:40,978 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-07-12 19:17:40,978 INFO 
[master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-12 19:17:40,978 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-12 19:17:40,978 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-07-12 19:17:40,978 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-12 19:17:40,978 INFO [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-12 19:17:40,978 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-12 19:17:40,979 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] region.MasterRegion(191): WALDir=hdfs://localhost.localdomain:33609/user/jenkins/test-data/d6326029-fefc-c726-4805-b208dab29d96/MasterData/WALs/jenkins-hbase20.apache.org,33451,1689189460437 2023-07-12 19:17:40,982 INFO [master/jenkins-hbase20:0:becomeActiveMaster] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase20.apache.org%2C33451%2C1689189460437, suffix=, logDir=hdfs://localhost.localdomain:33609/user/jenkins/test-data/d6326029-fefc-c726-4805-b208dab29d96/MasterData/WALs/jenkins-hbase20.apache.org,33451,1689189460437, archiveDir=hdfs://localhost.localdomain:33609/user/jenkins/test-data/d6326029-fefc-c726-4805-b208dab29d96/MasterData/oldWALs, maxLogs=10 2023-07-12 19:17:41,004 DEBUG [RS-EventLoopGroup-15-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:42401,DS-de90d26e-4113-4189-9ccb-c295550dc9c5,DISK] 2023-07-12 19:17:41,006 DEBUG [RS-EventLoopGroup-15-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:42583,DS-b88c3966-5697-4dc1-92ea-862ca1b952ab,DISK] 2023-07-12 19:17:41,006 DEBUG [RS-EventLoopGroup-15-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:43191,DS-5ffdd0de-8eb1-4fb9-b872-73b0848bc5e2,DISK] 2023-07-12 19:17:41,014 INFO [master/jenkins-hbase20:0:becomeActiveMaster] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/d6326029-fefc-c726-4805-b208dab29d96/MasterData/WALs/jenkins-hbase20.apache.org,33451,1689189460437/jenkins-hbase20.apache.org%2C33451%2C1689189460437.1689189460982 2023-07-12 19:17:41,014 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:42401,DS-de90d26e-4113-4189-9ccb-c295550dc9c5,DISK], DatanodeInfoWithStorage[127.0.0.1:42583,DS-b88c3966-5697-4dc1-92ea-862ca1b952ab,DISK], DatanodeInfoWithStorage[127.0.0.1:43191,DS-5ffdd0de-8eb1-4fb9-b872-73b0848bc5e2,DISK]] 2023-07-12 19:17:41,014 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(7854): Opening region: {ENCODED => 
1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''} 2023-07-12 19:17:41,015 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 19:17:41,015 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(7894): checking encryption for 1595e783b53d99cd5eef43b6debb2682 2023-07-12 19:17:41,015 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(7897): checking classloading for 1595e783b53d99cd5eef43b6debb2682 2023-07-12 19:17:41,019 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family proc of region 1595e783b53d99cd5eef43b6debb2682 2023-07-12 19:17:41,021 DEBUG [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:33609/user/jenkins/test-data/d6326029-fefc-c726-4805-b208dab29d96/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc 2023-07-12 19:17:41,021 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1595e783b53d99cd5eef43b6debb2682 columnFamilyName proc 2023-07-12 19:17:41,022 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(310): Store=1595e783b53d99cd5eef43b6debb2682/proc, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 19:17:41,023 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:33609/user/jenkins/test-data/d6326029-fefc-c726-4805-b208dab29d96/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-07-12 19:17:41,024 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:33609/user/jenkins/test-data/d6326029-fefc-c726-4805-b208dab29d96/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-07-12 19:17:41,027 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1055): writing seq id for 1595e783b53d99cd5eef43b6debb2682 2023-07-12 19:17:41,031 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:33609/user/jenkins/test-data/d6326029-fefc-c726-4805-b208dab29d96/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.edits/1.seqid, newMaxSeqId=1, 
maxSeqId=-1 2023-07-12 19:17:41,032 INFO [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1072): Opened 1595e783b53d99cd5eef43b6debb2682; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10333393920, jitterRate=-0.0376276969909668}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-12 19:17:41,032 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(965): Region open journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-12 19:17:41,032 INFO [master/jenkins-hbase20:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(122): Constructor flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000, compactMin=4 2023-07-12 19:17:41,034 INFO [master/jenkins-hbase20:0:becomeActiveMaster] region.RegionProcedureStore(104): Starting the Region Procedure Store, number threads=5 2023-07-12 19:17:41,034 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(562): Starting 5 core workers (bigger of cpus/4 or 16) with max (burst) worker count=50 2023-07-12 19:17:41,034 INFO [master/jenkins-hbase20:0:becomeActiveMaster] region.RegionProcedureStore(255): Starting Region Procedure Store lease recovery... 2023-07-12 19:17:41,034 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(582): Recovered RegionProcedureStore lease in 0 msec 2023-07-12 19:17:41,034 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(596): Loaded RegionProcedureStore in 0 msec 2023-07-12 19:17:41,035 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.RemoteProcedureDispatcher(96): Instantiated, coreThreads=3 (allowCoreThreadTimeOut=true), queueMaxSize=32, operationDelay=150 2023-07-12 19:17:41,039 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] assignment.AssignmentManager(253): hbase:meta replica znodes: [] 2023-07-12 19:17:41,041 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.RegionServerTracker(124): Starting RegionServerTracker; 0 have existing ServerCrashProcedures, 0 possibly 'live' servers, and 0 'splitting'. 
2023-07-12 19:17:41,041 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:33451-0x100829e263a0000, quorum=127.0.0.1:50438, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/balancer 2023-07-12 19:17:41,042 INFO [master/jenkins-hbase20:0:becomeActiveMaster] normalizer.RegionNormalizerWorker(118): Normalizer rate limit set to unlimited 2023-07-12 19:17:41,042 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:33451-0x100829e263a0000, quorum=127.0.0.1:50438, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/normalizer 2023-07-12 19:17:41,043 DEBUG [Listener at localhost.localdomain/40989-EventThread] zookeeper.ZKWatcher(600): master:33451-0x100829e263a0000, quorum=127.0.0.1:50438, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-12 19:17:41,043 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:33451-0x100829e263a0000, quorum=127.0.0.1:50438, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/split 2023-07-12 19:17:41,044 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:33451-0x100829e263a0000, quorum=127.0.0.1:50438, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/merge 2023-07-12 19:17:41,044 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:33451-0x100829e263a0000, quorum=127.0.0.1:50438, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/snapshot-cleanup 2023-07-12 19:17:41,045 DEBUG [Listener at localhost.localdomain/40989-EventThread] zookeeper.ZKWatcher(600): regionserver:33397-0x100829e263a0002, quorum=127.0.0.1:50438, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-12 19:17:41,045 DEBUG [Listener at localhost.localdomain/40989-EventThread] zookeeper.ZKWatcher(600): regionserver:38393-0x100829e263a0001, quorum=127.0.0.1:50438, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-12 19:17:41,045 DEBUG [Listener at localhost.localdomain/40989-EventThread] zookeeper.ZKWatcher(600): regionserver:46241-0x100829e263a0003, quorum=127.0.0.1:50438, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-12 19:17:41,045 DEBUG [Listener at localhost.localdomain/40989-EventThread] zookeeper.ZKWatcher(600): master:33451-0x100829e263a0000, quorum=127.0.0.1:50438, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-12 19:17:41,045 DEBUG [Listener at localhost.localdomain/40989-EventThread] zookeeper.ZKWatcher(600): master:33451-0x100829e263a0000, quorum=127.0.0.1:50438, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-12 19:17:41,045 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(744): Active/primary master=jenkins-hbase20.apache.org,33451,1689189460437, sessionid=0x100829e263a0000, setting cluster-up flag (Was=false) 2023-07-12 19:17:41,050 DEBUG [Listener at localhost.localdomain/40989-EventThread] zookeeper.ZKWatcher(600): master:33451-0x100829e263a0000, quorum=127.0.0.1:50438, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-12 19:17:41,057 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] 
procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/flush-table-proc/acquired, /hbase/flush-table-proc/reached, /hbase/flush-table-proc/abort 2023-07-12 19:17:41,057 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase20.apache.org,33451,1689189460437 2023-07-12 19:17:41,060 DEBUG [Listener at localhost.localdomain/40989-EventThread] zookeeper.ZKWatcher(600): master:33451-0x100829e263a0000, quorum=127.0.0.1:50438, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-12 19:17:41,061 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/online-snapshot/acquired, /hbase/online-snapshot/reached, /hbase/online-snapshot/abort 2023-07-12 19:17:41,062 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase20.apache.org,33451,1689189460437 2023-07-12 19:17:41,063 WARN [master/jenkins-hbase20:0:becomeActiveMaster] snapshot.SnapshotManager(302): Couldn't delete working snapshot directory: hdfs://localhost.localdomain:33609/user/jenkins/test-data/d6326029-fefc-c726-4805-b208dab29d96/.hbase-snapshot/.tmp 2023-07-12 19:17:41,063 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(2930): Registered master coprocessor service: service=RSGroupAdminService 2023-07-12 19:17:41,063 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] rsgroup.RSGroupInfoManagerImpl(537): Refreshing in Offline mode. 2023-07-12 19:17:41,064 INFO [master/jenkins-hbase20:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint loaded, priority=536870911. 2023-07-12 19:17:41,064 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase20.apache.org,33451,1689189460437] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-12 19:17:41,065 INFO [master/jenkins-hbase20:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver loaded, priority=536870912. 2023-07-12 19:17:41,065 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT; InitMetaProcedure table=hbase:meta 2023-07-12 19:17:41,079 INFO [master/jenkins-hbase20:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-12 19:17:41,079 INFO [master/jenkins-hbase20:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 
2023-07-12 19:17:41,079 INFO [master/jenkins-hbase20:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-12 19:17:41,079 INFO [master/jenkins-hbase20:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 2023-07-12 19:17:41,079 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_OPEN_REGION-master/jenkins-hbase20:0, corePoolSize=5, maxPoolSize=5 2023-07-12 19:17:41,079 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_CLOSE_REGION-master/jenkins-hbase20:0, corePoolSize=5, maxPoolSize=5 2023-07-12 19:17:41,079 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SERVER_OPERATIONS-master/jenkins-hbase20:0, corePoolSize=5, maxPoolSize=5 2023-07-12 19:17:41,080 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_META_SERVER_OPERATIONS-master/jenkins-hbase20:0, corePoolSize=5, maxPoolSize=5 2023-07-12 19:17:41,080 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=M_LOG_REPLAY_OPS-master/jenkins-hbase20:0, corePoolSize=10, maxPoolSize=10 2023-07-12 19:17:41,083 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-12 19:17:41,084 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_MERGE_OPERATIONS-master/jenkins-hbase20:0, corePoolSize=2, maxPoolSize=2 2023-07-12 19:17:41,084 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_TABLE_OPERATIONS-master/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-12 19:17:41,088 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner; timeout=30000, timestamp=1689189491088 2023-07-12 19:17:41,088 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.DirScanPool(70): log_cleaner Cleaner pool size is 1 2023-07-12 19:17:41,088 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner 2023-07-12 19:17:41,088 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner 2023-07-12 19:17:41,088 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreWALCleaner 2023-07-12 19:17:41,088 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize 
cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveProcedureWALCleaner 2023-07-12 19:17:41,088 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.LogCleaner(148): Creating 1 old WALs cleaner threads 2023-07-12 19:17:41,089 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=LogsCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-07-12 19:17:41,089 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT, locked=true; InitMetaProcedure table=hbase:meta 2023-07-12 19:17:41,089 INFO [PEWorker-1] procedure.InitMetaProcedure(71): BOOTSTRAP: creating hbase:meta region 2023-07-12 19:17:41,090 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.DirScanPool(70): hfile_cleaner Cleaner pool size is 2 2023-07-12 19:17:41,090 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreHFileCleaner 2023-07-12 19:17:41,090 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.HFileLinkCleaner 2023-07-12 19:17:41,091 INFO [PEWorker-1] util.FSTableDescriptors(128): Creating new hbase:meta table descriptor 'hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-07-12 19:17:41,092 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.snapshot.SnapshotHFileCleaner 2023-07-12 19:17:41,093 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner 2023-07-12 19:17:41,095 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.HFileCleaner(242): Starting for large file=Thread[master/jenkins-hbase20:0:becomeActiveMaster-HFileCleaner.large.0-1689189461094,5,FailOnTimeoutGroup] 2023-07-12 19:17:41,097 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.HFileCleaner(257): Starting for small files=Thread[master/jenkins-hbase20:0:becomeActiveMaster-HFileCleaner.small.0-1689189461095,5,FailOnTimeoutGroup] 2023-07-12 19:17:41,097 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HFileCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-07-12 19:17:41,098 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(1461): Reopening regions with very high storeFileRefCount is disabled. 
Provide threshold value > 0 for hbase.regions.recovery.store.file.ref.count to enable it. 2023-07-12 19:17:41,098 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=ReplicationBarrierCleaner, period=43200000, unit=MILLISECONDS is enabled. 2023-07-12 19:17:41,099 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=SnapshotCleaner, period=1800000, unit=MILLISECONDS is enabled. 2023-07-12 19:17:41,108 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost.localdomain:33609/user/jenkins/test-data/d6326029-fefc-c726-4805-b208dab29d96/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-07-12 19:17:41,108 INFO [PEWorker-1] util.FSTableDescriptors(135): Updated hbase:meta table descriptor to hdfs://localhost.localdomain:33609/user/jenkins/test-data/d6326029-fefc-c726-4805-b208dab29d96/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-07-12 19:17:41,109 INFO [PEWorker-1] regionserver.HRegion(7675): creating {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:33609/user/jenkins/test-data/d6326029-fefc-c726-4805-b208dab29d96 2023-07-12 19:17:41,123 DEBUG [PEWorker-1] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 19:17:41,124 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-12 19:17:41,126 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:33609/user/jenkins/test-data/d6326029-fefc-c726-4805-b208dab29d96/data/hbase/meta/1588230740/info 2023-07-12 19:17:41,126 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window 
org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-12 19:17:41,127 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 19:17:41,127 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-12 19:17:41,130 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:33609/user/jenkins/test-data/d6326029-fefc-c726-4805-b208dab29d96/data/hbase/meta/1588230740/rep_barrier 2023-07-12 19:17:41,130 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-12 19:17:41,131 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 19:17:41,131 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-12 19:17:41,133 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:33609/user/jenkins/test-data/d6326029-fefc-c726-4805-b208dab29d96/data/hbase/meta/1588230740/table 2023-07-12 19:17:41,133 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-12 19:17:41,133 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, 
verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 19:17:41,139 INFO [RS:0;jenkins-hbase20:38393] regionserver.HRegionServer(951): ClusterId : 757dbf3b-f9a9-42a7-9302-a31bb86cea2b 2023-07-12 19:17:41,139 DEBUG [RS:0;jenkins-hbase20:38393] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-12 19:17:41,139 INFO [RS:1;jenkins-hbase20:33397] regionserver.HRegionServer(951): ClusterId : 757dbf3b-f9a9-42a7-9302-a31bb86cea2b 2023-07-12 19:17:41,139 DEBUG [RS:1;jenkins-hbase20:33397] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-12 19:17:41,141 DEBUG [RS:0;jenkins-hbase20:38393] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-12 19:17:41,141 DEBUG [RS:0;jenkins-hbase20:38393] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-12 19:17:41,141 DEBUG [RS:1;jenkins-hbase20:33397] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-12 19:17:41,141 DEBUG [RS:1;jenkins-hbase20:33397] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-12 19:17:41,142 DEBUG [RS:0;jenkins-hbase20:38393] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-12 19:17:41,142 DEBUG [RS:1;jenkins-hbase20:33397] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-12 19:17:41,150 DEBUG [RS:1;jenkins-hbase20:33397] zookeeper.ReadOnlyZKClient(139): Connect 0x783acad3 to 127.0.0.1:50438 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-12 19:17:41,151 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:33609/user/jenkins/test-data/d6326029-fefc-c726-4805-b208dab29d96/data/hbase/meta/1588230740 2023-07-12 19:17:41,151 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:33609/user/jenkins/test-data/d6326029-fefc-c726-4805-b208dab29d96/data/hbase/meta/1588230740 2023-07-12 19:17:41,151 DEBUG [RS:0;jenkins-hbase20:38393] zookeeper.ReadOnlyZKClient(139): Connect 0x4776e5c8 to 127.0.0.1:50438 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-12 19:17:41,153 INFO [RS:2;jenkins-hbase20:46241] regionserver.HRegionServer(951): ClusterId : 757dbf3b-f9a9-42a7-9302-a31bb86cea2b 2023-07-12 19:17:41,153 DEBUG [RS:2;jenkins-hbase20:46241] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-12 19:17:41,154 DEBUG [PEWorker-1] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 
2023-07-12 19:17:41,167 DEBUG [PEWorker-1] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-12 19:17:41,172 DEBUG [PEWorker-1] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:33609/user/jenkins/test-data/d6326029-fefc-c726-4805-b208dab29d96/data/hbase/meta/1588230740/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-12 19:17:41,173 INFO [PEWorker-1] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11665579680, jitterRate=0.086441770195961}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-12 19:17:41,173 DEBUG [PEWorker-1] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-12 19:17:41,173 DEBUG [PEWorker-1] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-12 19:17:41,173 INFO [PEWorker-1] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-12 19:17:41,173 DEBUG [PEWorker-1] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-12 19:17:41,173 DEBUG [PEWorker-1] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-12 19:17:41,173 DEBUG [PEWorker-1] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-12 19:17:41,174 INFO [PEWorker-1] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-12 19:17:41,174 DEBUG [PEWorker-1] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-12 19:17:41,175 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_ASSIGN_META, locked=true; InitMetaProcedure table=hbase:meta 2023-07-12 19:17:41,175 INFO [PEWorker-1] procedure.InitMetaProcedure(103): Going to assign meta 2023-07-12 19:17:41,175 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN}] 2023-07-12 19:17:41,179 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN 2023-07-12 19:17:41,181 DEBUG [RS:2;jenkins-hbase20:46241] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-12 19:17:41,182 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN; state=OFFLINE, location=null; forceNewPlan=false, retain=false 2023-07-12 19:17:41,182 DEBUG [RS:1;jenkins-hbase20:33397] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@76fd2ba4, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-12 19:17:41,182 DEBUG [RS:1;jenkins-hbase20:33397] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@72c2a1a3, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind 
address=jenkins-hbase20.apache.org/148.251.75.209:0 2023-07-12 19:17:41,182 DEBUG [RS:2;jenkins-hbase20:46241] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-12 19:17:41,184 DEBUG [RS:0;jenkins-hbase20:38393] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@2f16f524, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-12 19:17:41,186 DEBUG [RS:0;jenkins-hbase20:38393] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@767ba342, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase20.apache.org/148.251.75.209:0 2023-07-12 19:17:41,214 DEBUG [RS:2;jenkins-hbase20:46241] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-12 19:17:41,217 DEBUG [RS:2;jenkins-hbase20:46241] zookeeper.ReadOnlyZKClient(139): Connect 0x736147b0 to 127.0.0.1:50438 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-12 19:17:41,235 DEBUG [RS:0;jenkins-hbase20:38393] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:0;jenkins-hbase20:38393 2023-07-12 19:17:41,235 INFO [RS:0;jenkins-hbase20:38393] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-12 19:17:41,235 INFO [RS:0;jenkins-hbase20:38393] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-12 19:17:41,235 DEBUG [RS:0;jenkins-hbase20:38393] regionserver.HRegionServer(1022): About to register with Master. 2023-07-12 19:17:41,239 DEBUG [RS:1;jenkins-hbase20:33397] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:1;jenkins-hbase20:33397 2023-07-12 19:17:41,242 INFO [RS:1;jenkins-hbase20:33397] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-12 19:17:41,242 INFO [RS:1;jenkins-hbase20:33397] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-12 19:17:41,242 DEBUG [RS:1;jenkins-hbase20:33397] regionserver.HRegionServer(1022): About to register with Master. 
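The meta open logged a little earlier reports desiredMaxFileSize=11665579680 with jitterRate=0.086441770195961. That figure is consistent with a positive jitter applied to the default 10 GiB hbase.hregion.max.filesize; a rough sketch of that relationship (an assumption about the base value, not the verbatim split-policy source):

    // Sketch: deriving a jittered desiredMaxFileSize from an assumed
    // 10 GiB base. Not the actual ConstantSizeRegionSplitPolicy code.
    public class SplitSizeJitterSketch {
      public static void main(String[] args) {
        long maxFileSize = 10L * 1024 * 1024 * 1024;   // 10737418240
        double jitterRate = 0.086441770195961;         // value from the log above
        long desired = maxFileSize + (long) (maxFileSize * jitterRate);
        // ≈ 11665579680, the desiredMaxFileSize logged for this meta open
        // (the printed jitterRate is rounded, so the last digit may differ by one)
        System.out.println(desired);
      }
    }

The second meta open later in this log (jitterRate=-0.12190674245357513, desiredMaxFileSize=9428454560) is consistent with the same formula applied with a negative jitter.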
2023-07-12 19:17:41,242 INFO [RS:0;jenkins-hbase20:38393] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase20.apache.org,33451,1689189460437 with isa=jenkins-hbase20.apache.org/148.251.75.209:38393, startcode=1689189460532 2023-07-12 19:17:41,243 DEBUG [RS:0;jenkins-hbase20:38393] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-12 19:17:41,243 INFO [RS:1;jenkins-hbase20:33397] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase20.apache.org,33451,1689189460437 with isa=jenkins-hbase20.apache.org/148.251.75.209:33397, startcode=1689189460611 2023-07-12 19:17:41,243 DEBUG [RS:1;jenkins-hbase20:33397] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-12 19:17:41,276 DEBUG [RS:2;jenkins-hbase20:46241] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@7da6b7ac, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-12 19:17:41,276 DEBUG [RS:2;jenkins-hbase20:46241] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@79992cc0, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase20.apache.org/148.251.75.209:0 2023-07-12 19:17:41,277 INFO [RS-EventLoopGroup-12-2] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:36577, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.7 (auth:SIMPLE), service=RegionServerStatusService 2023-07-12 19:17:41,278 INFO [RS-EventLoopGroup-12-3] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:53795, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.8 (auth:SIMPLE), service=RegionServerStatusService 2023-07-12 19:17:41,285 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=33451] master.ServerManager(394): Registering regionserver=jenkins-hbase20.apache.org,38393,1689189460532 2023-07-12 19:17:41,285 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=33451] master.ServerManager(394): Registering regionserver=jenkins-hbase20.apache.org,33397,1689189460611 2023-07-12 19:17:41,285 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase20.apache.org,33451,1689189460437] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
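As each region server registers, the RSGroupInfoManagerImpl listener above refreshes the membership of the default group. A hypothetical snippet showing how a test could inspect that membership through the hbase-rsgroup client; class and method names here are my assumption of the 2.4 RSGroupAdminClient surface, not something taken from this log:

    // Hypothetical sketch: inspect the "default" rsgroup once the region
    // servers above have registered. Adjust names if the real 2.4 API differs.
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
    import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

    public class DefaultGroupSketch {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        try (Connection conn = ConnectionFactory.createConnection(conf)) {
          RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
          RSGroupInfo defaultGroup =
              rsGroupAdmin.getRSGroupInfo(RSGroupInfo.DEFAULT_GROUP);
          // Expect the servers registered above (ports 38393, 33397, 46241).
          System.out.println(defaultGroup.getServers());
        }
      }
    }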
2023-07-12 19:17:41,286 DEBUG [RS:1;jenkins-hbase20:33397] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost.localdomain:33609/user/jenkins/test-data/d6326029-fefc-c726-4805-b208dab29d96 2023-07-12 19:17:41,286 DEBUG [RS:0;jenkins-hbase20:38393] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost.localdomain:33609/user/jenkins/test-data/d6326029-fefc-c726-4805-b208dab29d96 2023-07-12 19:17:41,286 DEBUG [RS:1;jenkins-hbase20:33397] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost.localdomain:33609 2023-07-12 19:17:41,288 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase20.apache.org,33451,1689189460437] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 2 2023-07-12 19:17:41,288 DEBUG [RS:1;jenkins-hbase20:33397] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=37363 2023-07-12 19:17:41,286 DEBUG [RS:0;jenkins-hbase20:38393] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost.localdomain:33609 2023-07-12 19:17:41,288 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase20.apache.org,33451,1689189460437] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-12 19:17:41,288 DEBUG [RS:0;jenkins-hbase20:38393] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=37363 2023-07-12 19:17:41,291 DEBUG [Listener at localhost.localdomain/40989-EventThread] zookeeper.ZKWatcher(600): master:33451-0x100829e263a0000, quorum=127.0.0.1:50438, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-12 19:17:41,291 DEBUG [RS:2;jenkins-hbase20:46241] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:2;jenkins-hbase20:46241 2023-07-12 19:17:41,292 INFO [RS:2;jenkins-hbase20:46241] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-12 19:17:41,292 INFO [RS:2;jenkins-hbase20:46241] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-12 19:17:41,292 DEBUG [RS:2;jenkins-hbase20:46241] regionserver.HRegionServer(1022): About to register with Master. 2023-07-12 19:17:41,300 DEBUG [RS:0;jenkins-hbase20:38393] zookeeper.ZKUtil(162): regionserver:38393-0x100829e263a0001, quorum=127.0.0.1:50438, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,38393,1689189460532 2023-07-12 19:17:41,301 WARN [RS:0;jenkins-hbase20:38393] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
2023-07-12 19:17:41,301 INFO [RS:0;jenkins-hbase20:38393] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-12 19:17:41,301 DEBUG [RS:0;jenkins-hbase20:38393] regionserver.HRegionServer(1948): logDir=hdfs://localhost.localdomain:33609/user/jenkins/test-data/d6326029-fefc-c726-4805-b208dab29d96/WALs/jenkins-hbase20.apache.org,38393,1689189460532 2023-07-12 19:17:41,302 DEBUG [RS:1;jenkins-hbase20:33397] zookeeper.ZKUtil(162): regionserver:33397-0x100829e263a0002, quorum=127.0.0.1:50438, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,33397,1689189460611 2023-07-12 19:17:41,302 WARN [RS:1;jenkins-hbase20:33397] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-12 19:17:41,302 INFO [RS:1;jenkins-hbase20:33397] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-12 19:17:41,302 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase20.apache.org,38393,1689189460532] 2023-07-12 19:17:41,302 DEBUG [RS:1;jenkins-hbase20:33397] regionserver.HRegionServer(1948): logDir=hdfs://localhost.localdomain:33609/user/jenkins/test-data/d6326029-fefc-c726-4805-b208dab29d96/WALs/jenkins-hbase20.apache.org,33397,1689189460611 2023-07-12 19:17:41,302 INFO [RS:2;jenkins-hbase20:46241] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase20.apache.org,33451,1689189460437 with isa=jenkins-hbase20.apache.org/148.251.75.209:46241, startcode=1689189460679 2023-07-12 19:17:41,302 DEBUG [RS:2;jenkins-hbase20:46241] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-12 19:17:41,303 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase20.apache.org,33397,1689189460611] 2023-07-12 19:17:41,304 INFO [RS-EventLoopGroup-12-1] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:48647, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.9 (auth:SIMPLE), service=RegionServerStatusService 2023-07-12 19:17:41,305 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=33451] master.ServerManager(394): Registering regionserver=jenkins-hbase20.apache.org,46241,1689189460679 2023-07-12 19:17:41,305 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase20.apache.org,33451,1689189460437] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
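The ZKUtil and RegionServerTracker entries above show each region server publishing an ephemeral znode under /hbase/rs and the master reacting to the resulting NodeChildrenChanged event. A minimal stand-alone sketch of the same ZooKeeper pattern, using the plain ZooKeeper API rather than HBase's ZKWatcher/ZKUtil wrappers (the quorum address is taken from the log; the child znode name is hypothetical):

    import java.util.List;
    import org.apache.zookeeper.CreateMode;
    import org.apache.zookeeper.ZooDefs;
    import org.apache.zookeeper.ZooKeeper;

    public class EphemeralRegistrationSketch {
      public static void main(String[] args) throws Exception {
        ZooKeeper zk = new ZooKeeper("127.0.0.1:50438", 90000, event -> {});
        // Register a "server" as an ephemeral child of /hbase/rs; it vanishes
        // when the session dies, which is what makes crash detection work.
        zk.create("/hbase/rs/example-host,12345,0", new byte[0],
            ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.EPHEMERAL);
        // Watching the parent delivers NodeChildrenChanged on membership
        // changes, mirroring the master-side lines above.
        List<String> servers = zk.getChildren("/hbase/rs", true);
        System.out.println(servers);
        zk.close();
      }
    }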
2023-07-12 19:17:41,305 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase20.apache.org,33451,1689189460437] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 3 2023-07-12 19:17:41,306 DEBUG [RS:2;jenkins-hbase20:46241] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost.localdomain:33609/user/jenkins/test-data/d6326029-fefc-c726-4805-b208dab29d96 2023-07-12 19:17:41,306 DEBUG [RS:2;jenkins-hbase20:46241] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost.localdomain:33609 2023-07-12 19:17:41,306 DEBUG [RS:2;jenkins-hbase20:46241] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=37363 2023-07-12 19:17:41,307 DEBUG [Listener at localhost.localdomain/40989-EventThread] zookeeper.ZKWatcher(600): master:33451-0x100829e263a0000, quorum=127.0.0.1:50438, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-12 19:17:41,308 DEBUG [RS:2;jenkins-hbase20:46241] zookeeper.ZKUtil(162): regionserver:46241-0x100829e263a0003, quorum=127.0.0.1:50438, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,46241,1689189460679 2023-07-12 19:17:41,308 WARN [RS:2;jenkins-hbase20:46241] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-12 19:17:41,308 INFO [RS:2;jenkins-hbase20:46241] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-12 19:17:41,308 DEBUG [RS:2;jenkins-hbase20:46241] regionserver.HRegionServer(1948): logDir=hdfs://localhost.localdomain:33609/user/jenkins/test-data/d6326029-fefc-c726-4805-b208dab29d96/WALs/jenkins-hbase20.apache.org,46241,1689189460679 2023-07-12 19:17:41,316 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase20.apache.org,46241,1689189460679] 2023-07-12 19:17:41,330 DEBUG [RS:1;jenkins-hbase20:33397] zookeeper.ZKUtil(162): regionserver:33397-0x100829e263a0002, quorum=127.0.0.1:50438, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,38393,1689189460532 2023-07-12 19:17:41,331 DEBUG [RS:0;jenkins-hbase20:38393] zookeeper.ZKUtil(162): regionserver:38393-0x100829e263a0001, quorum=127.0.0.1:50438, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,38393,1689189460532 2023-07-12 19:17:41,331 DEBUG [RS:2;jenkins-hbase20:46241] zookeeper.ZKUtil(162): regionserver:46241-0x100829e263a0003, quorum=127.0.0.1:50438, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,38393,1689189460532 2023-07-12 19:17:41,331 DEBUG [RS:1;jenkins-hbase20:33397] zookeeper.ZKUtil(162): regionserver:33397-0x100829e263a0002, quorum=127.0.0.1:50438, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,33397,1689189460611 2023-07-12 19:17:41,331 DEBUG [RS:0;jenkins-hbase20:38393] zookeeper.ZKUtil(162): regionserver:38393-0x100829e263a0001, quorum=127.0.0.1:50438, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,33397,1689189460611 2023-07-12 19:17:41,331 DEBUG [RS:2;jenkins-hbase20:46241] zookeeper.ZKUtil(162): regionserver:46241-0x100829e263a0003, quorum=127.0.0.1:50438, baseZNode=/hbase Set watcher on existing 
znode=/hbase/rs/jenkins-hbase20.apache.org,33397,1689189460611 2023-07-12 19:17:41,331 DEBUG [RS:1;jenkins-hbase20:33397] zookeeper.ZKUtil(162): regionserver:33397-0x100829e263a0002, quorum=127.0.0.1:50438, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,46241,1689189460679 2023-07-12 19:17:41,331 DEBUG [RS:0;jenkins-hbase20:38393] zookeeper.ZKUtil(162): regionserver:38393-0x100829e263a0001, quorum=127.0.0.1:50438, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,46241,1689189460679 2023-07-12 19:17:41,332 DEBUG [RS:2;jenkins-hbase20:46241] zookeeper.ZKUtil(162): regionserver:46241-0x100829e263a0003, quorum=127.0.0.1:50438, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,46241,1689189460679 2023-07-12 19:17:41,332 DEBUG [RS:1;jenkins-hbase20:33397] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-12 19:17:41,332 DEBUG [jenkins-hbase20:33451] assignment.AssignmentManager(2176): Processing assignQueue; systemServersCount=3, allServersCount=3 2023-07-12 19:17:41,332 DEBUG [RS:0;jenkins-hbase20:38393] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-12 19:17:41,332 DEBUG [RS:2;jenkins-hbase20:46241] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-12 19:17:41,332 INFO [RS:0;jenkins-hbase20:38393] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-12 19:17:41,333 INFO [RS:2;jenkins-hbase20:46241] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-12 19:17:41,332 INFO [RS:1;jenkins-hbase20:33397] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-12 19:17:41,332 DEBUG [jenkins-hbase20:33451] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase20.apache.org=0} racks are {/default-rack=0} 2023-07-12 19:17:41,334 DEBUG [jenkins-hbase20:33451] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-12 19:17:41,334 DEBUG [jenkins-hbase20:33451] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-12 19:17:41,334 DEBUG [jenkins-hbase20:33451] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-12 19:17:41,334 DEBUG [jenkins-hbase20:33451] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-12 19:17:41,335 INFO [PEWorker-3] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase20.apache.org,46241,1689189460679, state=OPENING 2023-07-12 19:17:41,335 INFO [RS:0;jenkins-hbase20:38393] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-12 19:17:41,336 DEBUG [PEWorker-3] zookeeper.MetaTableLocator(240): hbase:meta region location doesn't exist, create it 2023-07-12 19:17:41,336 DEBUG [Listener at localhost.localdomain/40989-EventThread] zookeeper.ZKWatcher(600): master:33451-0x100829e263a0000, quorum=127.0.0.1:50438, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-12 19:17:41,337 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=3, ppid=2, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase20.apache.org,46241,1689189460679}] 2023-07-12 19:17:41,344 DEBUG 
[zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-12 19:17:41,358 INFO [RS:2;jenkins-hbase20:46241] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-12 19:17:41,359 INFO [RS:0;jenkins-hbase20:38393] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-12 19:17:41,359 INFO [RS:0;jenkins-hbase20:38393] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-12 19:17:41,363 INFO [RS:2;jenkins-hbase20:46241] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-12 19:17:41,363 INFO [RS:2;jenkins-hbase20:46241] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-12 19:17:41,364 INFO [RS:0;jenkins-hbase20:38393] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-12 19:17:41,365 INFO [RS:1;jenkins-hbase20:33397] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-12 19:17:41,367 INFO [RS:1;jenkins-hbase20:33397] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-12 19:17:41,367 INFO [RS:1;jenkins-hbase20:33397] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-12 19:17:41,372 INFO [RS:1;jenkins-hbase20:33397] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-12 19:17:41,373 INFO [RS:2;jenkins-hbase20:46241] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-12 19:17:41,377 INFO [RS:2;jenkins-hbase20:46241] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
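The MemStoreFlusher lines above report globalMemStoreLimit=782.4 M with a low-water mark of 743.3 M, which is consistent with the usual 95% lower-limit factor. A one-line check, assuming the default hbase.regionserver.global.memstore.size.lower.limit of 0.95:

    // 743.3 M ≈ 95% of 782.4 M: the logged low-water mark matches a 0.95
    // lower-limit factor applied to the global memstore limit.
    public class MemStoreLowMarkSketch {
      public static void main(String[] args) {
        double globalLimitMb = 782.4;              // globalMemStoreLimit above
        double lowMarkMb = globalLimitMb * 0.95;   // 743.28 ≈ 743.3 M as logged
        System.out.printf("%.1f M%n", lowMarkMb);
      }
    }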
2023-07-12 19:17:41,377 DEBUG [RS:2;jenkins-hbase20:46241] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-12 19:17:41,377 DEBUG [RS:2;jenkins-hbase20:46241] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-12 19:17:41,377 DEBUG [RS:2;jenkins-hbase20:46241] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-12 19:17:41,378 DEBUG [RS:2;jenkins-hbase20:46241] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-12 19:17:41,378 DEBUG [RS:2;jenkins-hbase20:46241] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-12 19:17:41,378 DEBUG [RS:2;jenkins-hbase20:46241] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase20:0, corePoolSize=2, maxPoolSize=2 2023-07-12 19:17:41,378 DEBUG [RS:2;jenkins-hbase20:46241] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-12 19:17:41,378 DEBUG [RS:2;jenkins-hbase20:46241] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-12 19:17:41,378 DEBUG [RS:2;jenkins-hbase20:46241] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-12 19:17:41,378 DEBUG [RS:2;jenkins-hbase20:46241] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-12 19:17:41,384 WARN [ReadOnlyZKClient-127.0.0.1:50438@0x58259eb7] client.ZKConnectionRegistry(168): Meta region is in state OPENING 2023-07-12 19:17:41,384 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase20.apache.org,33451,1689189460437] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-12 19:17:41,396 INFO [RS:1;jenkins-hbase20:33397] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-12 19:17:41,396 INFO [RS:2;jenkins-hbase20:46241] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 
2023-07-12 19:17:41,396 DEBUG [RS:1;jenkins-hbase20:33397] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-12 19:17:41,396 DEBUG [RS:1;jenkins-hbase20:33397] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-12 19:17:41,396 DEBUG [RS:1;jenkins-hbase20:33397] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-12 19:17:41,397 DEBUG [RS:1;jenkins-hbase20:33397] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-12 19:17:41,397 DEBUG [RS:1;jenkins-hbase20:33397] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-12 19:17:41,397 DEBUG [RS:1;jenkins-hbase20:33397] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase20:0, corePoolSize=2, maxPoolSize=2 2023-07-12 19:17:41,397 DEBUG [RS:1;jenkins-hbase20:33397] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-12 19:17:41,397 DEBUG [RS:1;jenkins-hbase20:33397] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-12 19:17:41,397 DEBUG [RS:1;jenkins-hbase20:33397] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-12 19:17:41,397 DEBUG [RS:1;jenkins-hbase20:33397] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-12 19:17:41,396 INFO [RS:0;jenkins-hbase20:38393] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-12 19:17:41,396 INFO [RS:2;jenkins-hbase20:46241] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-12 19:17:41,397 INFO [RS:2;jenkins-hbase20:46241] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 
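The ScheduledChore entries above (CompactionChecker every 1000 ms, MemstoreFlusherChore every 1000 ms, nonceCleaner every 360000 ms, and so on) are plain periodic background tasks. As a rough analogy in standard Java only, not the HBase ChoreService API itself:

    // Analogy only: ChoreService runs ScheduledChore instances much like a
    // ScheduledExecutorService runs fixed-rate tasks.
    import java.util.concurrent.Executors;
    import java.util.concurrent.ScheduledExecutorService;
    import java.util.concurrent.TimeUnit;

    public class ChoreAnalogySketch {
      public static void main(String[] args) {
        ScheduledExecutorService chorePool = Executors.newScheduledThreadPool(1);
        chorePool.scheduleAtFixedRate(
            () -> System.out.println("CompactionChecker tick"),
            0, 1000, TimeUnit.MILLISECONDS);   // period=1000 ms, as in the log
      }
    }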
2023-07-12 19:17:41,399 DEBUG [RS:0;jenkins-hbase20:38393] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-12 19:17:41,399 DEBUG [RS:0;jenkins-hbase20:38393] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-12 19:17:41,399 INFO [RS-EventLoopGroup-15-2] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:60700, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-12 19:17:41,400 DEBUG [RS:0;jenkins-hbase20:38393] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-12 19:17:41,400 DEBUG [RS:0;jenkins-hbase20:38393] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-12 19:17:41,400 DEBUG [RS:0;jenkins-hbase20:38393] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-12 19:17:41,400 DEBUG [RS:0;jenkins-hbase20:38393] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase20:0, corePoolSize=2, maxPoolSize=2 2023-07-12 19:17:41,400 DEBUG [RS:0;jenkins-hbase20:38393] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-12 19:17:41,400 DEBUG [RS:0;jenkins-hbase20:38393] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-12 19:17:41,400 DEBUG [RS:0;jenkins-hbase20:38393] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-12 19:17:41,400 DEBUG [RS:0;jenkins-hbase20:38393] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-12 19:17:41,400 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=46241] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.ipc.ServerNotRunningYetException: Server jenkins-hbase20.apache.org,46241,1689189460679 is not running yet at org.apache.hadoop.hbase.regionserver.RSRpcServices.checkOpen(RSRpcServices.java:1533) at org.apache.hadoop.hbase.regionserver.RSRpcServices.get(RSRpcServices.java:2513) at org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:44992) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-12 19:17:41,408 INFO [RS:0;jenkins-hbase20:38393] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-12 19:17:41,408 INFO [RS:0;jenkins-hbase20:38393] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 
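The ServerNotRunningYetException just above (and the earlier "Meta region is in state OPENING" warning) is the expected race while the RSGroup startup worker probes the cluster before meta is fully open; the HBase client retries internally until the server answers. A hand-rolled sketch of that retry idea using only standard client calls (the loop bounds and row key are illustrative, not from this log):

    import java.io.IOException;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.Get;
    import org.apache.hadoop.hbase.client.Table;
    import org.apache.hadoop.hbase.util.Bytes;

    public class RetryUntilServingSketch {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        try (Connection conn = ConnectionFactory.createConnection(conf);
             Table meta = conn.getTable(TableName.META_TABLE_NAME)) {
          for (int attempt = 0; attempt < 30; attempt++) {
            try {
              meta.get(new Get(Bytes.toBytes("anyrow")));
              break;                  // the serving region server answered
            } catch (IOException e) {
              Thread.sleep(1000);     // e.g. ServerNotRunningYetException: back off
            }
          }
        }
      }
    }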
2023-07-12 19:17:41,408 INFO [RS:0;jenkins-hbase20:38393] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-12 19:17:41,414 INFO [RS:1;jenkins-hbase20:33397] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-12 19:17:41,414 INFO [RS:1;jenkins-hbase20:33397] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-12 19:17:41,414 INFO [RS:1;jenkins-hbase20:33397] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-12 19:17:41,431 INFO [RS:2;jenkins-hbase20:46241] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-12 19:17:41,432 INFO [RS:2;jenkins-hbase20:46241] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,46241,1689189460679-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-12 19:17:41,439 INFO [RS:0;jenkins-hbase20:38393] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-12 19:17:41,439 INFO [RS:0;jenkins-hbase20:38393] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,38393,1689189460532-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-12 19:17:41,440 INFO [RS:1;jenkins-hbase20:33397] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-12 19:17:41,441 INFO [RS:1;jenkins-hbase20:33397] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,33397,1689189460611-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-12 19:17:41,448 INFO [RS:2;jenkins-hbase20:46241] regionserver.Replication(203): jenkins-hbase20.apache.org,46241,1689189460679 started 2023-07-12 19:17:41,448 INFO [RS:2;jenkins-hbase20:46241] regionserver.HRegionServer(1637): Serving as jenkins-hbase20.apache.org,46241,1689189460679, RpcServer on jenkins-hbase20.apache.org/148.251.75.209:46241, sessionid=0x100829e263a0003 2023-07-12 19:17:41,448 DEBUG [RS:2;jenkins-hbase20:46241] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-12 19:17:41,448 DEBUG [RS:2;jenkins-hbase20:46241] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase20.apache.org,46241,1689189460679 2023-07-12 19:17:41,448 DEBUG [RS:2;jenkins-hbase20:46241] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase20.apache.org,46241,1689189460679' 2023-07-12 19:17:41,448 DEBUG [RS:2;jenkins-hbase20:46241] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-12 19:17:41,449 DEBUG [RS:2;jenkins-hbase20:46241] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-12 19:17:41,449 DEBUG [RS:2;jenkins-hbase20:46241] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-12 19:17:41,449 DEBUG [RS:2;jenkins-hbase20:46241] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-12 19:17:41,449 DEBUG [RS:2;jenkins-hbase20:46241] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase20.apache.org,46241,1689189460679 2023-07-12 19:17:41,449 DEBUG [RS:2;jenkins-hbase20:46241] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 
'jenkins-hbase20.apache.org,46241,1689189460679' 2023-07-12 19:17:41,449 DEBUG [RS:2;jenkins-hbase20:46241] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-12 19:17:41,450 DEBUG [RS:2;jenkins-hbase20:46241] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-12 19:17:41,450 DEBUG [RS:2;jenkins-hbase20:46241] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-12 19:17:41,450 INFO [RS:2;jenkins-hbase20:46241] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-12 19:17:41,450 INFO [RS:2;jenkins-hbase20:46241] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 2023-07-12 19:17:41,454 INFO [RS:1;jenkins-hbase20:33397] regionserver.Replication(203): jenkins-hbase20.apache.org,33397,1689189460611 started 2023-07-12 19:17:41,454 INFO [RS:1;jenkins-hbase20:33397] regionserver.HRegionServer(1637): Serving as jenkins-hbase20.apache.org,33397,1689189460611, RpcServer on jenkins-hbase20.apache.org/148.251.75.209:33397, sessionid=0x100829e263a0002 2023-07-12 19:17:41,458 INFO [RS:0;jenkins-hbase20:38393] regionserver.Replication(203): jenkins-hbase20.apache.org,38393,1689189460532 started 2023-07-12 19:17:41,458 DEBUG [RS:1;jenkins-hbase20:33397] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-12 19:17:41,458 INFO [RS:0;jenkins-hbase20:38393] regionserver.HRegionServer(1637): Serving as jenkins-hbase20.apache.org,38393,1689189460532, RpcServer on jenkins-hbase20.apache.org/148.251.75.209:38393, sessionid=0x100829e263a0001 2023-07-12 19:17:41,458 DEBUG [RS:1;jenkins-hbase20:33397] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase20.apache.org,33397,1689189460611 2023-07-12 19:17:41,459 DEBUG [RS:1;jenkins-hbase20:33397] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase20.apache.org,33397,1689189460611' 2023-07-12 19:17:41,459 DEBUG [RS:1;jenkins-hbase20:33397] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-12 19:17:41,459 DEBUG [RS:0;jenkins-hbase20:38393] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-12 19:17:41,459 DEBUG [RS:0;jenkins-hbase20:38393] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase20.apache.org,38393,1689189460532 2023-07-12 19:17:41,459 DEBUG [RS:0;jenkins-hbase20:38393] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase20.apache.org,38393,1689189460532' 2023-07-12 19:17:41,459 DEBUG [RS:0;jenkins-hbase20:38393] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-12 19:17:41,459 DEBUG [RS:1;jenkins-hbase20:33397] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-12 19:17:41,459 DEBUG [RS:0;jenkins-hbase20:38393] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-12 19:17:41,459 DEBUG [RS:1;jenkins-hbase20:33397] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-12 19:17:41,460 DEBUG [RS:0;jenkins-hbase20:38393] procedure.RegionServerProcedureManagerHost(53): Procedure 
flush-table-proc started 2023-07-12 19:17:41,460 DEBUG [RS:0;jenkins-hbase20:38393] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-12 19:17:41,460 DEBUG [RS:1;jenkins-hbase20:33397] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-12 19:17:41,460 DEBUG [RS:1;jenkins-hbase20:33397] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase20.apache.org,33397,1689189460611 2023-07-12 19:17:41,460 DEBUG [RS:0;jenkins-hbase20:38393] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase20.apache.org,38393,1689189460532 2023-07-12 19:17:41,460 DEBUG [RS:0;jenkins-hbase20:38393] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase20.apache.org,38393,1689189460532' 2023-07-12 19:17:41,460 DEBUG [RS:0;jenkins-hbase20:38393] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-12 19:17:41,460 DEBUG [RS:1;jenkins-hbase20:33397] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase20.apache.org,33397,1689189460611' 2023-07-12 19:17:41,460 DEBUG [RS:1;jenkins-hbase20:33397] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-12 19:17:41,460 DEBUG [RS:0;jenkins-hbase20:38393] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-12 19:17:41,460 DEBUG [RS:1;jenkins-hbase20:33397] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-12 19:17:41,460 DEBUG [RS:0;jenkins-hbase20:38393] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-12 19:17:41,460 INFO [RS:0;jenkins-hbase20:38393] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-12 19:17:41,460 INFO [RS:0;jenkins-hbase20:38393] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 2023-07-12 19:17:41,460 DEBUG [RS:1;jenkins-hbase20:33397] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-12 19:17:41,461 INFO [RS:1;jenkins-hbase20:33397] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-12 19:17:41,461 INFO [RS:1;jenkins-hbase20:33397] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
2023-07-12 19:17:41,534 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase20.apache.org,46241,1689189460679 2023-07-12 19:17:41,537 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-12 19:17:41,541 INFO [RS-EventLoopGroup-15-3] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:60706, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-12 19:17:41,546 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740 2023-07-12 19:17:41,546 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-12 19:17:41,548 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase20.apache.org%2C46241%2C1689189460679.meta, suffix=.meta, logDir=hdfs://localhost.localdomain:33609/user/jenkins/test-data/d6326029-fefc-c726-4805-b208dab29d96/WALs/jenkins-hbase20.apache.org,46241,1689189460679, archiveDir=hdfs://localhost.localdomain:33609/user/jenkins/test-data/d6326029-fefc-c726-4805-b208dab29d96/oldWALs, maxLogs=32 2023-07-12 19:17:41,552 INFO [RS:2;jenkins-hbase20:46241] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase20.apache.org%2C46241%2C1689189460679, suffix=, logDir=hdfs://localhost.localdomain:33609/user/jenkins/test-data/d6326029-fefc-c726-4805-b208dab29d96/WALs/jenkins-hbase20.apache.org,46241,1689189460679, archiveDir=hdfs://localhost.localdomain:33609/user/jenkins/test-data/d6326029-fefc-c726-4805-b208dab29d96/oldWALs, maxLogs=32 2023-07-12 19:17:41,568 INFO [RS:0;jenkins-hbase20:38393] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase20.apache.org%2C38393%2C1689189460532, suffix=, logDir=hdfs://localhost.localdomain:33609/user/jenkins/test-data/d6326029-fefc-c726-4805-b208dab29d96/WALs/jenkins-hbase20.apache.org,38393,1689189460532, archiveDir=hdfs://localhost.localdomain:33609/user/jenkins/test-data/d6326029-fefc-c726-4805-b208dab29d96/oldWALs, maxLogs=32 2023-07-12 19:17:41,569 INFO [RS:1;jenkins-hbase20:33397] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase20.apache.org%2C33397%2C1689189460611, suffix=, logDir=hdfs://localhost.localdomain:33609/user/jenkins/test-data/d6326029-fefc-c726-4805-b208dab29d96/WALs/jenkins-hbase20.apache.org,33397,1689189460611, archiveDir=hdfs://localhost.localdomain:33609/user/jenkins/test-data/d6326029-fefc-c726-4805-b208dab29d96/oldWALs, maxLogs=32 2023-07-12 19:17:41,587 DEBUG [RS-EventLoopGroup-15-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:42583,DS-b88c3966-5697-4dc1-92ea-862ca1b952ab,DISK] 2023-07-12 19:17:41,587 DEBUG [RS-EventLoopGroup-15-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:42401,DS-de90d26e-4113-4189-9ccb-c295550dc9c5,DISK] 2023-07-12 19:17:41,592 DEBUG [RS-EventLoopGroup-15-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in 
unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:42583,DS-b88c3966-5697-4dc1-92ea-862ca1b952ab,DISK] 2023-07-12 19:17:41,598 DEBUG [RS-EventLoopGroup-15-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:43191,DS-5ffdd0de-8eb1-4fb9-b872-73b0848bc5e2,DISK] 2023-07-12 19:17:41,599 DEBUG [RS-EventLoopGroup-15-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:42401,DS-de90d26e-4113-4189-9ccb-c295550dc9c5,DISK] 2023-07-12 19:17:41,625 DEBUG [RS-EventLoopGroup-15-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:42401,DS-de90d26e-4113-4189-9ccb-c295550dc9c5,DISK] 2023-07-12 19:17:41,636 DEBUG [RS-EventLoopGroup-15-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:43191,DS-5ffdd0de-8eb1-4fb9-b872-73b0848bc5e2,DISK] 2023-07-12 19:17:41,636 DEBUG [RS-EventLoopGroup-15-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:43191,DS-5ffdd0de-8eb1-4fb9-b872-73b0848bc5e2,DISK] 2023-07-12 19:17:41,636 DEBUG [RS-EventLoopGroup-15-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:42583,DS-b88c3966-5697-4dc1-92ea-862ca1b952ab,DISK] 2023-07-12 19:17:41,639 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/d6326029-fefc-c726-4805-b208dab29d96/WALs/jenkins-hbase20.apache.org,46241,1689189460679/jenkins-hbase20.apache.org%2C46241%2C1689189460679.meta.1689189461549.meta 2023-07-12 19:17:41,642 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:42583,DS-b88c3966-5697-4dc1-92ea-862ca1b952ab,DISK], DatanodeInfoWithStorage[127.0.0.1:42401,DS-de90d26e-4113-4189-9ccb-c295550dc9c5,DISK], DatanodeInfoWithStorage[127.0.0.1:43191,DS-5ffdd0de-8eb1-4fb9-b872-73b0848bc5e2,DISK]] 2023-07-12 19:17:41,643 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-07-12 19:17:41,643 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-12 19:17:41,643 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2023-07-12 19:17:41,643 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 
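The WAL configuration entries above (blocksize=256 MB, rollsize=128 MB, maxLogs=32) and the resulting meta WAL path show the file name being assembled from the configured prefix, the creation timestamp and the ".meta" suffix; the 128 MB roll size is consistent with a 0.5 roll multiplier on the 256 MB block size. A small sketch of that composition (string assembly only, not the AbstractFSWAL implementation):

    public class WalNameSketch {
      public static void main(String[] args) {
        String prefix = "jenkins-hbase20.apache.org%2C46241%2C1689189460679.meta";
        String suffix = ".meta";
        long creationTs = 1689189461549L;  // timestamp embedded in the logged name
        System.out.println(prefix + "." + creationTs + suffix);
        // -> jenkins-hbase20.apache.org%2C46241%2C1689189460679.meta.1689189461549.meta

        long blocksize = 256L * 1024 * 1024;
        long rollsize = (long) (blocksize * 0.5);   // 128 MB, matching the log
        System.out.println(rollsize);
      }
    }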
2023-07-12 19:17:41,643 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-07-12 19:17:41,643 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 19:17:41,644 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7894): checking encryption for 1588230740 2023-07-12 19:17:41,644 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7897): checking classloading for 1588230740 2023-07-12 19:17:41,655 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-12 19:17:41,662 DEBUG [RS-EventLoopGroup-15-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:43191,DS-5ffdd0de-8eb1-4fb9-b872-73b0848bc5e2,DISK] 2023-07-12 19:17:41,663 DEBUG [RS-EventLoopGroup-15-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:42583,DS-b88c3966-5697-4dc1-92ea-862ca1b952ab,DISK] 2023-07-12 19:17:41,663 DEBUG [RS-EventLoopGroup-15-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:42401,DS-de90d26e-4113-4189-9ccb-c295550dc9c5,DISK] 2023-07-12 19:17:41,663 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:33609/user/jenkins/test-data/d6326029-fefc-c726-4805-b208dab29d96/data/hbase/meta/1588230740/info 2023-07-12 19:17:41,664 INFO [RS:0;jenkins-hbase20:38393] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/d6326029-fefc-c726-4805-b208dab29d96/WALs/jenkins-hbase20.apache.org,38393,1689189460532/jenkins-hbase20.apache.org%2C38393%2C1689189460532.1689189461569 2023-07-12 19:17:41,663 INFO [RS:2;jenkins-hbase20:46241] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/d6326029-fefc-c726-4805-b208dab29d96/WALs/jenkins-hbase20.apache.org,46241,1689189460679/jenkins-hbase20.apache.org%2C46241%2C1689189460679.1689189461553 2023-07-12 19:17:41,666 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:33609/user/jenkins/test-data/d6326029-fefc-c726-4805-b208dab29d96/data/hbase/meta/1588230740/info 2023-07-12 19:17:41,667 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor 
true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-12 19:17:41,667 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 19:17:41,667 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-12 19:17:41,669 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:33609/user/jenkins/test-data/d6326029-fefc-c726-4805-b208dab29d96/data/hbase/meta/1588230740/rep_barrier 2023-07-12 19:17:41,669 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:33609/user/jenkins/test-data/d6326029-fefc-c726-4805-b208dab29d96/data/hbase/meta/1588230740/rep_barrier 2023-07-12 19:17:41,669 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-12 19:17:41,670 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 19:17:41,670 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-12 19:17:41,671 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:33609/user/jenkins/test-data/d6326029-fefc-c726-4805-b208dab29d96/data/hbase/meta/1588230740/table 2023-07-12 19:17:41,671 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:33609/user/jenkins/test-data/d6326029-fefc-c726-4805-b208dab29d96/data/hbase/meta/1588230740/table 2023-07-12 19:17:41,671 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window 
org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-12 19:17:41,672 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 19:17:41,675 DEBUG [RS:2;jenkins-hbase20:46241] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:42583,DS-b88c3966-5697-4dc1-92ea-862ca1b952ab,DISK], DatanodeInfoWithStorage[127.0.0.1:42401,DS-de90d26e-4113-4189-9ccb-c295550dc9c5,DISK], DatanodeInfoWithStorage[127.0.0.1:43191,DS-5ffdd0de-8eb1-4fb9-b872-73b0848bc5e2,DISK]] 2023-07-12 19:17:41,681 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:33609/user/jenkins/test-data/d6326029-fefc-c726-4805-b208dab29d96/data/hbase/meta/1588230740 2023-07-12 19:17:41,682 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:33609/user/jenkins/test-data/d6326029-fefc-c726-4805-b208dab29d96/data/hbase/meta/1588230740 2023-07-12 19:17:41,686 DEBUG [RS:0;jenkins-hbase20:38393] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:42401,DS-de90d26e-4113-4189-9ccb-c295550dc9c5,DISK], DatanodeInfoWithStorage[127.0.0.1:43191,DS-5ffdd0de-8eb1-4fb9-b872-73b0848bc5e2,DISK], DatanodeInfoWithStorage[127.0.0.1:42583,DS-b88c3966-5697-4dc1-92ea-862ca1b952ab,DISK]] 2023-07-12 19:17:41,687 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 
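The CompactionConfiguration dumps above for the info, rep_barrier and table families carry the usual ratio-based selection parameters (minFilesToCompact:3, maxFilesToCompact:10, ratio 1.2). A toy illustration of the ratio test those numbers feed into; this is a simplification of the real selection logic, not a quote of it:

    // Toy version of the ratio check behind "ratio 1.200000": a store file
    // stays in a minor-compaction selection only if it is not too large
    // relative to the files it would be compacted with.
    public class CompactionRatioSketch {
      public static void main(String[] args) {
        double ratio = 1.2;
        long[] fileSizes = {12_000_000L, 5_000_000L, 4_000_000L}; // hypothetical sizes
        long sumOfOthers = fileSizes[1] + fileSizes[2];           // 9,000,000
        boolean keepLargest = fileSizes[0] <= ratio * sumOfOthers;
        // 12,000,000 <= 1.2 * 9,000,000 = 10,800,000 -> false: the big file is
        // left out of this selection.
        System.out.println(keepLargest);
      }
    }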
2023-07-12 19:17:41,689 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-12 19:17:41,691 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9428454560, jitterRate=-0.12190674245357513}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-12 19:17:41,691 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-12 19:17:41,691 INFO [RS:1;jenkins-hbase20:33397] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/d6326029-fefc-c726-4805-b208dab29d96/WALs/jenkins-hbase20.apache.org,33397,1689189460611/jenkins-hbase20.apache.org%2C33397%2C1689189460611.1689189461570 2023-07-12 19:17:41,692 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:meta,,1.1588230740, pid=3, masterSystemTime=1689189461534 2023-07-12 19:17:41,695 DEBUG [RS:1;jenkins-hbase20:33397] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:42401,DS-de90d26e-4113-4189-9ccb-c295550dc9c5,DISK], DatanodeInfoWithStorage[127.0.0.1:43191,DS-5ffdd0de-8eb1-4fb9-b872-73b0848bc5e2,DISK], DatanodeInfoWithStorage[127.0.0.1:42583,DS-b88c3966-5697-4dc1-92ea-862ca1b952ab,DISK]] 2023-07-12 19:17:41,705 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:meta,,1.1588230740 2023-07-12 19:17:41,705 WARN [ReadOnlyZKClient-127.0.0.1:50438@0x58259eb7] client.ZKConnectionRegistry(168): Meta region is in state OPENING 2023-07-12 19:17:41,706 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 2023-07-12 19:17:41,707 INFO [PEWorker-5] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase20.apache.org,46241,1689189460679, state=OPEN 2023-07-12 19:17:41,710 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase20.apache.org,33451,1689189460437] master.HMaster(2148): Client=null/null create 'hbase:rsgroup', {TABLE_ATTRIBUTES => {coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|', METADATA => {'SPLIT_POLICY' => 'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy'}}}, {NAME => 'm', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-12 19:17:41,712 DEBUG [Listener at localhost.localdomain/40989-EventThread] zookeeper.ZKWatcher(600): master:33451-0x100829e263a0000, quorum=127.0.0.1:50438, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-12 19:17:41,712 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-12 19:17:41,713 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase20.apache.org,33451,1689189460437] procedure2.ProcedureExecutor(1029): Stored pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; 
CreateTableProcedure table=hbase:rsgroup 2023-07-12 19:17:41,715 DEBUG [PEWorker-1] procedure2.ProcedureExecutor(1400): LOCK_EVENT_WAIT pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:rsgroup 2023-07-12 19:17:41,717 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=3, resume processing ppid=2 2023-07-12 19:17:41,717 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=3, ppid=2, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase20.apache.org,46241,1689189460679 in 375 msec 2023-07-12 19:17:41,718 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=2, resume processing ppid=1 2023-07-12 19:17:41,718 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=2, ppid=1, state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN in 542 msec 2023-07-12 19:17:41,719 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=1, state=SUCCESS; InitMetaProcedure table=hbase:meta in 653 msec 2023-07-12 19:17:41,719 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(953): Wait for region servers to report in: status=null, state=RUNNING, startTime=1689189461719, completionTime=-1 2023-07-12 19:17:41,720 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.ServerManager(821): Finished waiting on RegionServer count=3; waited=0ms, expected min=3 server(s), max=3 server(s), master is running 2023-07-12 19:17:41,720 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] assignment.AssignmentManager(1517): Joining cluster... 2023-07-12 19:17:41,726 INFO [master/jenkins-hbase20:0:becomeActiveMaster] assignment.AssignmentManager(1529): Number of RegionServers=3 2023-07-12 19:17:41,726 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$RegionInTransitionChore; timeout=60000, timestamp=1689189521726 2023-07-12 19:17:41,726 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$DeadServerMetricRegionChore; timeout=120000, timestamp=1689189581726 2023-07-12 19:17:41,726 INFO [master/jenkins-hbase20:0:becomeActiveMaster] assignment.AssignmentManager(1536): Joined the cluster in 6 msec 2023-07-12 19:17:41,733 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,33451,1689189460437-ClusterStatusChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-12 19:17:41,733 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,33451,1689189460437-BalancerChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-12 19:17:41,733 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,33451,1689189460437-RegionNormalizerChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-12 19:17:41,733 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=CatalogJanitor-jenkins-hbase20:33451, period=300000, unit=MILLISECONDS is enabled. 2023-07-12 19:17:41,733 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HbckChore-, period=3600000, unit=MILLISECONDS is enabled. 
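The "Chore ScheduledChore name=... is enabled." lines above are printed when the master schedules its periodic chores on its ChoreService. As a point of reference only, here is a minimal, hypothetical sketch of that pattern using the public ScheduledChore/ChoreService classes; the chore name, period, and body below are made up and this is not the master's own code.

import org.apache.hadoop.hbase.ChoreService;
import org.apache.hadoop.hbase.ScheduledChore;
import org.apache.hadoop.hbase.Stoppable;

public class ExampleChore extends ScheduledChore {
  ExampleChore(Stoppable stopper) {
    super("ExampleChore", stopper, 300_000); // name, stopper, period in ms (300000 ms as in the BalancerChore line)
  }

  @Override
  protected void chore() {
    // periodic work goes here; the master's chores do their balancing/janitor work in this method
  }

  public static void main(String[] args) throws Exception {
    Stoppable stopper = new Stoppable() {
      private volatile boolean stopped;
      @Override public void stop(String why) { stopped = true; }
      @Override public boolean isStopped() { return stopped; }
    };
    ChoreService service = new ChoreService("example");
    // ChoreService logs a "Chore ScheduledChore name=... is enabled." line, as seen above, when a chore is scheduled
    service.scheduleChore(new ExampleChore(stopper));
    Thread.sleep(1_000);
    service.shutdown();
  }
}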
2023-07-12 19:17:41,733 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.TableNamespaceManager(92): Namespace table not found. Creating... 2023-07-12 19:17:41,733 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(2148): Client=null/null create 'hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-07-12 19:17:41,733 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_PRE_OPERATION 2023-07-12 19:17:41,734 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=5, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:namespace 2023-07-12 19:17:41,736 DEBUG [master/jenkins-hbase20:0.Chore.1] janitor.CatalogJanitor(175): 2023-07-12 19:17:41,736 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-12 19:17:41,737 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_PRE_OPERATION 2023-07-12 19:17:41,738 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:33609/user/jenkins/test-data/d6326029-fefc-c726-4805-b208dab29d96/.tmp/data/hbase/rsgroup/cbc17255d82e0ee87a232158f33f4740 2023-07-12 19:17:41,738 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-12 19:17:41,739 DEBUG [HFileArchiver-1] backup.HFileArchiver(153): Directory hdfs://localhost.localdomain:33609/user/jenkins/test-data/d6326029-fefc-c726-4805-b208dab29d96/.tmp/data/hbase/rsgroup/cbc17255d82e0ee87a232158f33f4740 empty. 2023-07-12 19:17:41,740 DEBUG [HFileArchiver-1] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:33609/user/jenkins/test-data/d6326029-fefc-c726-4805-b208dab29d96/.tmp/data/hbase/rsgroup/cbc17255d82e0ee87a232158f33f4740 2023-07-12 19:17:41,740 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived hbase:rsgroup regions 2023-07-12 19:17:41,740 DEBUG [HFileArchiver-6] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:33609/user/jenkins/test-data/d6326029-fefc-c726-4805-b208dab29d96/.tmp/data/hbase/namespace/937a51bce1914656b47d8675dd63a3ef 2023-07-12 19:17:41,743 DEBUG [HFileArchiver-6] backup.HFileArchiver(153): Directory hdfs://localhost.localdomain:33609/user/jenkins/test-data/d6326029-fefc-c726-4805-b208dab29d96/.tmp/data/hbase/namespace/937a51bce1914656b47d8675dd63a3ef empty. 
2023-07-12 19:17:41,744 DEBUG [HFileArchiver-6] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:33609/user/jenkins/test-data/d6326029-fefc-c726-4805-b208dab29d96/.tmp/data/hbase/namespace/937a51bce1914656b47d8675dd63a3ef 2023-07-12 19:17:41,744 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(328): Archived hbase:namespace regions 2023-07-12 19:17:41,773 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost.localdomain:33609/user/jenkins/test-data/d6326029-fefc-c726-4805-b208dab29d96/.tmp/data/hbase/namespace/.tabledesc/.tableinfo.0000000001 2023-07-12 19:17:41,773 DEBUG [PEWorker-4] util.FSTableDescriptors(570): Wrote into hdfs://localhost.localdomain:33609/user/jenkins/test-data/d6326029-fefc-c726-4805-b208dab29d96/.tmp/data/hbase/rsgroup/.tabledesc/.tableinfo.0000000001 2023-07-12 19:17:41,774 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(7675): creating {ENCODED => 937a51bce1914656b47d8675dd63a3ef, NAME => 'hbase:namespace,,1689189461733.937a51bce1914656b47d8675dd63a3ef.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:33609/user/jenkins/test-data/d6326029-fefc-c726-4805-b208dab29d96/.tmp 2023-07-12 19:17:41,779 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(7675): creating {ENCODED => cbc17255d82e0ee87a232158f33f4740, NAME => 'hbase:rsgroup,,1689189461710.cbc17255d82e0ee87a232158f33f4740.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:rsgroup', {TABLE_ATTRIBUTES => {coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|', METADATA => {'SPLIT_POLICY' => 'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy'}}}, {NAME => 'm', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:33609/user/jenkins/test-data/d6326029-fefc-c726-4805-b208dab29d96/.tmp 2023-07-12 19:17:41,803 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1689189461733.937a51bce1914656b47d8675dd63a3ef.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 19:17:41,803 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1689189461710.cbc17255d82e0ee87a232158f33f4740.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 19:17:41,803 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1604): Closing 937a51bce1914656b47d8675dd63a3ef, disabling compactions & flushes 2023-07-12 19:17:41,803 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1604): Closing cbc17255d82e0ee87a232158f33f4740, disabling compactions & flushes 2023-07-12 19:17:41,803 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1689189461710.cbc17255d82e0ee87a232158f33f4740. 
2023-07-12 19:17:41,803 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1689189461710.cbc17255d82e0ee87a232158f33f4740. 2023-07-12 19:17:41,803 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1689189461733.937a51bce1914656b47d8675dd63a3ef. 2023-07-12 19:17:41,803 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1689189461710.cbc17255d82e0ee87a232158f33f4740. after waiting 0 ms 2023-07-12 19:17:41,803 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1689189461733.937a51bce1914656b47d8675dd63a3ef. 2023-07-12 19:17:41,803 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1689189461710.cbc17255d82e0ee87a232158f33f4740. 2023-07-12 19:17:41,803 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1689189461733.937a51bce1914656b47d8675dd63a3ef. after waiting 0 ms 2023-07-12 19:17:41,803 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1689189461710.cbc17255d82e0ee87a232158f33f4740. 2023-07-12 19:17:41,804 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1689189461733.937a51bce1914656b47d8675dd63a3ef. 2023-07-12 19:17:41,804 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1558): Region close journal for cbc17255d82e0ee87a232158f33f4740: 2023-07-12 19:17:41,804 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1838): Closed hbase:namespace,,1689189461733.937a51bce1914656b47d8675dd63a3ef. 2023-07-12 19:17:41,804 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1558): Region close journal for 937a51bce1914656b47d8675dd63a3ef: 2023-07-12 19:17:41,806 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_ADD_TO_META 2023-07-12 19:17:41,807 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ADD_TO_META 2023-07-12 19:17:41,807 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:rsgroup,,1689189461710.cbc17255d82e0ee87a232158f33f4740.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689189461807"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689189461807"}]},"ts":"1689189461807"} 2023-07-12 19:17:41,809 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:namespace,,1689189461733.937a51bce1914656b47d8675dd63a3ef.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689189461809"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689189461809"}]},"ts":"1689189461809"} 2023-07-12 19:17:41,811 INFO [PEWorker-4] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 
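The descriptor dumped above for hbase:rsgroup (a single 'm' family with VERSIONS=1 and BLOCKSIZE=65536, the MultiRowMutationEndpoint coprocessor, and DisabledRegionSplitPolicy) maps roughly onto the HBase 2.x client API as sketched below. This creates a look-alike user table ('rsgroup_like' is a made-up name), not hbase:rsgroup itself, and assumes setRegionSplitPolicyClassName is the builder method behind the SPLIT_POLICY metadata shown in the log.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.TableDescriptor;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
import org.apache.hadoop.hbase.util.Bytes;

public class CreateRsGroupLikeTable {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create(); // assumes hbase-site.xml on the classpath
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Admin admin = conn.getAdmin()) {
      TableDescriptor td = TableDescriptorBuilder
          .newBuilder(TableName.valueOf("rsgroup_like")) // hypothetical table in the default namespace
          .setCoprocessor("org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint")
          .setRegionSplitPolicyClassName(
              "org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy")
          .setColumnFamily(ColumnFamilyDescriptorBuilder.newBuilder(Bytes.toBytes("m"))
              .setMaxVersions(1)     // VERSIONS => '1'
              .setBlocksize(65536)   // BLOCKSIZE => '65536'
              .build())
          .build();
      admin.createTable(td);
    }
  }
}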
2023-07-12 19:17:41,812 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-12 19:17:41,812 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:rsgroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689189461812"}]},"ts":"1689189461812"} 2023-07-12 19:17:41,812 INFO [PEWorker-1] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-12 19:17:41,814 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-12 19:17:41,814 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689189461814"}]},"ts":"1689189461814"} 2023-07-12 19:17:41,814 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=hbase:rsgroup, state=ENABLING in hbase:meta 2023-07-12 19:17:41,817 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase20.apache.org=0} racks are {/default-rack=0} 2023-07-12 19:17:41,818 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-12 19:17:41,818 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-12 19:17:41,818 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-12 19:17:41,818 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-12 19:17:41,818 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=6, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:rsgroup, region=cbc17255d82e0ee87a232158f33f4740, ASSIGN}] 2023-07-12 19:17:41,821 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=6, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:rsgroup, region=cbc17255d82e0ee87a232158f33f4740, ASSIGN 2023-07-12 19:17:41,822 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLING in hbase:meta 2023-07-12 19:17:41,823 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=6, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:rsgroup, region=cbc17255d82e0ee87a232158f33f4740, ASSIGN; state=OFFLINE, location=jenkins-hbase20.apache.org,33397,1689189460611; forceNewPlan=false, retain=false 2023-07-12 19:17:41,834 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase20.apache.org=0} racks are {/default-rack=0} 2023-07-12 19:17:41,834 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-12 19:17:41,834 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-12 19:17:41,834 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-12 19:17:41,834 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-12 19:17:41,835 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=7, 
ppid=5, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=937a51bce1914656b47d8675dd63a3ef, ASSIGN}] 2023-07-12 19:17:41,839 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=7, ppid=5, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=937a51bce1914656b47d8675dd63a3ef, ASSIGN 2023-07-12 19:17:41,841 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=7, ppid=5, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:namespace, region=937a51bce1914656b47d8675dd63a3ef, ASSIGN; state=OFFLINE, location=jenkins-hbase20.apache.org,33397,1689189460611; forceNewPlan=false, retain=false 2023-07-12 19:17:41,841 INFO [jenkins-hbase20:33451] balancer.BaseLoadBalancer(1545): Reassigned 2 regions. 2 retained the pre-restart assignment. 2023-07-12 19:17:41,843 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=6 updating hbase:meta row=cbc17255d82e0ee87a232158f33f4740, regionState=OPENING, regionLocation=jenkins-hbase20.apache.org,33397,1689189460611 2023-07-12 19:17:41,844 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:rsgroup,,1689189461710.cbc17255d82e0ee87a232158f33f4740.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689189461843"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689189461843"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689189461843"}]},"ts":"1689189461843"} 2023-07-12 19:17:41,845 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=7 updating hbase:meta row=937a51bce1914656b47d8675dd63a3ef, regionState=OPENING, regionLocation=jenkins-hbase20.apache.org,33397,1689189460611 2023-07-12 19:17:41,845 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:namespace,,1689189461733.937a51bce1914656b47d8675dd63a3ef.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689189461845"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689189461845"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689189461845"}]},"ts":"1689189461845"} 2023-07-12 19:17:41,846 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=8, ppid=6, state=RUNNABLE; OpenRegionProcedure cbc17255d82e0ee87a232158f33f4740, server=jenkins-hbase20.apache.org,33397,1689189460611}] 2023-07-12 19:17:41,846 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=9, ppid=7, state=RUNNABLE; OpenRegionProcedure 937a51bce1914656b47d8675dd63a3ef, server=jenkins-hbase20.apache.org,33397,1689189460611}] 2023-07-12 19:17:41,999 DEBUG [RSProcedureDispatcher-pool-1] master.ServerManager(712): New admin connection to jenkins-hbase20.apache.org,33397,1689189460611 2023-07-12 19:17:41,999 DEBUG [RSProcedureDispatcher-pool-1] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-12 19:17:42,001 INFO [RS-EventLoopGroup-14-2] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:34296, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-12 19:17:42,005 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1689189461733.937a51bce1914656b47d8675dd63a3ef. 
2023-07-12 19:17:42,005 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 937a51bce1914656b47d8675dd63a3ef, NAME => 'hbase:namespace,,1689189461733.937a51bce1914656b47d8675dd63a3ef.', STARTKEY => '', ENDKEY => ''} 2023-07-12 19:17:42,005 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace 937a51bce1914656b47d8675dd63a3ef 2023-07-12 19:17:42,005 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1689189461733.937a51bce1914656b47d8675dd63a3ef.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 19:17:42,005 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7894): checking encryption for 937a51bce1914656b47d8675dd63a3ef 2023-07-12 19:17:42,006 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7897): checking classloading for 937a51bce1914656b47d8675dd63a3ef 2023-07-12 19:17:42,007 INFO [StoreOpener-937a51bce1914656b47d8675dd63a3ef-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 937a51bce1914656b47d8675dd63a3ef 2023-07-12 19:17:42,008 DEBUG [StoreOpener-937a51bce1914656b47d8675dd63a3ef-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:33609/user/jenkins/test-data/d6326029-fefc-c726-4805-b208dab29d96/data/hbase/namespace/937a51bce1914656b47d8675dd63a3ef/info 2023-07-12 19:17:42,008 DEBUG [StoreOpener-937a51bce1914656b47d8675dd63a3ef-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:33609/user/jenkins/test-data/d6326029-fefc-c726-4805-b208dab29d96/data/hbase/namespace/937a51bce1914656b47d8675dd63a3ef/info 2023-07-12 19:17:42,009 INFO [StoreOpener-937a51bce1914656b47d8675dd63a3ef-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 937a51bce1914656b47d8675dd63a3ef columnFamilyName info 2023-07-12 19:17:42,009 INFO [StoreOpener-937a51bce1914656b47d8675dd63a3ef-1] regionserver.HStore(310): Store=937a51bce1914656b47d8675dd63a3ef/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 19:17:42,010 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:33609/user/jenkins/test-data/d6326029-fefc-c726-4805-b208dab29d96/data/hbase/namespace/937a51bce1914656b47d8675dd63a3ef 2023-07-12 19:17:42,010 DEBUG 
[RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:33609/user/jenkins/test-data/d6326029-fefc-c726-4805-b208dab29d96/data/hbase/namespace/937a51bce1914656b47d8675dd63a3ef 2023-07-12 19:17:42,013 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1055): writing seq id for 937a51bce1914656b47d8675dd63a3ef 2023-07-12 19:17:42,014 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:33609/user/jenkins/test-data/d6326029-fefc-c726-4805-b208dab29d96/data/hbase/namespace/937a51bce1914656b47d8675dd63a3ef/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-12 19:17:42,015 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1072): Opened 937a51bce1914656b47d8675dd63a3ef; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10971373760, jitterRate=0.021788805723190308}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-12 19:17:42,015 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(965): Region open journal for 937a51bce1914656b47d8675dd63a3ef: 2023-07-12 19:17:42,016 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:namespace,,1689189461733.937a51bce1914656b47d8675dd63a3ef., pid=9, masterSystemTime=1689189461999 2023-07-12 19:17:42,020 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:namespace,,1689189461733.937a51bce1914656b47d8675dd63a3ef. 2023-07-12 19:17:42,021 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1689189461733.937a51bce1914656b47d8675dd63a3ef. 2023-07-12 19:17:42,021 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(130): Open hbase:rsgroup,,1689189461710.cbc17255d82e0ee87a232158f33f4740. 
2023-07-12 19:17:42,022 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => cbc17255d82e0ee87a232158f33f4740, NAME => 'hbase:rsgroup,,1689189461710.cbc17255d82e0ee87a232158f33f4740.', STARTKEY => '', ENDKEY => ''} 2023-07-12 19:17:42,022 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=7 updating hbase:meta row=937a51bce1914656b47d8675dd63a3ef, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase20.apache.org,33397,1689189460611 2023-07-12 19:17:42,022 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-12 19:17:42,022 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1689189461733.937a51bce1914656b47d8675dd63a3ef.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689189462022"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689189462022"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689189462022"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689189462022"}]},"ts":"1689189462022"} 2023-07-12 19:17:42,022 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:rsgroup,,1689189461710.cbc17255d82e0ee87a232158f33f4740. service=MultiRowMutationService 2023-07-12 19:17:42,022 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:rsgroup successfully. 
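The Put entries above show the catalog columns the master writes for each region under the 'info' family of hbase:meta (regioninfo, sn/server, serverstartcode, state, seqnumDuringOpen). A client can read them back with an ordinary Get; the sketch below assumes a reachable cluster and hard-codes the namespace region's row key from the log purely for illustration (in a real test it would come from RegionInfo#getRegionName()).

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HConstants;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.RegionInfo;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class ReadMetaRow {
  public static void main(String[] args) throws Exception {
    // Row key copied from the Put shown above, for illustration only.
    byte[] row = Bytes.toBytes("hbase:namespace,,1689189461733.937a51bce1914656b47d8675dd63a3ef.");
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
         Table meta = conn.getTable(TableName.META_TABLE_NAME)) {
      Result r = meta.get(new Get(row).addFamily(HConstants.CATALOG_FAMILY));
      RegionInfo ri = RegionInfo.parseFrom(
          r.getValue(HConstants.CATALOG_FAMILY, HConstants.REGIONINFO_QUALIFIER)); // info:regioninfo
      String state = Bytes.toString(
          r.getValue(HConstants.CATALOG_FAMILY, HConstants.STATE_QUALIFIER));      // info:state, e.g. OPEN
      System.out.println(ri.getRegionNameAsString() + " state=" + state);
    }
  }
}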
2023-07-12 19:17:42,022 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table rsgroup cbc17255d82e0ee87a232158f33f4740 2023-07-12 19:17:42,022 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1689189461710.cbc17255d82e0ee87a232158f33f4740.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 19:17:42,022 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7894): checking encryption for cbc17255d82e0ee87a232158f33f4740 2023-07-12 19:17:42,022 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7897): checking classloading for cbc17255d82e0ee87a232158f33f4740 2023-07-12 19:17:42,024 INFO [StoreOpener-cbc17255d82e0ee87a232158f33f4740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family m of region cbc17255d82e0ee87a232158f33f4740 2023-07-12 19:17:42,025 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=9, resume processing ppid=7 2023-07-12 19:17:42,025 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=9, ppid=7, state=SUCCESS; OpenRegionProcedure 937a51bce1914656b47d8675dd63a3ef, server=jenkins-hbase20.apache.org,33397,1689189460611 in 178 msec 2023-07-12 19:17:42,025 DEBUG [StoreOpener-cbc17255d82e0ee87a232158f33f4740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:33609/user/jenkins/test-data/d6326029-fefc-c726-4805-b208dab29d96/data/hbase/rsgroup/cbc17255d82e0ee87a232158f33f4740/m 2023-07-12 19:17:42,026 DEBUG [StoreOpener-cbc17255d82e0ee87a232158f33f4740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:33609/user/jenkins/test-data/d6326029-fefc-c726-4805-b208dab29d96/data/hbase/rsgroup/cbc17255d82e0ee87a232158f33f4740/m 2023-07-12 19:17:42,026 INFO [StoreOpener-cbc17255d82e0ee87a232158f33f4740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region cbc17255d82e0ee87a232158f33f4740 columnFamilyName m 2023-07-12 19:17:42,026 INFO [StoreOpener-cbc17255d82e0ee87a232158f33f4740-1] regionserver.HStore(310): Store=cbc17255d82e0ee87a232158f33f4740/m, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 19:17:42,027 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=7, resume processing ppid=5 2023-07-12 19:17:42,027 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=7, ppid=5, state=SUCCESS; TransitRegionStateProcedure 
table=hbase:namespace, region=937a51bce1914656b47d8675dd63a3ef, ASSIGN in 190 msec 2023-07-12 19:17:42,027 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:33609/user/jenkins/test-data/d6326029-fefc-c726-4805-b208dab29d96/data/hbase/rsgroup/cbc17255d82e0ee87a232158f33f4740 2023-07-12 19:17:42,028 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-12 19:17:42,028 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689189462028"}]},"ts":"1689189462028"} 2023-07-12 19:17:42,028 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:33609/user/jenkins/test-data/d6326029-fefc-c726-4805-b208dab29d96/data/hbase/rsgroup/cbc17255d82e0ee87a232158f33f4740 2023-07-12 19:17:42,029 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLED in hbase:meta 2023-07-12 19:17:42,030 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1055): writing seq id for cbc17255d82e0ee87a232158f33f4740 2023-07-12 19:17:42,031 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_POST_OPERATION 2023-07-12 19:17:42,033 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:33609/user/jenkins/test-data/d6326029-fefc-c726-4805-b208dab29d96/data/hbase/rsgroup/cbc17255d82e0ee87a232158f33f4740/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-12 19:17:42,033 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=5, state=SUCCESS; CreateTableProcedure table=hbase:namespace in 298 msec 2023-07-12 19:17:42,033 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1072): Opened cbc17255d82e0ee87a232158f33f4740; next sequenceid=2; org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy@40900177, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-12 19:17:42,033 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(965): Region open journal for cbc17255d82e0ee87a232158f33f4740: 2023-07-12 19:17:42,034 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:rsgroup,,1689189461710.cbc17255d82e0ee87a232158f33f4740., pid=8, masterSystemTime=1689189461999 2023-07-12 19:17:42,035 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:33451-0x100829e263a0000, quorum=127.0.0.1:50438, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/namespace 2023-07-12 19:17:42,035 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:rsgroup,,1689189461710.cbc17255d82e0ee87a232158f33f4740. 2023-07-12 19:17:42,035 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(158): Opened hbase:rsgroup,,1689189461710.cbc17255d82e0ee87a232158f33f4740. 
2023-07-12 19:17:42,036 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=6 updating hbase:meta row=cbc17255d82e0ee87a232158f33f4740, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase20.apache.org,33397,1689189460611 2023-07-12 19:17:42,036 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:rsgroup,,1689189461710.cbc17255d82e0ee87a232158f33f4740.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689189462036"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689189462036"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689189462036"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689189462036"}]},"ts":"1689189462036"} 2023-07-12 19:17:42,041 DEBUG [Listener at localhost.localdomain/40989-EventThread] zookeeper.ZKWatcher(600): master:33451-0x100829e263a0000, quorum=127.0.0.1:50438, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/namespace 2023-07-12 19:17:42,041 DEBUG [Listener at localhost.localdomain/40989-EventThread] zookeeper.ZKWatcher(600): master:33451-0x100829e263a0000, quorum=127.0.0.1:50438, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-12 19:17:42,043 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=8, resume processing ppid=6 2023-07-12 19:17:42,043 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=8, ppid=6, state=SUCCESS; OpenRegionProcedure cbc17255d82e0ee87a232158f33f4740, server=jenkins-hbase20.apache.org,33397,1689189460611 in 191 msec 2023-07-12 19:17:42,046 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-12 19:17:42,047 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=6, resume processing ppid=4 2023-07-12 19:17:42,047 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=6, ppid=4, state=SUCCESS; TransitRegionStateProcedure table=hbase:rsgroup, region=cbc17255d82e0ee87a232158f33f4740, ASSIGN in 225 msec 2023-07-12 19:17:42,047 INFO [RS-EventLoopGroup-14-3] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:34308, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-12 19:17:42,048 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-12 19:17:42,048 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:rsgroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689189462048"}]},"ts":"1689189462048"} 2023-07-12 19:17:42,050 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=10, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=default 2023-07-12 19:17:42,051 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=hbase:rsgroup, state=ENABLED in hbase:meta 2023-07-12 19:17:42,054 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_POST_OPERATION 2023-07-12 19:17:42,055 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=4, state=SUCCESS; CreateTableProcedure 
table=hbase:rsgroup in 344 msec 2023-07-12 19:17:42,066 DEBUG [Listener at localhost.localdomain/40989-EventThread] zookeeper.ZKWatcher(600): master:33451-0x100829e263a0000, quorum=127.0.0.1:50438, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-12 19:17:42,069 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=10, state=SUCCESS; CreateNamespaceProcedure, namespace=default in 19 msec 2023-07-12 19:17:42,072 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=11, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=hbase 2023-07-12 19:17:42,083 DEBUG [Listener at localhost.localdomain/40989-EventThread] zookeeper.ZKWatcher(600): master:33451-0x100829e263a0000, quorum=127.0.0.1:50438, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-12 19:17:42,094 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=11, state=SUCCESS; CreateNamespaceProcedure, namespace=hbase in 15 msec 2023-07-12 19:17:42,108 DEBUG [Listener at localhost.localdomain/40989-EventThread] zookeeper.ZKWatcher(600): master:33451-0x100829e263a0000, quorum=127.0.0.1:50438, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/default 2023-07-12 19:17:42,110 DEBUG [Listener at localhost.localdomain/40989-EventThread] zookeeper.ZKWatcher(600): master:33451-0x100829e263a0000, quorum=127.0.0.1:50438, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/hbase 2023-07-12 19:17:42,110 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(1083): Master has completed initialization 1.360sec 2023-07-12 19:17:42,110 INFO [master/jenkins-hbase20:0:becomeActiveMaster] quotas.MasterQuotaManager(97): Quota support disabled 2023-07-12 19:17:42,110 INFO [master/jenkins-hbase20:0:becomeActiveMaster] slowlog.SlowLogMasterService(57): Slow/Large requests logging to system table hbase:slowlog is disabled. Quitting. 2023-07-12 19:17:42,110 INFO [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKWatcher(269): not a secure deployment, proceeding 2023-07-12 19:17:42,110 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,33451,1689189460437-ExpiredMobFileCleanerChore, period=86400, unit=SECONDS is enabled. 2023-07-12 19:17:42,110 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,33451,1689189460437-MobCompactionChore, period=604800, unit=SECONDS is enabled. 2023-07-12 19:17:42,118 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(1175): Balancer post startup initialization complete, took 0 seconds 2023-07-12 19:17:42,121 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase20.apache.org,33451,1689189460437] rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(840): RSGroup table=hbase:rsgroup is online, refreshing cached information 2023-07-12 19:17:42,121 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase20.apache.org,33451,1689189460437] rsgroup.RSGroupInfoManagerImpl(534): Refreshing in Online mode. 
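Once the hbase:rsgroup table is online and GroupBasedLoadBalancer reports ready, group metadata can be queried through the rsgroup admin endpoint. A minimal sketch follows, assuming the hbase-rsgroup coprocessor is installed on the master as it is in this test; it issues the same ListRSGroupInfos call that shows up as "list rsgroup" in the RPC log further down.

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

public class ListGroups {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create())) {
      RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
      // Each RSGroupInfo carries the group's name plus its member servers and tables.
      for (RSGroupInfo group : rsGroupAdmin.listRSGroups()) {
        System.out.println(group.getName() + " servers=" + group.getServers());
      }
    }
  }
}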
2023-07-12 19:17:42,125 DEBUG [Listener at localhost.localdomain/40989-EventThread] zookeeper.ZKWatcher(600): master:33451-0x100829e263a0000, quorum=127.0.0.1:50438, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-12 19:17:42,125 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase20.apache.org,33451,1689189460437] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 19:17:42,126 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase20.apache.org,33451,1689189460437] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-12 19:17:42,126 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase20.apache.org,33451,1689189460437] rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(823): GroupBasedLoadBalancer is now online 2023-07-12 19:17:42,152 DEBUG [Listener at localhost.localdomain/40989] zookeeper.ReadOnlyZKClient(139): Connect 0x7a4678cd to 127.0.0.1:50438 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-12 19:17:42,157 DEBUG [Listener at localhost.localdomain/40989] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@25ea19ad, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-12 19:17:42,163 DEBUG [hconnection-0xbd409e1-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-12 19:17:42,168 INFO [RS-EventLoopGroup-15-3] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:60714, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-12 19:17:42,170 INFO [Listener at localhost.localdomain/40989] hbase.HBaseTestingUtility(1145): Minicluster is up; activeMaster=jenkins-hbase20.apache.org,33451,1689189460437 2023-07-12 19:17:42,170 INFO [Listener at localhost.localdomain/40989] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 19:17:42,177 DEBUG [Listener at localhost.localdomain/40989] ipc.RpcConnection(124): Using SIMPLE authentication for service=MasterService, sasl=false 2023-07-12 19:17:42,179 INFO [RS-EventLoopGroup-12-2] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:45826, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=MasterService 2023-07-12 19:17:42,181 DEBUG [Listener at localhost.localdomain/40989-EventThread] zookeeper.ZKWatcher(600): master:33451-0x100829e263a0000, quorum=127.0.0.1:50438, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/balancer 2023-07-12 19:17:42,181 DEBUG [Listener at localhost.localdomain/40989-EventThread] zookeeper.ZKWatcher(600): master:33451-0x100829e263a0000, quorum=127.0.0.1:50438, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-12 19:17:42,185 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33451] master.MasterRpcServices(492): Client=jenkins//148.251.75.209 set balanceSwitch=false 2023-07-12 19:17:42,186 DEBUG [Listener at localhost.localdomain/40989] zookeeper.ReadOnlyZKClient(139): Connect 0x48b40407 to 127.0.0.1:50438 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-12 
19:17:42,198 DEBUG [Listener at localhost.localdomain/40989] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@630487f1, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-12 19:17:42,198 INFO [Listener at localhost.localdomain/40989] zookeeper.RecoverableZooKeeper(93): Process identifier=VerifyingRSGroupAdminClient connecting to ZooKeeper ensemble=127.0.0.1:50438 2023-07-12 19:17:42,210 DEBUG [Listener at localhost.localdomain/40989-EventThread] zookeeper.ZKWatcher(600): VerifyingRSGroupAdminClient0x0, quorum=127.0.0.1:50438, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-12 19:17:42,211 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): VerifyingRSGroupAdminClient-0x100829e263a000a connected 2023-07-12 19:17:42,213 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-12 19:17:42,216 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 19:17:42,220 INFO [Listener at localhost.localdomain/40989] rsgroup.TestRSGroupsBase(152): Restoring servers: 1 2023-07-12 19:17:42,230 INFO [Listener at localhost.localdomain/40989] client.ConnectionUtils(127): regionserver/jenkins-hbase20:0 server-side Connection retries=45 2023-07-12 19:17:42,231 INFO [Listener at localhost.localdomain/40989] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-12 19:17:42,231 INFO [Listener at localhost.localdomain/40989] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-12 19:17:42,231 INFO [Listener at localhost.localdomain/40989] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-12 19:17:42,231 INFO [Listener at localhost.localdomain/40989] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-12 19:17:42,231 INFO [Listener at localhost.localdomain/40989] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-12 19:17:42,231 INFO [Listener at localhost.localdomain/40989] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-12 19:17:42,232 INFO [Listener at localhost.localdomain/40989] ipc.NettyRpcServer(120): Bind to /148.251.75.209:37939 2023-07-12 19:17:42,232 INFO [Listener at localhost.localdomain/40989] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-12 19:17:42,234 DEBUG [Listener at localhost.localdomain/40989] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-12 19:17:42,234 INFO [Listener at localhost.localdomain/40989] 
fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-12 19:17:42,236 INFO [Listener at localhost.localdomain/40989] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-12 19:17:42,237 INFO [Listener at localhost.localdomain/40989] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:37939 connecting to ZooKeeper ensemble=127.0.0.1:50438 2023-07-12 19:17:42,243 DEBUG [Listener at localhost.localdomain/40989] zookeeper.ZKUtil(162): regionserver:379390x0, quorum=127.0.0.1:50438, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-12 19:17:42,243 DEBUG [Listener at localhost.localdomain/40989-EventThread] zookeeper.ZKWatcher(600): regionserver:379390x0, quorum=127.0.0.1:50438, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-12 19:17:42,247 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:37939-0x100829e263a000b connected 2023-07-12 19:17:42,248 DEBUG [Listener at localhost.localdomain/40989] zookeeper.ZKUtil(162): regionserver:37939-0x100829e263a000b, quorum=127.0.0.1:50438, baseZNode=/hbase Set watcher on existing znode=/hbase/running 2023-07-12 19:17:42,248 DEBUG [Listener at localhost.localdomain/40989] zookeeper.ZKUtil(164): regionserver:37939-0x100829e263a000b, quorum=127.0.0.1:50438, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-12 19:17:42,253 DEBUG [Listener at localhost.localdomain/40989] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=37939 2023-07-12 19:17:42,255 DEBUG [Listener at localhost.localdomain/40989] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=37939 2023-07-12 19:17:42,270 DEBUG [Listener at localhost.localdomain/40989] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=37939 2023-07-12 19:17:42,274 DEBUG [Listener at localhost.localdomain/40989] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=37939 2023-07-12 19:17:42,275 DEBUG [Listener at localhost.localdomain/40989] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=37939 2023-07-12 19:17:42,277 INFO [Listener at localhost.localdomain/40989] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-12 19:17:42,277 INFO [Listener at localhost.localdomain/40989] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-12 19:17:42,278 INFO [Listener at localhost.localdomain/40989] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-12 19:17:42,278 INFO [Listener at localhost.localdomain/40989] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-12 19:17:42,279 INFO [Listener at localhost.localdomain/40989] http.HttpServer(886): Added filter static_user_filter 
(class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-12 19:17:42,279 INFO [Listener at localhost.localdomain/40989] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-12 19:17:42,279 INFO [Listener at localhost.localdomain/40989] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-12 19:17:42,280 INFO [Listener at localhost.localdomain/40989] http.HttpServer(1146): Jetty bound to port 38729 2023-07-12 19:17:42,280 INFO [Listener at localhost.localdomain/40989] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-12 19:17:42,291 INFO [Listener at localhost.localdomain/40989] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-12 19:17:42,291 INFO [Listener at localhost.localdomain/40989] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@41783fd8{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/ca399a11-a6b6-9528-4aae-916e50d2e9df/hadoop.log.dir/,AVAILABLE} 2023-07-12 19:17:42,292 INFO [Listener at localhost.localdomain/40989] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-12 19:17:42,292 INFO [Listener at localhost.localdomain/40989] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@3df20a58{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,AVAILABLE} 2023-07-12 19:17:42,299 INFO [Listener at localhost.localdomain/40989] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-12 19:17:42,300 INFO [Listener at localhost.localdomain/40989] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-12 19:17:42,300 INFO [Listener at localhost.localdomain/40989] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-12 19:17:42,301 INFO [Listener at localhost.localdomain/40989] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-12 19:17:42,307 INFO [Listener at localhost.localdomain/40989] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-12 19:17:42,308 INFO [Listener at localhost.localdomain/40989] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@7a7e30e8{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver/,AVAILABLE}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver} 2023-07-12 19:17:42,312 INFO [Listener at localhost.localdomain/40989] server.AbstractConnector(333): Started ServerConnector@1dd9ac5b{HTTP/1.1, (http/1.1)}{0.0.0.0:38729} 2023-07-12 19:17:42,312 INFO [Listener at localhost.localdomain/40989] server.Server(415): Started @44285ms 2023-07-12 19:17:42,319 INFO [RS:3;jenkins-hbase20:37939] regionserver.HRegionServer(951): ClusterId : 757dbf3b-f9a9-42a7-9302-a31bb86cea2b 2023-07-12 19:17:42,319 DEBUG [RS:3;jenkins-hbase20:37939] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-12 
19:17:42,321 DEBUG [RS:3;jenkins-hbase20:37939] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-12 19:17:42,321 DEBUG [RS:3;jenkins-hbase20:37939] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-12 19:17:42,323 DEBUG [RS:3;jenkins-hbase20:37939] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-12 19:17:42,330 DEBUG [RS:3;jenkins-hbase20:37939] zookeeper.ReadOnlyZKClient(139): Connect 0x5cc30725 to 127.0.0.1:50438 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-12 19:17:42,344 DEBUG [RS:3;jenkins-hbase20:37939] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@5204f2cd, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-12 19:17:42,344 DEBUG [RS:3;jenkins-hbase20:37939] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@563b6c55, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase20.apache.org/148.251.75.209:0 2023-07-12 19:17:42,355 DEBUG [RS:3;jenkins-hbase20:37939] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:3;jenkins-hbase20:37939 2023-07-12 19:17:42,355 INFO [RS:3;jenkins-hbase20:37939] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-12 19:17:42,356 INFO [RS:3;jenkins-hbase20:37939] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-12 19:17:42,356 DEBUG [RS:3;jenkins-hbase20:37939] regionserver.HRegionServer(1022): About to register with Master. 2023-07-12 19:17:42,356 INFO [RS:3;jenkins-hbase20:37939] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase20.apache.org,33451,1689189460437 with isa=jenkins-hbase20.apache.org/148.251.75.209:37939, startcode=1689189462230 2023-07-12 19:17:42,356 DEBUG [RS:3;jenkins-hbase20:37939] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-12 19:17:42,365 INFO [RS-EventLoopGroup-12-3] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:41991, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.10 (auth:SIMPLE), service=RegionServerStatusService 2023-07-12 19:17:42,366 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=33451] master.ServerManager(394): Registering regionserver=jenkins-hbase20.apache.org,37939,1689189462230 2023-07-12 19:17:42,366 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase20.apache.org,33451,1689189460437] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
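After ServerManager registers jenkins-hbase20.apache.org,37939 above, the default rsgroup is updated to four servers. The live-server view can be confirmed from a client as sketched below, under the assumption of a running cluster reachable via the local configuration; the class name is made up.

import java.util.EnumSet;
import org.apache.hadoop.hbase.ClusterMetrics;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.ServerName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class CountLiveServers {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
         Admin admin = conn.getAdmin()) {
      ClusterMetrics metrics =
          admin.getClusterMetrics(EnumSet.of(ClusterMetrics.Option.LIVE_SERVERS));
      // Prints each registered region server, one per line.
      for (ServerName sn : metrics.getLiveServerMetrics().keySet()) {
        System.out.println("live: " + sn);
      }
    }
  }
}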
2023-07-12 19:17:42,366 DEBUG [RS:3;jenkins-hbase20:37939] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost.localdomain:33609/user/jenkins/test-data/d6326029-fefc-c726-4805-b208dab29d96 2023-07-12 19:17:42,366 DEBUG [RS:3;jenkins-hbase20:37939] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost.localdomain:33609 2023-07-12 19:17:42,366 DEBUG [RS:3;jenkins-hbase20:37939] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=37363 2023-07-12 19:17:42,369 DEBUG [Listener at localhost.localdomain/40989-EventThread] zookeeper.ZKWatcher(600): regionserver:46241-0x100829e263a0003, quorum=127.0.0.1:50438, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-12 19:17:42,369 DEBUG [Listener at localhost.localdomain/40989-EventThread] zookeeper.ZKWatcher(600): regionserver:33397-0x100829e263a0002, quorum=127.0.0.1:50438, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-12 19:17:42,370 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase20.apache.org,33451,1689189460437] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 19:17:42,369 DEBUG [Listener at localhost.localdomain/40989-EventThread] zookeeper.ZKWatcher(600): master:33451-0x100829e263a0000, quorum=127.0.0.1:50438, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-12 19:17:42,369 DEBUG [Listener at localhost.localdomain/40989-EventThread] zookeeper.ZKWatcher(600): regionserver:38393-0x100829e263a0001, quorum=127.0.0.1:50438, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-12 19:17:42,370 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:33397-0x100829e263a0002, quorum=127.0.0.1:50438, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,38393,1689189460532 2023-07-12 19:17:42,370 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase20.apache.org,33451,1689189460437] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-12 19:17:42,370 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase20.apache.org,37939,1689189462230] 2023-07-12 19:17:42,371 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:33397-0x100829e263a0002, quorum=127.0.0.1:50438, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,37939,1689189462230 2023-07-12 19:17:42,371 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:33397-0x100829e263a0002, quorum=127.0.0.1:50438, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,33397,1689189460611 2023-07-12 19:17:42,371 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase20.apache.org,33451,1689189460437] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 4 2023-07-12 19:17:42,372 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:33397-0x100829e263a0002, quorum=127.0.0.1:50438, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,46241,1689189460679 2023-07-12 19:17:42,375 DEBUG 
[RS:3;jenkins-hbase20:37939] zookeeper.ZKUtil(162): regionserver:37939-0x100829e263a000b, quorum=127.0.0.1:50438, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,37939,1689189462230 2023-07-12 19:17:42,375 WARN [RS:3;jenkins-hbase20:37939] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-12 19:17:42,375 INFO [RS:3;jenkins-hbase20:37939] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-12 19:17:42,375 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:46241-0x100829e263a0003, quorum=127.0.0.1:50438, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,38393,1689189460532 2023-07-12 19:17:42,375 DEBUG [RS:3;jenkins-hbase20:37939] regionserver.HRegionServer(1948): logDir=hdfs://localhost.localdomain:33609/user/jenkins/test-data/d6326029-fefc-c726-4805-b208dab29d96/WALs/jenkins-hbase20.apache.org,37939,1689189462230 2023-07-12 19:17:42,375 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:38393-0x100829e263a0001, quorum=127.0.0.1:50438, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,38393,1689189460532 2023-07-12 19:17:42,375 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:46241-0x100829e263a0003, quorum=127.0.0.1:50438, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,37939,1689189462230 2023-07-12 19:17:42,375 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:38393-0x100829e263a0001, quorum=127.0.0.1:50438, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,37939,1689189462230 2023-07-12 19:17:42,376 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:46241-0x100829e263a0003, quorum=127.0.0.1:50438, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,33397,1689189460611 2023-07-12 19:17:42,376 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:38393-0x100829e263a0001, quorum=127.0.0.1:50438, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,33397,1689189460611 2023-07-12 19:17:42,376 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:46241-0x100829e263a0003, quorum=127.0.0.1:50438, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,46241,1689189460679 2023-07-12 19:17:42,376 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:38393-0x100829e263a0001, quorum=127.0.0.1:50438, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,46241,1689189460679 2023-07-12 19:17:42,386 DEBUG [RS:3;jenkins-hbase20:37939] zookeeper.ZKUtil(162): regionserver:37939-0x100829e263a000b, quorum=127.0.0.1:50438, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,38393,1689189460532 2023-07-12 19:17:42,386 DEBUG [RS:3;jenkins-hbase20:37939] zookeeper.ZKUtil(162): regionserver:37939-0x100829e263a000b, quorum=127.0.0.1:50438, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,37939,1689189462230 2023-07-12 19:17:42,387 DEBUG [RS:3;jenkins-hbase20:37939] zookeeper.ZKUtil(162): regionserver:37939-0x100829e263a000b, quorum=127.0.0.1:50438, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,33397,1689189460611 
2023-07-12 19:17:42,387 DEBUG [RS:3;jenkins-hbase20:37939] zookeeper.ZKUtil(162): regionserver:37939-0x100829e263a000b, quorum=127.0.0.1:50438, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,46241,1689189460679 2023-07-12 19:17:42,388 DEBUG [RS:3;jenkins-hbase20:37939] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-12 19:17:42,388 INFO [RS:3;jenkins-hbase20:37939] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-12 19:17:42,389 INFO [RS:3;jenkins-hbase20:37939] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-12 19:17:42,390 INFO [RS:3;jenkins-hbase20:37939] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-12 19:17:42,391 INFO [RS:3;jenkins-hbase20:37939] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-12 19:17:42,394 INFO [RS:3;jenkins-hbase20:37939] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-12 19:17:42,398 INFO [RS:3;jenkins-hbase20:37939] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-12 19:17:42,398 DEBUG [RS:3;jenkins-hbase20:37939] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-12 19:17:42,398 DEBUG [RS:3;jenkins-hbase20:37939] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-12 19:17:42,398 DEBUG [RS:3;jenkins-hbase20:37939] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-12 19:17:42,398 DEBUG [RS:3;jenkins-hbase20:37939] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-12 19:17:42,398 DEBUG [RS:3;jenkins-hbase20:37939] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-12 19:17:42,398 DEBUG [RS:3;jenkins-hbase20:37939] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase20:0, corePoolSize=2, maxPoolSize=2 2023-07-12 19:17:42,398 DEBUG [RS:3;jenkins-hbase20:37939] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-12 19:17:42,399 DEBUG [RS:3;jenkins-hbase20:37939] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-12 19:17:42,399 DEBUG [RS:3;jenkins-hbase20:37939] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-12 19:17:42,399 DEBUG [RS:3;jenkins-hbase20:37939] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-12 19:17:42,409 INFO [RS:3;jenkins-hbase20:37939] 
hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-12 19:17:42,409 INFO [RS:3;jenkins-hbase20:37939] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-12 19:17:42,411 INFO [RS:3;jenkins-hbase20:37939] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-12 19:17:42,428 INFO [RS:3;jenkins-hbase20:37939] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-12 19:17:42,428 INFO [RS:3;jenkins-hbase20:37939] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,37939,1689189462230-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-12 19:17:42,450 INFO [RS:3;jenkins-hbase20:37939] regionserver.Replication(203): jenkins-hbase20.apache.org,37939,1689189462230 started 2023-07-12 19:17:42,451 INFO [RS:3;jenkins-hbase20:37939] regionserver.HRegionServer(1637): Serving as jenkins-hbase20.apache.org,37939,1689189462230, RpcServer on jenkins-hbase20.apache.org/148.251.75.209:37939, sessionid=0x100829e263a000b 2023-07-12 19:17:42,451 DEBUG [RS:3;jenkins-hbase20:37939] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-12 19:17:42,451 DEBUG [RS:3;jenkins-hbase20:37939] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase20.apache.org,37939,1689189462230 2023-07-12 19:17:42,451 DEBUG [RS:3;jenkins-hbase20:37939] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase20.apache.org,37939,1689189462230' 2023-07-12 19:17:42,451 DEBUG [RS:3;jenkins-hbase20:37939] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-12 19:17:42,452 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//148.251.75.209 add rsgroup master 2023-07-12 19:17:42,453 DEBUG [RS:3;jenkins-hbase20:37939] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-12 19:17:42,454 DEBUG [RS:3;jenkins-hbase20:37939] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-12 19:17:42,454 DEBUG [RS:3;jenkins-hbase20:37939] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-12 19:17:42,454 DEBUG [RS:3;jenkins-hbase20:37939] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase20.apache.org,37939,1689189462230 2023-07-12 19:17:42,454 DEBUG [RS:3;jenkins-hbase20:37939] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase20.apache.org,37939,1689189462230' 2023-07-12 19:17:42,455 DEBUG [RS:3;jenkins-hbase20:37939] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-12 19:17:42,455 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33451] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 19:17:42,455 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33451] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 19:17:42,455 DEBUG [RS:3;jenkins-hbase20:37939] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-12 19:17:42,456 
DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33451] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-12 19:17:42,457 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.AddRSGroup 2023-07-12 19:17:42,461 DEBUG [RS:3;jenkins-hbase20:37939] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-12 19:17:42,461 INFO [RS:3;jenkins-hbase20:37939] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-12 19:17:42,461 INFO [RS:3;jenkins-hbase20:37939] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 2023-07-12 19:17:42,463 DEBUG [hconnection-0x6978ce1d-metaLookup-shared--pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-12 19:17:42,465 INFO [RS-EventLoopGroup-15-2] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:60722, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-12 19:17:42,473 DEBUG [hconnection-0x6978ce1d-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-12 19:17:42,475 INFO [RS-EventLoopGroup-14-1] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:34320, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-12 19:17:42,477 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-12 19:17:42,477 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 19:17:42,482 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//148.251.75.209 move servers [jenkins-hbase20.apache.org:33451] to rsgroup master 2023-07-12 19:17:42,483 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33451] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:33451 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-12 19:17:42,483 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33451] ipc.CallRunner(144): callId: 20 service: MasterService methodName: ExecMasterService size: 119 connection: 148.251.75.209:45826 deadline: 1689190662482, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:33451 is either offline or it does not exist. 2023-07-12 19:17:42,483 WARN [Listener at localhost.localdomain/40989] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:33451 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor56.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:33451 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-12 19:17:42,486 INFO [Listener at localhost.localdomain/40989] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 19:17:42,487 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-12 19:17:42,488 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 19:17:42,488 INFO [Listener at localhost.localdomain/40989] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase20.apache.org:33397, jenkins-hbase20.apache.org:37939, jenkins-hbase20.apache.org:38393, jenkins-hbase20.apache.org:46241], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-12 19:17:42,489 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//148.251.75.209 initiates rsgroup info retrieval, group=default 2023-07-12 19:17:42,489 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 19:17:42,563 INFO [RS:3;jenkins-hbase20:37939] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase20.apache.org%2C37939%2C1689189462230, suffix=, logDir=hdfs://localhost.localdomain:33609/user/jenkins/test-data/d6326029-fefc-c726-4805-b208dab29d96/WALs/jenkins-hbase20.apache.org,37939,1689189462230, archiveDir=hdfs://localhost.localdomain:33609/user/jenkins/test-data/d6326029-fefc-c726-4805-b208dab29d96/oldWALs, maxLogs=32 2023-07-12 19:17:42,569 INFO [Listener at localhost.localdomain/40989] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testRSGroupListDoesNotContainFailedTableCreation Thread=560 (was 504) Potentially hanging thread: Timer-27 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: Listener at localhost.localdomain/37875-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: BP-1420102587-148.251.75.209-1689189459628 heartbeating to localhost.localdomain/127.0.0.1:33609 java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.IncrementalBlockReportManager.waitTillNextIBR(IncrementalBlockReportManager.java:158) org.apache.hadoop.hdfs.server.datanode.BPServiceActor.offerService(BPServiceActor.java:715) org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:846) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins@localhost.localdomain:38007 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer-34 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: RS-EventLoopGroup-11-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1420102587-148.251.75.209-1689189459628:blk_1073741835_1011, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Session-HouseKeeper-7859d467-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-339214682_17 at /127.0.0.1:33724 [Receiving block 
BP-1420102587-148.251.75.209-1689189459628:blk_1073741832_1008] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase20.apache.org,40539,1689189455229 java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread.run(RSGroupInfoManagerImpl.java:797) Potentially hanging thread: RS-EventLoopGroup-8-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:51847@0x79950ffb-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: PacketResponder: 
BP-1420102587-148.251.75.209-1689189459628:blk_1073741834_1010, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp649903440-2531 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/1800130233.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (715240405) connection to localhost.localdomain/127.0.0.1:33609 from jenkins.hfs.9 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=33451 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: 811077746@qtp-90929711-1 - Acceptor0 HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:35929 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) 
sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.mortbay.io.nio.SelectorManager$SelectSet.doSelect(SelectorManager.java:498) org.mortbay.io.nio.SelectorManager.doSelect(SelectorManager.java:192) org.mortbay.jetty.nio.SelectChannelConnector.accept(SelectChannelConnector.java:124) org.mortbay.jetty.AbstractConnector$Acceptor.run(AbstractConnector.java:708) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-339214682_17 at /127.0.0.1:33738 [Receiving block BP-1420102587-148.251.75.209-1689189459628:blk_1073741833_1009] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: pool-536-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-10-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) 
org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-581469734_17 at /127.0.0.1:56616 [Receiving block BP-1420102587-148.251.75.209-1689189459628:blk_1073741835_1011] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x96fd26c-shared-pool-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-8-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) 
org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins.hfs.5@localhost.localdomain:38007 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1186488990-2263 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=37939 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: org.apache.hadoop.hdfs.server.datanode.DataXceiverServer@5a4de91b sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:113) org.apache.hadoop.hdfs.net.TcpPeerServer.accept(TcpPeerServer.java:85) org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:145) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-11-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) 
org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: VolumeScannerThread(/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/ca399a11-a6b6-9528-4aae-916e50d2e9df/cluster_b74c3081-5cb1-0696-14e6-b0ce033fbceb/dfs/data/data3) java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:627) Potentially hanging thread: RS-EventLoopGroup-15-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x96fd26c-metaLookup-shared--pool-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp57656221-2228-acceptor-0@dfbda5f-ServerConnector@4d47d773{HTTP/1.1, (http/1.1)}{0.0.0.0:42455} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost.localdomain/40989-SendThread(127.0.0.1:50438) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially 
hanging thread: Listener at localhost.localdomain/40989-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: RS-EventLoopGroup-15-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=33451 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1913676858_17 at /127.0.0.1:56566 [Receiving block BP-1420102587-148.251.75.209-1689189459628:blk_1073741829_1005] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) 
org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (715240405) connection to localhost.localdomain/127.0.0.1:33609 from jenkins.hfs.8 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: Timer-26 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: Timer-31 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: Session-HouseKeeper-6de438b1-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: java.util.concurrent.ThreadPoolExecutor$Worker@7f3445f9[State = -1, empty queue] sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server idle connection scanner for port 36233 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=37939 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) 
org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: org.apache.hadoop.hdfs.server.datanode.DataXceiverServer@353f3178 sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:113) org.apache.hadoop.hdfs.net.TcpPeerServer.accept(TcpPeerServer.java:85) org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:145) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server idle connection scanner for port 40989 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: RS-EventLoopGroup-9-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=33451 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: 626289963@qtp-1203446360-1 - Acceptor0 HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:36671 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.mortbay.io.nio.SelectorManager$SelectSet.doSelect(SelectorManager.java:498) org.mortbay.io.nio.SelectorManager.doSelect(SelectorManager.java:192) org.mortbay.jetty.nio.SelectChannelConnector.accept(SelectChannelConnector.java:124) org.mortbay.jetty.AbstractConnector$Acceptor.run(AbstractConnector.java:708) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582) Potentially hanging thread: 
hconnection-0x96fd26c-shared-pool-5 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: VolumeScannerThread(/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/ca399a11-a6b6-9528-4aae-916e50d2e9df/cluster_b74c3081-5cb1-0696-14e6-b0ce033fbceb/dfs/data/data1) java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:627) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=38393 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=33451 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-11 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=46241 sun.misc.Unsafe.park(Native Method) 
java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RS:2;jenkins-hbase20:46241-longCompactions-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-16-1 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:803) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:457) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=37939 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: Listener at localhost.localdomain/40989.LruBlockCache.EvictionThread java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.io.hfile.LruBlockCache$EvictionThread.run(LruBlockCache.java:902) Potentially hanging thread: Listener at localhost.localdomain/40989-SendThread(127.0.0.1:50438) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) 
Potentially hanging thread: qtp514107608-2272-acceptor-0@625c8c2-ServerConnector@1063457c{HTTP/1.1, (http/1.1)}{0.0.0.0:44109} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-10 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1420102587-148.251.75.209-1689189459628:blk_1073741832_1008, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp964397613-2199 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1186488990-2257 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) 
org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/1800130233.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: master/jenkins-hbase20:0:becomeActiveMaster-HFileCleaner.small.0-1689189461095 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.PriorityBlockingQueue.take(PriorityBlockingQueue.java:549) org.apache.hadoop.hbase.master.cleaner.HFileCleaner.consumerLoop(HFileCleaner.java:267) org.apache.hadoop.hbase.master.cleaner.HFileCleaner$2.run(HFileCleaner.java:251) Potentially hanging thread: IPC Server handler 2 on default port 35741 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-12 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-12-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) 
org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=33397 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RS-EventLoopGroup-9-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x96fd26c-metaLookup-shared--pool-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=46241 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RS-EventLoopGroup-13-1 
org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp57656221-2227 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/1800130233.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:50438@0x783acad3 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/29342099.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 371388052@qtp-1071661542-1 - Acceptor0 HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:44961 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) 
org.mortbay.io.nio.SelectorManager$SelectSet.doSelect(SelectorManager.java:498) org.mortbay.io.nio.SelectorManager.doSelect(SelectorManager.java:192) org.mortbay.jetty.nio.SelectChannelConnector.accept(SelectChannelConnector.java:124) org.mortbay.jetty.AbstractConnector$Acceptor.run(AbstractConnector.java:708) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=33451 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: PacketResponder: BP-1420102587-148.251.75.209-1689189459628:blk_1073741835_1011, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=38393 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-581469734_17 at /127.0.0.1:33648 [Waiting for operation #5] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer-29 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: RS:3;jenkins-hbase20:37939-longCompactions-0 
sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=33397 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp514107608-2274 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp514107608-2273 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:50438@0x5cc30725-SendThread(127.0.0.1:50438) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: qtp1813304279-2173 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.hdfs.server.namenode.LeaseManager$Monitor@69249e24 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.server.namenode.LeaseManager$Monitor.run(LeaseManager.java:528) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x6978ce1d-shared-pool-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer-25 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: Listener at localhost.localdomain/40989.LruBlockCache.EvictionThread java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.io.hfile.LruBlockCache$EvictionThread.run(LruBlockCache.java:902) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-339214682_17 at /127.0.0.1:56584 [Receiving block BP-1420102587-148.251.75.209-1689189459628:blk_1073741832_1008] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) 
org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (715240405) connection to localhost.localdomain/127.0.0.1:33609 from jenkins.hfs.7 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: RS:0;jenkins-hbase20:38393 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1092) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:175) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:123) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:159) java.security.AccessController.doPrivileged(Native Method) javax.security.auth.Subject.doAs(Subject.java:360) org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1873) org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:319) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:156) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 1403380388@qtp-2046780455-1 - Acceptor0 HttpServer2$SelectChannelConnectorWithSafeStartup@localhost.localdomain:43799 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.mortbay.io.nio.SelectorManager$SelectSet.doSelect(SelectorManager.java:498) org.mortbay.io.nio.SelectorManager.doSelect(SelectorManager.java:192) org.mortbay.jetty.nio.SelectChannelConnector.accept(SelectChannelConnector.java:124) org.mortbay.jetty.AbstractConnector$Acceptor.run(AbstractConnector.java:708) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37939 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: pool-548-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:50438@0x7a4678cd sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/29342099.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp514107608-2271 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/1800130233.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1420102587-148.251.75.209-1689189459628:blk_1073741833_1009, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1420102587-148.251.75.209-1689189459628:blk_1073741833_1009, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 
Listener at localhost.localdomain/40989-SendThread(127.0.0.1:50438) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: RS-EventLoopGroup-10-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-16-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: jenkins-hbase20:33451 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.master.assignment.AssignmentManager.waitOnAssignQueue(AssignmentManager.java:2102) org.apache.hadoop.hbase.master.assignment.AssignmentManager.processAssignQueue(AssignmentManager.java:2124) org.apache.hadoop.hbase.master.assignment.AssignmentManager.access$600(AssignmentManager.java:104) org.apache.hadoop.hbase.master.assignment.AssignmentManager$1.run(AssignmentManager.java:2064) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost.localdomain:33609/user/jenkins/test-data/d6326029-fefc-c726-4805-b208dab29d96/MasterData-prefix:jenkins-hbase20.apache.org,33451,1689189460437 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 1 on default port 40989 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: IPC Client (715240405) connection to localhost.localdomain/127.0.0.1:33609 from jenkins java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: LeaseRenewer:jenkins.hfs.8@localhost.localdomain:33609 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost.localdomain/40989-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: Timer-33 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:50438@0x58259eb7-SendThread(127.0.0.1:50438) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: IPC Server idle connection scanner for port 33609 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: RS-EventLoopGroup-9-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially 
hanging thread: org.apache.hadoop.hdfs.server.namenode.FSNamesystem$NameNodeEditLogRoller@1628c239 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.server.namenode.FSNamesystem$NameNodeEditLogRoller.run(FSNamesystem.java:3884) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/ca399a11-a6b6-9528-4aae-916e50d2e9df/cluster_b74c3081-5cb1-0696-14e6-b0ce033fbceb/dfs/data/data3/current/BP-1420102587-148.251.75.209-1689189459628 java.lang.Thread.sleep(Native Method) org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:179) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 910669672@qtp-90929711-0 java.lang.Object.wait(Native Method) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:626) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38393 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: IPC Server handler 0 on default port 33609 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=38393 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: PacketResponder: BP-1420102587-148.251.75.209-1689189459628:blk_1073741833_1009, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-15-2 
org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp57656221-2230 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server idle connection scanner for port 35741 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: Listener at localhost.localdomain/40989-SendThread(127.0.0.1:50438) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: master/jenkins-hbase20:0:becomeActiveMaster-HFileCleaner.large.0-1689189461094 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) org.apache.hadoop.hbase.master.cleaner.HFileCleaner.consumerLoop(HFileCleaner.java:267) org.apache.hadoop.hbase.master.cleaner.HFileCleaner$1.run(HFileCleaner.java:236) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=38393 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: PacketResponder: BP-1420102587-148.251.75.209-1689189459628:blk_1073741832_1008, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) 
org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 0 on default port 40989 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=46241 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: IPC Server handler 4 on default port 35741 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=37939 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:50438@0x4776e5c8-SendThread(127.0.0.1:50438) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: Listener at localhost.localdomain/40989.LruBlockCache.EvictionThread java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.io.hfile.LruBlockCache$EvictionThread.run(LruBlockCache.java:902) Potentially hanging thread: org.apache.hadoop.util.JvmPauseMonitor$Monitor@317c263e java.lang.Thread.sleep(Native Method) 
org.apache.hadoop.util.JvmPauseMonitor$Monitor.run(JvmPauseMonitor.java:192) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-8 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:879) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-12-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-9 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:879) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=37939 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) 
java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: Session-HouseKeeper-c2753a3-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-14-1 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:803) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:457) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost.localdomain/40989-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: IPC Server handler 4 on default port 40989 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: IPC Server handler 0 on default port 36233 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-48277884_17 at /127.0.0.1:56608 [Receiving block 
BP-1420102587-148.251.75.209-1689189459628:blk_1073741834_1010] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: jenkins-hbase20:37939Replication Statistics #0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1913676858_17 at /127.0.0.1:33698 [Receiving block BP-1420102587-148.251.75.209-1689189459628:blk_1073741829_1005] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) 
java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:50438@0x5cc30725-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=46241 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-48277884_17 at /127.0.0.1:43070 [Receiving block BP-1420102587-148.251.75.209-1689189459628:blk_1073741834_1010] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) 
org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46241 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=46241 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp1813304279-2169 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.hdfs.server.namenode.FSNamesystem$LazyPersistFileScrubber@520edd85 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.server.namenode.FSNamesystem$LazyPersistFileScrubber.run(FSNamesystem.java:3975) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp514107608-2270 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) 
sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/1800130233.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 1 on default port 35741 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: 1004514911@qtp-2046780455-0 java.lang.Object.wait(Native Method) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:626) Potentially hanging thread: RS:1;jenkins-hbase20:33397 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1092) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:175) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:123) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:159) java.security.AccessController.doPrivileged(Native Method) javax.security.auth.Subject.doAs(Subject.java:360) org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1873) org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:319) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:156) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1420102587-148.251.75.209-1689189459628:blk_1073741829_1005, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) 
java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 3 on default port 35741 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/ca399a11-a6b6-9528-4aae-916e50d2e9df/cluster_b74c3081-5cb1-0696-14e6-b0ce033fbceb/dfs/data/data6/current/BP-1420102587-148.251.75.209-1689189459628 java.lang.Thread.sleep(Native Method) org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:179) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 4 on default port 36233 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=46241 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=33397 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=33397 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) 
org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp1186488990-2262 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1813304279-2166 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/1800130233.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:50438@0x58259eb7 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/29342099.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.hdfs.server.namenode.FSNamesystem$NameNodeResourceMonitor@720c1265 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.server.namenode.FSNamesystem$NameNodeResourceMonitor.run(FSNamesystem.java:3842) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=37939 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) 
java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: Listener at localhost.localdomain/40989-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-581469734_17 at /127.0.0.1:33768 [Receiving block BP-1420102587-148.251.75.209-1689189459628:blk_1073741835_1011] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:50438@0x736147b0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/29342099.run(Unknown Source) 
java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:51847@0x79950ffb sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/29342099.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x96fd26c-shared-pool-4 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp649903440-2535 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-581469734_17 at /127.0.0.1:43086 [Receiving block BP-1420102587-148.251.75.209-1689189459628:blk_1073741835_1011] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) 
org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer-35 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost.localdomain:33609/user/jenkins/test-data/d6326029-fefc-c726-4805-b208dab29d96-prefix:jenkins-hbase20.apache.org,46241,1689189460679.meta sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: java.util.concurrent.ThreadPoolExecutor$Worker@3bd8deca[State = -1, empty queue] sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/ca399a11-a6b6-9528-4aae-916e50d2e9df/cluster_b74c3081-5cb1-0696-14e6-b0ce033fbceb/dfs/data/data1/current/BP-1420102587-148.251.75.209-1689189459628 java.lang.Thread.sleep(Native Method) org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:179) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1420102587-148.251.75.209-1689189459628:blk_1073741829_1005, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (715240405) connection to localhost.localdomain/127.0.0.1:38007 from jenkins.hfs.4 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: RS-EventLoopGroup-12-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) 
org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp649903440-2536 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp964397613-2203 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1420102587-148.251.75.209-1689189459628:blk_1073741832_1008, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:50438@0x4776e5c8 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/29342099.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp57656221-2232 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) 
Potentially hanging thread: qtp964397613-2198-acceptor-0@21871c7a-ServerConnector@181666e{HTTP/1.1, (http/1.1)}{0.0.0.0:45991} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (715240405) connection to localhost.localdomain/127.0.0.1:38007 from jenkins.hfs.6 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: IPC Server handler 2 on default port 36233 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38393 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: Timer-30 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: Listener at localhost.localdomain/40989-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: Timer-28 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: CacheReplicationMonitor(981863272) sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2163) 
org.apache.hadoop.hdfs.server.blockmanagement.CacheReplicationMonitor.run(CacheReplicationMonitor.java:181) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=38393 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:50438@0x783acad3-SendThread(127.0.0.1:50438) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: 164384830@qtp-1071661542-0 java.lang.Object.wait(Native Method) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:626) Potentially hanging thread: refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/ca399a11-a6b6-9528-4aae-916e50d2e9df/cluster_b74c3081-5cb1-0696-14e6-b0ce033fbceb/dfs/data/data5/current/BP-1420102587-148.251.75.209-1689189459628 java.lang.Thread.sleep(Native Method) org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:179) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase20.apache.org,33451,1689189460437 java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread.run(RSGroupInfoManagerImpl.java:797) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33397 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: IPC Server handler 1 on default port 36233 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: jenkins-hbase20:38393Replication Statistics #0 
sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-18-1 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:803) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:457) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost.localdomain/37875-SendThread(127.0.0.1:51847) java.lang.Thread.sleep(Native Method) org.apache.zookeeper.ClientCnxn$SendThread.startConnect(ClientCnxn.java:1072) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1139) Potentially hanging thread: org.apache.hadoop.util.JvmPauseMonitor$Monitor@21ac3902 java.lang.Thread.sleep(Native Method) org.apache.hadoop.util.JvmPauseMonitor$Monitor.run(JvmPauseMonitor.java:192) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33451 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp57656221-2233 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=33397 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RS-EventLoopGroup-14-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 3 on default port 36233 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: qtp514107608-2275 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 0 on default port 35741 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: qtp1813304279-2168 
sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1813304279-2171 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp57656221-2231 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost.localdomain:33609/user/jenkins/test-data/d6326029-fefc-c726-4805-b208dab29d96-prefix:jenkins-hbase20.apache.org,33397,1689189460611 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-14-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) 
Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:50438@0x48b40407-SendThread(127.0.0.1:50438) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1913676858_17 at /127.0.0.1:43020 [Receiving block BP-1420102587-148.251.75.209-1689189459628:blk_1073741829_1005] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=38393 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: PacketResponder: BP-1420102587-148.251.75.209-1689189459628:blk_1073741829_1005, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: BP-1420102587-148.251.75.209-1689189459628 
heartbeating to localhost.localdomain/127.0.0.1:33609 java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.IncrementalBlockReportManager.waitTillNextIBR(IncrementalBlockReportManager.java:158) org.apache.hadoop.hdfs.server.datanode.BPServiceActor.offerService(BPServiceActor.java:715) org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:846) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 4 on default port 33609 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: LeaseRenewer:jenkins.hfs.4@localhost.localdomain:38007 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=33451 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: VolumeScannerThread(/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/ca399a11-a6b6-9528-4aae-916e50d2e9df/cluster_b74c3081-5cb1-0696-14e6-b0ce033fbceb/dfs/data/data5) java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:627) Potentially hanging thread: qtp964397613-2202 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1420102587-148.251.75.209-1689189459628:blk_1073741835_1011, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-339214682_17 at /127.0.0.1:56598 [Receiving block BP-1420102587-148.251.75.209-1689189459628:blk_1073741833_1009] 
sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x6978ce1d-metaLookup-shared--pool-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1420102587-148.251.75.209-1689189459628:blk_1073741834_1010, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (715240405) connection to localhost.localdomain/127.0.0.1:33609 from jenkins.hfs.10 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: Timer-32 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: IPC Server handler 3 on default port 40989 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost.localdomain:33609/user/jenkins/test-data/d6326029-fefc-c726-4805-b208dab29d96-prefix:jenkins-hbase20.apache.org,38393,1689189460532 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:2;jenkins-hbase20:46241 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1092) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:175) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:123) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:159) java.security.AccessController.doPrivileged(Native Method) javax.security.auth.Subject.doAs(Subject.java:360) org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1873) org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:319) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:156) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 1 on default port 33609 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:50438@0x5cc30725 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/29342099.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33397 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) 
java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: VolumeScannerThread(/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/ca399a11-a6b6-9528-4aae-916e50d2e9df/cluster_b74c3081-5cb1-0696-14e6-b0ce033fbceb/dfs/data/data4) java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:627) Potentially hanging thread: pool-543-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=33397 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: org.apache.hadoop.hdfs.server.datanode.DataXceiverServer@12c712a8 sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:113) org.apache.hadoop.hdfs.net.TcpPeerServer.accept(TcpPeerServer.java:85) org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:145) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost.localdomain/40989 java.lang.Thread.dumpThreads(Native Method) java.lang.Thread.getAllStackTraces(Thread.java:1615) org.apache.hadoop.hbase.ResourceCheckerJUnitListener$ThreadResourceAnalyzer.getVal(ResourceCheckerJUnitListener.java:49) org.apache.hadoop.hbase.ResourceChecker.fill(ResourceChecker.java:110) org.apache.hadoop.hbase.ResourceChecker.fillEndings(ResourceChecker.java:104) org.apache.hadoop.hbase.ResourceChecker.end(ResourceChecker.java:206) org.apache.hadoop.hbase.ResourceCheckerJUnitListener.end(ResourceCheckerJUnitListener.java:165) org.apache.hadoop.hbase.ResourceCheckerJUnitListener.testFinished(ResourceCheckerJUnitListener.java:185) org.junit.runner.notification.SynchronizedRunListener.testFinished(SynchronizedRunListener.java:87) 
org.junit.runner.notification.RunNotifier$9.notifyListener(RunNotifier.java:225) org.junit.runner.notification.RunNotifier$SafeNotifier.run(RunNotifier.java:72) org.junit.runner.notification.RunNotifier.fireTestFinished(RunNotifier.java:222) org.junit.internal.runners.model.EachTestNotifier.fireTestFinished(EachTestNotifier.java:38) org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:372) org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) java.util.concurrent.FutureTask.run(FutureTask.java:266) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1813304279-2167-acceptor-0@20a3c930-ServerConnector@48b9cd26{HTTP/1.1, (http/1.1)}{0.0.0.0:37363} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.hdfs.server.blockmanagement.PendingReplicationBlocks$PendingReplicationMonitor@1918b660 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.server.blockmanagement.PendingReplicationBlocks$PendingReplicationMonitor.run(PendingReplicationBlocks.java:244) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:3;jenkins-hbase20:37939 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1092) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:175) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:123) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:159) java.security.AccessController.doPrivileged(Native Method) javax.security.auth.Subject.doAs(Subject.java:360) org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1873) org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:319) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:156) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 
RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=46241 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp1813304279-2172 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-8-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x96fd26c-shared-pool-2 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: VolumeScannerThread(/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/ca399a11-a6b6-9528-4aae-916e50d2e9df/cluster_b74c3081-5cb1-0696-14e6-b0ce033fbceb/dfs/data/data2) java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:627) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-48277884_17 at /127.0.0.1:33752 [Receiving block BP-1420102587-148.251.75.209-1689189459628:blk_1073741834_1010] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) 
org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: pool-556-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp57656221-2234 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp964397613-2204 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 
qtp964397613-2201 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:50438@0x7a4678cd-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:50438@0x7a4678cd-SendThread(127.0.0.1:50438) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: qtp649903440-2538 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost.localdomain/40989-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: pool-557-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.util.JvmPauseMonitor$Monitor@5ce804e3 
java.lang.Thread.sleep(Native Method) org.apache.hadoop.util.JvmPauseMonitor$Monitor.run(JvmPauseMonitor.java:192) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0xbd409e1-shared-pool-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp649903440-2533 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp964397613-2197 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/1800130233.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:51847@0x79950ffb-SendThread(127.0.0.1:51847) java.lang.Thread.sleep(Native Method) org.apache.zookeeper.ClientCnxn$SendThread.startConnect(ClientCnxn.java:1072) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1139) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=38393 sun.misc.Unsafe.park(Native Method) 
java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RS-EventLoopGroup-10-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=33397 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:50438@0x58259eb7-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1913676858_17 at /127.0.0.1:56538 [Waiting for operation #2] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) 
org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: VolumeScannerThread(/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/ca399a11-a6b6-9528-4aae-916e50d2e9df/cluster_b74c3081-5cb1-0696-14e6-b0ce033fbceb/dfs/data/data6) java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:627) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=46241 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: pool-552-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost.localdomain:33609/user/jenkins/test-data/d6326029-fefc-c726-4805-b208dab29d96-prefix:jenkins-hbase20.apache.org,46241,1689189460679 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (715240405) connection to localhost.localdomain/127.0.0.1:38007 from jenkins java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: qtp514107608-2268 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) 
sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/1800130233.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ForkJoinPool-2-worker-2 sun.misc.Unsafe.park(Native Method) java.util.concurrent.ForkJoinPool.awaitWork(ForkJoinPool.java:1824) java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1693) java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:175) Potentially hanging thread: java.util.concurrent.ThreadPoolExecutor$Worker@2239d854[State = -1, empty queue] sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: pool-542-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 
refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/ca399a11-a6b6-9528-4aae-916e50d2e9df/cluster_b74c3081-5cb1-0696-14e6-b0ce033fbceb/dfs/data/data2/current/BP-1420102587-148.251.75.209-1689189459628 java.lang.Thread.sleep(Native Method) org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:179) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (715240405) connection to localhost.localdomain/127.0.0.1:38007 from jenkins.hfs.5 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: pool-547-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Session-HouseKeeper-1087f608-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ProcessThread(sid:0 cport:50438): sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.server.PrepRequestProcessor.run(PrepRequestProcessor.java:134) Potentially hanging thread: IPC Server handler 3 on default port 33609 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: jenkins-hbase20:33397Replication Statistics #0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 2123392439@qtp-1203446360-0 java.lang.Object.wait(Native Method) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:626) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=37939 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: Listener at localhost.localdomain/40989-SendThread(127.0.0.1:50438) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: qtp1186488990-2260 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33451 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RS-EventLoopGroup-14-1 
org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x96fd26c-shared-pool-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-16-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp964397613-2200 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:50438@0x48b40407 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/29342099.run(Unknown Source) 
java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (715240405) connection to localhost.localdomain/127.0.0.1:33609 from jenkins java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:50438@0x736147b0-SendThread(127.0.0.1:50438) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: qtp1186488990-2259 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp649903440-2532-acceptor-0@64935358-ServerConnector@1dd9ac5b{HTTP/1.1, (http/1.1)}{0.0.0.0:38729} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1186488990-2264 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=33451 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) 
java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:50438@0x736147b0-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:50438@0x4776e5c8-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: RS-EventLoopGroup-11-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: BP-1420102587-148.251.75.209-1689189459628 heartbeating to localhost.localdomain/127.0.0.1:33609 java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.IncrementalBlockReportManager.waitTillNextIBR(IncrementalBlockReportManager.java:158) org.apache.hadoop.hdfs.server.datanode.BPServiceActor.offerService(BPServiceActor.java:715) org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:846) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.hdfs.server.blockmanagement.HeartbeatManager$Monitor@62e035ca java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.server.blockmanagement.HeartbeatManager$Monitor.run(HeartbeatManager.java:451) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=33451 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) 
org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: IPC Server handler 2 on default port 33609 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: Listener at localhost.localdomain/40989-SendThread(127.0.0.1:50438) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: qtp514107608-2269 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/1800130233.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:1;jenkins-hbase20:33397-longCompactions-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) 
Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=46241 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:50438@0x48b40407-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: qtp1186488990-2261 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp649903440-2534 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp57656221-2229 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp649903440-2537 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) 
java.lang.Thread.run(Thread.java:750) Potentially hanging thread: jenkins-hbase20:46241Replication Statistics #0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Session-HouseKeeper-aa63f57-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-13 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=33397 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: NIOServerCxnFactory.AcceptThread:localhost/127.0.0.1:50438 
sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.zookeeper.server.NIOServerCnxnFactory$AcceptThread.select(NIOServerCnxnFactory.java:229) org.apache.zookeeper.server.NIOServerCnxnFactory$AcceptThread.run(NIOServerCnxnFactory.java:205) Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=38393 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: Timer-24 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: pool-538-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost.localdomain/40989.LruBlockCache.EvictionThread java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.io.hfile.LruBlockCache$EvictionThread.run(LruBlockCache.java:902) Potentially hanging thread: qtp1813304279-2170 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins@localhost.localdomain:33609 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) 
Potentially hanging thread: qtp1186488990-2258-acceptor-0@5a78bae4-ServerConnector@32e52fef{HTTP/1.1, (http/1.1)}{0.0.0.0:42095} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: pool-541-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: pool-561-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins.hfs.6@localhost.localdomain:38007 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 2 on default port 40989 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:50438@0x783acad3-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) 
java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/ca399a11-a6b6-9528-4aae-916e50d2e9df/cluster_b74c3081-5cb1-0696-14e6-b0ce033fbceb/dfs/data/data4/current/BP-1420102587-148.251.75.209-1689189459628 java.lang.Thread.sleep(Native Method) org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:179) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (715240405) connection to localhost.localdomain/127.0.0.1:38007 from jenkins java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: org.apache.hadoop.util.JvmPauseMonitor$Monitor@7c704d12 java.lang.Thread.sleep(Native Method) org.apache.hadoop.util.JvmPauseMonitor$Monitor.run(JvmPauseMonitor.java:192) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-339214682_17 at /127.0.0.1:43050 [Receiving block BP-1420102587-148.251.75.209-1689189459628:blk_1073741832_1008] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataNode DiskChecker thread 1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:460) java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:362) java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:941) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=37939 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: LeaseRenewer:jenkins.hfs.7@localhost.localdomain:33609 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-339214682_17 at /127.0.0.1:43060 [Receiving block BP-1420102587-148.251.75.209-1689189459628:blk_1073741833_1009] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x96fd26c-shared-pool-3 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37939 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: PacketResponder: BP-1420102587-148.251.75.209-1689189459628:blk_1073741834_1010, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: M:0;jenkins-hbase20:33451 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) org.apache.hadoop.hbase.master.HMaster.waitForMasterActive(HMaster.java:634) org.apache.hadoop.hbase.regionserver.HRegionServer.initializeZooKeeper(HRegionServer.java:957) org.apache.hadoop.hbase.regionserver.HRegionServer.preRegistrationInitialization(HRegionServer.java:904) org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1006) org.apache.hadoop.hbase.master.HMaster.run(HMaster.java:541) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins.hfs.9@localhost.localdomain:33609 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:0;jenkins-hbase20:38393-longCompactions-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=839 (was 767) - OpenFileDescriptor LEAK? -, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=511 (was 494) - SystemLoadAverage LEAK? 
-, ProcessCount=170 (was 170), AvailableMemoryMB=3428 (was 3869) 2023-07-12 19:17:42,572 WARN [Listener at localhost.localdomain/40989] hbase.ResourceChecker(130): Thread=560 is superior to 500 2023-07-12 19:17:42,596 INFO [Listener at localhost.localdomain/40989] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testNotMoveTableToNullRSGroupWhenCreatingExistingTable Thread=560, OpenFileDescriptor=839, MaxFileDescriptor=60000, SystemLoadAverage=511, ProcessCount=170, AvailableMemoryMB=3417 2023-07-12 19:17:42,596 WARN [Listener at localhost.localdomain/40989] hbase.ResourceChecker(130): Thread=560 is superior to 500 2023-07-12 19:17:42,596 INFO [Listener at localhost.localdomain/40989] rsgroup.TestRSGroupsBase(132): testNotMoveTableToNullRSGroupWhenCreatingExistingTable 2023-07-12 19:17:42,602 DEBUG [RS-EventLoopGroup-16-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:42583,DS-b88c3966-5697-4dc1-92ea-862ca1b952ab,DISK] 2023-07-12 19:17:42,604 DEBUG [RS-EventLoopGroup-16-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:43191,DS-5ffdd0de-8eb1-4fb9-b872-73b0848bc5e2,DISK] 2023-07-12 19:17:42,604 DEBUG [RS-EventLoopGroup-16-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:42401,DS-de90d26e-4113-4189-9ccb-c295550dc9c5,DISK] 2023-07-12 19:17:42,617 INFO [RS:3;jenkins-hbase20:37939] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/d6326029-fefc-c726-4805-b208dab29d96/WALs/jenkins-hbase20.apache.org,37939,1689189462230/jenkins-hbase20.apache.org%2C37939%2C1689189462230.1689189462564 2023-07-12 19:17:42,622 DEBUG [RS:3;jenkins-hbase20:37939] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:42583,DS-b88c3966-5697-4dc1-92ea-862ca1b952ab,DISK], DatanodeInfoWithStorage[127.0.0.1:43191,DS-5ffdd0de-8eb1-4fb9-b872-73b0848bc5e2,DISK], DatanodeInfoWithStorage[127.0.0.1:42401,DS-de90d26e-4113-4189-9ccb-c295550dc9c5,DISK]] 2023-07-12 19:17:42,623 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-12 19:17:42,624 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 19:17:42,624 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//148.251.75.209 move tables [] to rsgroup default 2023-07-12 19:17:42,625 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33451] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-12 19:17:42,625 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveTables 2023-07-12 19:17:42,625 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//148.251.75.209 move servers [] to rsgroup default 2023-07-12 19:17:42,625 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveServers 2023-07-12 19:17:42,626 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//148.251.75.209 remove rsgroup master 2023-07-12 19:17:42,630 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33451] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 19:17:42,630 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33451] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-12 19:17:42,631 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-12 19:17:42,634 INFO [Listener at localhost.localdomain/40989] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-12 19:17:42,634 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//148.251.75.209 add rsgroup master 2023-07-12 19:17:42,637 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33451] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 19:17:42,638 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33451] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 19:17:42,641 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33451] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-12 19:17:42,642 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.AddRSGroup 2023-07-12 19:17:42,646 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-12 19:17:42,647 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 19:17:42,649 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//148.251.75.209 move servers [jenkins-hbase20.apache.org:33451] to rsgroup master 2023-07-12 19:17:42,649 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33451] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:33451 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-12 19:17:42,649 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33451] ipc.CallRunner(144): callId: 48 service: MasterService methodName: ExecMasterService size: 119 connection: 148.251.75.209:45826 deadline: 1689190662649, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:33451 is either offline or it does not exist. 2023-07-12 19:17:42,650 WARN [Listener at localhost.localdomain/40989] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:33451 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor56.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:33451 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-12 19:17:42,651 INFO [Listener at localhost.localdomain/40989] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 19:17:42,652 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-12 19:17:42,652 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 19:17:42,653 INFO [Listener at localhost.localdomain/40989] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase20.apache.org:33397, jenkins-hbase20.apache.org:37939, jenkins-hbase20.apache.org:38393, jenkins-hbase20.apache.org:46241], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-12 19:17:42,653 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//148.251.75.209 initiates rsgroup info retrieval, group=default 2023-07-12 19:17:42,653 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 19:17:42,655 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33451] master.HMaster$4(2112): Client=jenkins//148.251.75.209 create 't1', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'cf1', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-12 19:17:42,656 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33451] procedure2.ProcedureExecutor(1029): Stored pid=12, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=t1 2023-07-12 19:17:42,658 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_PRE_OPERATION 2023-07-12 19:17:42,658 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33451] master.MasterRpcServices(700): Client=jenkins//148.251.75.209 
procedure request for creating table: namespace: "default" qualifier: "t1" procId is: 12 2023-07-12 19:17:42,659 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33451] master.MasterRpcServices(1230): Checking to see if procedure is done pid=12 2023-07-12 19:17:42,661 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 19:17:42,661 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 19:17:42,661 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-12 19:17:42,663 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-12 19:17:42,664 DEBUG [HFileArchiver-8] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:33609/user/jenkins/test-data/d6326029-fefc-c726-4805-b208dab29d96/.tmp/data/default/t1/72bd882f068d3ec881af007402997362 2023-07-12 19:17:42,665 DEBUG [HFileArchiver-8] backup.HFileArchiver(153): Directory hdfs://localhost.localdomain:33609/user/jenkins/test-data/d6326029-fefc-c726-4805-b208dab29d96/.tmp/data/default/t1/72bd882f068d3ec881af007402997362 empty. 2023-07-12 19:17:42,665 DEBUG [HFileArchiver-8] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:33609/user/jenkins/test-data/d6326029-fefc-c726-4805-b208dab29d96/.tmp/data/default/t1/72bd882f068d3ec881af007402997362 2023-07-12 19:17:42,665 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived t1 regions 2023-07-12 19:17:42,680 DEBUG [PEWorker-4] util.FSTableDescriptors(570): Wrote into hdfs://localhost.localdomain:33609/user/jenkins/test-data/d6326029-fefc-c726-4805-b208dab29d96/.tmp/data/default/t1/.tabledesc/.tableinfo.0000000001 2023-07-12 19:17:42,682 INFO [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(7675): creating {ENCODED => 72bd882f068d3ec881af007402997362, NAME => 't1,,1689189462654.72bd882f068d3ec881af007402997362.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='t1', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'cf1', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:33609/user/jenkins/test-data/d6326029-fefc-c726-4805-b208dab29d96/.tmp 2023-07-12 19:17:42,695 DEBUG [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(866): Instantiated t1,,1689189462654.72bd882f068d3ec881af007402997362.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 19:17:42,695 DEBUG [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1604): Closing 72bd882f068d3ec881af007402997362, disabling compactions & flushes 2023-07-12 19:17:42,695 INFO [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1626): Closing region t1,,1689189462654.72bd882f068d3ec881af007402997362. 2023-07-12 19:17:42,696 DEBUG [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on t1,,1689189462654.72bd882f068d3ec881af007402997362. 2023-07-12 19:17:42,696 DEBUG [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1714): Acquired close lock on t1,,1689189462654.72bd882f068d3ec881af007402997362. 
after waiting 0 ms 2023-07-12 19:17:42,696 DEBUG [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1724): Updates disabled for region t1,,1689189462654.72bd882f068d3ec881af007402997362. 2023-07-12 19:17:42,696 INFO [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1838): Closed t1,,1689189462654.72bd882f068d3ec881af007402997362. 2023-07-12 19:17:42,696 DEBUG [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1558): Region close journal for 72bd882f068d3ec881af007402997362: 2023-07-12 19:17:42,698 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_ADD_TO_META 2023-07-12 19:17:42,699 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"t1,,1689189462654.72bd882f068d3ec881af007402997362.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689189462699"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689189462699"}]},"ts":"1689189462699"} 2023-07-12 19:17:42,700 INFO [PEWorker-4] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-12 19:17:42,701 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-12 19:17:42,701 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"t1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689189462701"}]},"ts":"1689189462701"} 2023-07-12 19:17:42,702 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=t1, state=ENABLING in hbase:meta 2023-07-12 19:17:42,705 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase20.apache.org=0} racks are {/default-rack=0} 2023-07-12 19:17:42,705 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-12 19:17:42,705 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-12 19:17:42,705 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-12 19:17:42,705 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 3 is on host 0 2023-07-12 19:17:42,705 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-12 19:17:42,705 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=t1, region=72bd882f068d3ec881af007402997362, ASSIGN}] 2023-07-12 19:17:42,707 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=t1, region=72bd882f068d3ec881af007402997362, ASSIGN 2023-07-12 19:17:42,708 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=t1, region=72bd882f068d3ec881af007402997362, ASSIGN; state=OFFLINE, location=jenkins-hbase20.apache.org,38393,1689189460532; forceNewPlan=false, retain=false 2023-07-12 19:17:42,760 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33451] master.MasterRpcServices(1230): Checking to see if procedure is done pid=12 2023-07-12 19:17:42,858 INFO [jenkins-hbase20:33451] 
balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 2023-07-12 19:17:42,859 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=13 updating hbase:meta row=72bd882f068d3ec881af007402997362, regionState=OPENING, regionLocation=jenkins-hbase20.apache.org,38393,1689189460532 2023-07-12 19:17:42,860 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"t1,,1689189462654.72bd882f068d3ec881af007402997362.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689189462859"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689189462859"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689189462859"}]},"ts":"1689189462859"} 2023-07-12 19:17:42,861 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=14, ppid=13, state=RUNNABLE; OpenRegionProcedure 72bd882f068d3ec881af007402997362, server=jenkins-hbase20.apache.org,38393,1689189460532}] 2023-07-12 19:17:42,961 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33451] master.MasterRpcServices(1230): Checking to see if procedure is done pid=12 2023-07-12 19:17:43,016 DEBUG [RSProcedureDispatcher-pool-2] master.ServerManager(712): New admin connection to jenkins-hbase20.apache.org,38393,1689189460532 2023-07-12 19:17:43,016 DEBUG [RSProcedureDispatcher-pool-2] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-12 19:17:43,021 INFO [RS-EventLoopGroup-13-2] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:45796, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-12 19:17:43,049 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(130): Open t1,,1689189462654.72bd882f068d3ec881af007402997362. 
2023-07-12 19:17:43,049 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 72bd882f068d3ec881af007402997362, NAME => 't1,,1689189462654.72bd882f068d3ec881af007402997362.', STARTKEY => '', ENDKEY => ''} 2023-07-12 19:17:43,050 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table t1 72bd882f068d3ec881af007402997362 2023-07-12 19:17:43,050 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(866): Instantiated t1,,1689189462654.72bd882f068d3ec881af007402997362.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-12 19:17:43,050 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7894): checking encryption for 72bd882f068d3ec881af007402997362 2023-07-12 19:17:43,050 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7897): checking classloading for 72bd882f068d3ec881af007402997362 2023-07-12 19:17:43,052 INFO [StoreOpener-72bd882f068d3ec881af007402997362-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family cf1 of region 72bd882f068d3ec881af007402997362 2023-07-12 19:17:43,053 DEBUG [StoreOpener-72bd882f068d3ec881af007402997362-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:33609/user/jenkins/test-data/d6326029-fefc-c726-4805-b208dab29d96/data/default/t1/72bd882f068d3ec881af007402997362/cf1 2023-07-12 19:17:43,053 DEBUG [StoreOpener-72bd882f068d3ec881af007402997362-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:33609/user/jenkins/test-data/d6326029-fefc-c726-4805-b208dab29d96/data/default/t1/72bd882f068d3ec881af007402997362/cf1 2023-07-12 19:17:43,054 INFO [StoreOpener-72bd882f068d3ec881af007402997362-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 72bd882f068d3ec881af007402997362 columnFamilyName cf1 2023-07-12 19:17:43,055 INFO [StoreOpener-72bd882f068d3ec881af007402997362-1] regionserver.HStore(310): Store=72bd882f068d3ec881af007402997362/cf1, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-12 19:17:43,055 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:33609/user/jenkins/test-data/d6326029-fefc-c726-4805-b208dab29d96/data/default/t1/72bd882f068d3ec881af007402997362 2023-07-12 19:17:43,056 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under 
hdfs://localhost.localdomain:33609/user/jenkins/test-data/d6326029-fefc-c726-4805-b208dab29d96/data/default/t1/72bd882f068d3ec881af007402997362 2023-07-12 19:17:43,060 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1055): writing seq id for 72bd882f068d3ec881af007402997362 2023-07-12 19:17:43,063 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:33609/user/jenkins/test-data/d6326029-fefc-c726-4805-b208dab29d96/data/default/t1/72bd882f068d3ec881af007402997362/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-12 19:17:43,064 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1072): Opened 72bd882f068d3ec881af007402997362; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10642160480, jitterRate=-0.008871570229530334}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-12 19:17:43,064 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(965): Region open journal for 72bd882f068d3ec881af007402997362: 2023-07-12 19:17:43,065 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for t1,,1689189462654.72bd882f068d3ec881af007402997362., pid=14, masterSystemTime=1689189463016 2023-07-12 19:17:43,070 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for t1,,1689189462654.72bd882f068d3ec881af007402997362. 2023-07-12 19:17:43,072 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=13 updating hbase:meta row=72bd882f068d3ec881af007402997362, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase20.apache.org,38393,1689189460532 2023-07-12 19:17:43,072 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"t1,,1689189462654.72bd882f068d3ec881af007402997362.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689189463072"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689189463072"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689189463072"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689189463072"}]},"ts":"1689189463072"} 2023-07-12 19:17:43,072 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(158): Opened t1,,1689189462654.72bd882f068d3ec881af007402997362. 
2023-07-12 19:17:43,078 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=14, resume processing ppid=13 2023-07-12 19:17:43,079 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=14, ppid=13, state=SUCCESS; OpenRegionProcedure 72bd882f068d3ec881af007402997362, server=jenkins-hbase20.apache.org,38393,1689189460532 in 213 msec 2023-07-12 19:17:43,087 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=13, resume processing ppid=12 2023-07-12 19:17:43,087 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=13, ppid=12, state=SUCCESS; TransitRegionStateProcedure table=t1, region=72bd882f068d3ec881af007402997362, ASSIGN in 374 msec 2023-07-12 19:17:43,088 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-12 19:17:43,088 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"t1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689189463088"}]},"ts":"1689189463088"} 2023-07-12 19:17:43,090 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=t1, state=ENABLED in hbase:meta 2023-07-12 19:17:43,092 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_POST_OPERATION 2023-07-12 19:17:43,094 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=12, state=SUCCESS; CreateTableProcedure table=t1 in 437 msec 2023-07-12 19:17:43,263 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33451] master.MasterRpcServices(1230): Checking to see if procedure is done pid=12 2023-07-12 19:17:43,263 INFO [Listener at localhost.localdomain/40989] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:t1, procId: 12 completed 2023-07-12 19:17:43,263 DEBUG [Listener at localhost.localdomain/40989] hbase.HBaseTestingUtility(3430): Waiting until all regions of table t1 get assigned. Timeout = 60000ms 2023-07-12 19:17:43,264 INFO [Listener at localhost.localdomain/40989] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 19:17:43,269 INFO [Listener at localhost.localdomain/40989] hbase.HBaseTestingUtility(3484): All regions for table t1 assigned to meta. Checking AM states. 2023-07-12 19:17:43,269 INFO [Listener at localhost.localdomain/40989] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 19:17:43,269 INFO [Listener at localhost.localdomain/40989] hbase.HBaseTestingUtility(3504): All regions for table t1 assigned. 
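[Editor's note] The entries above (pid=12 through pid=14) record the full create path for 't1': procedure submission, FS layout, meta update, assignment, and region open on jenkins-hbase20.apache.org,38393. For orientation, here is a minimal client-side sketch of the kind of Admin call that drives this path. It is not the test's own code: the connection wiring and class name are assumptions, and the descriptor mirrors only the attributes printed by HMaster above (REGION_REPLICATION => '1', one family 'cf1' with VERSIONS => '1'; everything else is left at defaults).

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.TableDescriptor;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
    import org.apache.hadoop.hbase.util.Bytes;

    public class CreateT1Sketch {
      public static void main(String[] args) throws Exception {
        // Assumes hbase-site.xml on the classpath points at the (mini) cluster.
        Configuration conf = HBaseConfiguration.create();
        try (Connection conn = ConnectionFactory.createConnection(conf);
             Admin admin = conn.getAdmin()) {
          // Descriptor mirroring the attributes logged by HMaster$4 above.
          TableDescriptor t1 = TableDescriptorBuilder.newBuilder(TableName.valueOf("t1"))
              .setRegionReplication(1)
              .setColumnFamily(ColumnFamilyDescriptorBuilder.newBuilder(Bytes.toBytes("cf1"))
                  .setMaxVersions(1)
                  .build())
              .build();
          // Submits a CreateTableProcedure on the master (pid=12 in the log above)
          // and blocks until the procedure completes.
          admin.createTable(t1);
          // The test utility then waits for assignment; Admin.isTableAvailable is
          // the public client-side equivalent of that check.
          while (!admin.isTableAvailable(TableName.valueOf("t1"))) {
            Thread.sleep(100);
          }
        }
      }
    }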
2023-07-12 19:17:43,271 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33451] master.HMaster$4(2112): Client=jenkins//148.251.75.209 create 't1', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'cf1', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-12 19:17:43,272 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33451] procedure2.ProcedureExecutor(1029): Stored pid=15, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=t1 2023-07-12 19:17:43,274 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=15, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_PRE_OPERATION 2023-07-12 19:17:43,275 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33451] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.TableExistsException: t1 at org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.prepareCreate(CreateTableProcedure.java:243) at org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.executeFromState(CreateTableProcedure.java:85) at org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.executeFromState(CreateTableProcedure.java:53) at org.apache.hadoop.hbase.procedure2.StateMachineProcedure.execute(StateMachineProcedure.java:188) at org.apache.hadoop.hbase.procedure2.Procedure.doExecute(Procedure.java:922) at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execProcedure(ProcedureExecutor.java:1646) at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.executeProcedure(ProcedureExecutor.java:1392) at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.access$1100(ProcedureExecutor.java:73) at org.apache.hadoop.hbase.procedure2.ProcedureExecutor$WorkerThread.run(ProcedureExecutor.java:1964) 2023-07-12 19:17:43,276 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33451] ipc.CallRunner(144): callId: 65 service: MasterService methodName: CreateTable size: 353 connection: 148.251.75.209:45826 deadline: 1689189523270, exception=org.apache.hadoop.hbase.TableExistsException: t1 2023-07-12 19:17:43,278 INFO [Listener at localhost.localdomain/40989] hbase.Waiter(180): Waiting up to [5,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 19:17:43,280 INFO [PEWorker-2] procedure2.ProcedureExecutor(1528): Rolled back pid=15, state=ROLLEDBACK, exception=org.apache.hadoop.hbase.TableExistsException via master-create-table:org.apache.hadoop.hbase.TableExistsException: t1; CreateTableProcedure table=t1 exec-time=9 msec 2023-07-12 19:17:43,379 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//148.251.75.209 initiates rsgroup info retrieval, group=default 2023-07-12 19:17:43,379 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 19:17:43,380 INFO [Listener at localhost.localdomain/40989] client.HBaseAdmin$15(890): Started disable of t1 2023-07-12 19:17:43,380 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33451] master.HMaster$11(2418): Client=jenkins//148.251.75.209 disable t1 2023-07-12 19:17:43,382 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33451] procedure2.ProcedureExecutor(1029): Stored pid=16, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=t1 2023-07-12 19:17:43,389 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33451] master.MasterRpcServices(1230): Checking to see if procedure is done pid=16 2023-07-12 19:17:43,396 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"t1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689189463396"}]},"ts":"1689189463396"} 2023-07-12 19:17:43,400 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=t1, state=DISABLING in hbase:meta 2023-07-12 19:17:43,401 INFO [PEWorker-5] procedure.DisableTableProcedure(293): Set t1 to state=DISABLING 2023-07-12 19:17:43,402 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=17, ppid=16, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=t1, region=72bd882f068d3ec881af007402997362, UNASSIGN}] 2023-07-12 19:17:43,403 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=17, ppid=16, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=t1, region=72bd882f068d3ec881af007402997362, UNASSIGN 2023-07-12 19:17:43,404 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=17 updating hbase:meta row=72bd882f068d3ec881af007402997362, regionState=CLOSING, regionLocation=jenkins-hbase20.apache.org,38393,1689189460532 2023-07-12 19:17:43,404 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"t1,,1689189462654.72bd882f068d3ec881af007402997362.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689189463404"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689189463404"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689189463404"}]},"ts":"1689189463404"} 2023-07-12 19:17:43,406 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=18, ppid=17, state=RUNNABLE; CloseRegionProcedure 72bd882f068d3ec881af007402997362, server=jenkins-hbase20.apache.org,38393,1689189460532}] 2023-07-12 19:17:43,490 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33451] master.MasterRpcServices(1230): Checking to see if procedure is done pid=16 2023-07-12 19:17:43,560 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.UnassignRegionHandler(111): Close 72bd882f068d3ec881af007402997362 2023-07-12 19:17:43,560 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing 72bd882f068d3ec881af007402997362, disabling compactions & flushes 2023-07-12 19:17:43,560 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region t1,,1689189462654.72bd882f068d3ec881af007402997362. 2023-07-12 19:17:43,560 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on t1,,1689189462654.72bd882f068d3ec881af007402997362. 2023-07-12 19:17:43,560 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on t1,,1689189462654.72bd882f068d3ec881af007402997362. after waiting 0 ms 2023-07-12 19:17:43,560 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region t1,,1689189462654.72bd882f068d3ec881af007402997362. 
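[Editor's note] The second create of 't1' logged above is rejected: pid=15 rolls back with TableExistsException, and the exception is surfaced to the client through the MasterService.CreateTable call. A hedged sketch of the usual client-side pattern for tolerating that outcome; the helper class and method names here are hypothetical, and the Admin handle is assumed to come from a connection like the one in the previous sketch.

    import java.io.IOException;
    import org.apache.hadoop.hbase.TableExistsException;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.TableDescriptor;

    final class CreateIfAbsent {
      // Create the table only if it does not already exist; a duplicate create,
      // like the one logged above (pid=15), fails with TableExistsException.
      static void createIfAbsent(Admin admin, TableDescriptor desc) throws IOException {
        try {
          admin.createTable(desc);
        } catch (TableExistsException e) {
          // Already present: the master rolled the CreateTableProcedure back,
          // so there is nothing to clean up on the client side.
        }
      }
    }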
2023-07-12 19:17:43,564 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:33609/user/jenkins/test-data/d6326029-fefc-c726-4805-b208dab29d96/data/default/t1/72bd882f068d3ec881af007402997362/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-12 19:17:43,565 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1838): Closed t1,,1689189462654.72bd882f068d3ec881af007402997362. 2023-07-12 19:17:43,565 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for 72bd882f068d3ec881af007402997362: 2023-07-12 19:17:43,567 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.UnassignRegionHandler(149): Closed 72bd882f068d3ec881af007402997362 2023-07-12 19:17:43,567 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=17 updating hbase:meta row=72bd882f068d3ec881af007402997362, regionState=CLOSED 2023-07-12 19:17:43,567 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"t1,,1689189462654.72bd882f068d3ec881af007402997362.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689189463567"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689189463567"}]},"ts":"1689189463567"} 2023-07-12 19:17:43,570 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=18, resume processing ppid=17 2023-07-12 19:17:43,570 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=18, ppid=17, state=SUCCESS; CloseRegionProcedure 72bd882f068d3ec881af007402997362, server=jenkins-hbase20.apache.org,38393,1689189460532 in 162 msec 2023-07-12 19:17:43,574 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=17, resume processing ppid=16 2023-07-12 19:17:43,574 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=17, ppid=16, state=SUCCESS; TransitRegionStateProcedure table=t1, region=72bd882f068d3ec881af007402997362, UNASSIGN in 168 msec 2023-07-12 19:17:43,581 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"t1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689189463581"}]},"ts":"1689189463581"} 2023-07-12 19:17:43,582 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=t1, state=DISABLED in hbase:meta 2023-07-12 19:17:43,584 INFO [PEWorker-5] procedure.DisableTableProcedure(305): Set t1 to state=DISABLED 2023-07-12 19:17:43,586 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=16, state=SUCCESS; DisableTableProcedure table=t1 in 204 msec 2023-07-12 19:17:43,691 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33451] master.MasterRpcServices(1230): Checking to see if procedure is done pid=16 2023-07-12 19:17:43,692 INFO [Listener at localhost.localdomain/40989] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:t1, procId: 16 completed 2023-07-12 19:17:43,692 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33451] master.HMaster$5(2228): Client=jenkins//148.251.75.209 delete t1 2023-07-12 19:17:43,693 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33451] procedure2.ProcedureExecutor(1029): Stored pid=19, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=t1 2023-07-12 19:17:43,695 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=19, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=t1 2023-07-12 
19:17:43,695 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33451] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 't1' from rsgroup 'default' 2023-07-12 19:17:43,696 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=19, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=t1 2023-07-12 19:17:43,697 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33451] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 19:17:43,697 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33451] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 19:17:43,698 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33451] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-12 19:17:43,699 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:33609/user/jenkins/test-data/d6326029-fefc-c726-4805-b208dab29d96/.tmp/data/default/t1/72bd882f068d3ec881af007402997362 2023-07-12 19:17:43,699 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33451] master.MasterRpcServices(1230): Checking to see if procedure is done pid=19 2023-07-12 19:17:43,700 DEBUG [HFileArchiver-2] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost.localdomain:33609/user/jenkins/test-data/d6326029-fefc-c726-4805-b208dab29d96/.tmp/data/default/t1/72bd882f068d3ec881af007402997362/cf1, FileablePath, hdfs://localhost.localdomain:33609/user/jenkins/test-data/d6326029-fefc-c726-4805-b208dab29d96/.tmp/data/default/t1/72bd882f068d3ec881af007402997362/recovered.edits] 2023-07-12 19:17:43,705 DEBUG [HFileArchiver-2] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost.localdomain:33609/user/jenkins/test-data/d6326029-fefc-c726-4805-b208dab29d96/.tmp/data/default/t1/72bd882f068d3ec881af007402997362/recovered.edits/4.seqid to hdfs://localhost.localdomain:33609/user/jenkins/test-data/d6326029-fefc-c726-4805-b208dab29d96/archive/data/default/t1/72bd882f068d3ec881af007402997362/recovered.edits/4.seqid 2023-07-12 19:17:43,705 DEBUG [HFileArchiver-2] backup.HFileArchiver(596): Deleted hdfs://localhost.localdomain:33609/user/jenkins/test-data/d6326029-fefc-c726-4805-b208dab29d96/.tmp/data/default/t1/72bd882f068d3ec881af007402997362 2023-07-12 19:17:43,705 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived t1 regions 2023-07-12 19:17:43,707 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=19, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=t1 2023-07-12 19:17:43,709 WARN [PEWorker-3] procedure.DeleteTableProcedure(384): Deleting some vestigial 1 rows of t1 from hbase:meta 2023-07-12 19:17:43,710 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(421): Removing 't1' descriptor. 2023-07-12 19:17:43,712 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=19, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=t1 2023-07-12 19:17:43,712 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(411): Removing 't1' from region states. 
2023-07-12 19:17:43,713 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"t1,,1689189462654.72bd882f068d3ec881af007402997362.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689189463712"}]},"ts":"9223372036854775807"} 2023-07-12 19:17:43,715 INFO [PEWorker-3] hbase.MetaTableAccessor(1788): Deleted 1 regions from META 2023-07-12 19:17:43,715 DEBUG [PEWorker-3] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => 72bd882f068d3ec881af007402997362, NAME => 't1,,1689189462654.72bd882f068d3ec881af007402997362.', STARTKEY => '', ENDKEY => ''}] 2023-07-12 19:17:43,715 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(415): Marking 't1' as deleted. 2023-07-12 19:17:43,716 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"t1","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689189463715"}]},"ts":"9223372036854775807"} 2023-07-12 19:17:43,717 INFO [PEWorker-3] hbase.MetaTableAccessor(1658): Deleted table t1 state from META 2023-07-12 19:17:43,730 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(130): Finished pid=19, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=t1 2023-07-12 19:17:43,732 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=19, state=SUCCESS; DeleteTableProcedure table=t1 in 38 msec 2023-07-12 19:17:43,800 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33451] master.MasterRpcServices(1230): Checking to see if procedure is done pid=19 2023-07-12 19:17:43,801 INFO [Listener at localhost.localdomain/40989] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: default:t1, procId: 19 completed 2023-07-12 19:17:43,804 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-12 19:17:43,804 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 19:17:43,805 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//148.251.75.209 move tables [] to rsgroup default 2023-07-12 19:17:43,805 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33451] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
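[Editor's note] Pids 16 and 19 above are the standard teardown of 't1': disable (unassign the single region, mark the table DISABLED in hbase:meta) followed by delete (archive the region directory, remove the rows from hbase:meta, drop the descriptor). A minimal sketch of the corresponding Admin calls, assuming the same connection setup as in the earlier sketch; the wrapper class is hypothetical.

    import java.io.IOException;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;

    final class DropTableSketch {
      // Disable, then delete: delete refuses tables that are still enabled, so the
      // two steps mirror the DisableTableProcedure/DeleteTableProcedure pair above.
      static void dropTable(Admin admin, TableName table) throws IOException {
        if (admin.tableExists(table)) {
          if (!admin.isTableDisabled(table)) {
            admin.disableTable(table);   // pid=16: region closed, table state DISABLED
          }
          admin.deleteTable(table);      // pid=19: FS archive plus hbase:meta cleanup
        }
      }
    }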
2023-07-12 19:17:43,805 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveTables 2023-07-12 19:17:43,806 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//148.251.75.209 move servers [] to rsgroup default 2023-07-12 19:17:43,807 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveServers 2023-07-12 19:17:43,807 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//148.251.75.209 remove rsgroup master 2023-07-12 19:17:43,810 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33451] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 19:17:43,811 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33451] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-12 19:17:43,811 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-12 19:17:43,814 INFO [Listener at localhost.localdomain/40989] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-12 19:17:43,815 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//148.251.75.209 add rsgroup master 2023-07-12 19:17:43,817 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33451] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 19:17:43,818 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33451] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 19:17:43,832 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33451] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-12 19:17:43,833 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.AddRSGroup 2023-07-12 19:17:43,836 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-12 19:17:43,836 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 19:17:43,838 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//148.251.75.209 move servers [jenkins-hbase20.apache.org:33451] to rsgroup master 2023-07-12 19:17:43,838 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33451] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:33451 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-12 19:17:43,838 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33451] ipc.CallRunner(144): callId: 105 service: MasterService methodName: ExecMasterService size: 119 connection: 148.251.75.209:45826 deadline: 1689190663838, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:33451 is either offline or it does not exist. 2023-07-12 19:17:43,838 WARN [Listener at localhost.localdomain/40989] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:33451 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor56.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:33451 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-12 19:17:43,842 INFO [Listener at localhost.localdomain/40989] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 19:17:43,843 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-12 19:17:43,843 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 19:17:43,844 INFO [Listener at localhost.localdomain/40989] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase20.apache.org:33397, jenkins-hbase20.apache.org:37939, jenkins-hbase20.apache.org:38393, jenkins-hbase20.apache.org:46241], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-12 19:17:43,844 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//148.251.75.209 initiates rsgroup info retrieval, group=default 2023-07-12 19:17:43,844 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 19:17:43,862 INFO [Listener at localhost.localdomain/40989] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testNotMoveTableToNullRSGroupWhenCreatingExistingTable Thread=570 (was 560) - Thread LEAK? -, OpenFileDescriptor=842 (was 839) - OpenFileDescriptor LEAK? 
-, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=511 (was 511), ProcessCount=170 (was 170), AvailableMemoryMB=3313 (was 3417) 2023-07-12 19:17:43,862 WARN [Listener at localhost.localdomain/40989] hbase.ResourceChecker(130): Thread=570 is superior to 500 2023-07-12 19:17:43,878 INFO [Listener at localhost.localdomain/40989] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testNonExistentTableMove Thread=570, OpenFileDescriptor=842, MaxFileDescriptor=60000, SystemLoadAverage=511, ProcessCount=170, AvailableMemoryMB=3312 2023-07-12 19:17:43,878 WARN [Listener at localhost.localdomain/40989] hbase.ResourceChecker(130): Thread=570 is superior to 500 2023-07-12 19:17:43,879 INFO [Listener at localhost.localdomain/40989] rsgroup.TestRSGroupsBase(132): testNonExistentTableMove 2023-07-12 19:17:43,883 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-12 19:17:43,883 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 19:17:43,884 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//148.251.75.209 move tables [] to rsgroup default 2023-07-12 19:17:43,884 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33451] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-12 19:17:43,884 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveTables 2023-07-12 19:17:43,885 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//148.251.75.209 move servers [] to rsgroup default 2023-07-12 19:17:43,885 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveServers 2023-07-12 19:17:43,885 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//148.251.75.209 remove rsgroup master 2023-07-12 19:17:43,888 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33451] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 19:17:43,889 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33451] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-12 19:17:43,889 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-12 19:17:43,892 INFO [Listener at localhost.localdomain/40989] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-12 19:17:43,892 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//148.251.75.209 add rsgroup master 2023-07-12 19:17:43,894 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33451] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 
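[Editor's note] The block above is the per-test rsgroup cleanup that TestRSGroupsBase runs between methods: list groups, move empty table/server sets to 'default', drop and re-create the 'master' group, then attempt to move the master's own address into it, which fails with ConstraintException because only addresses of live region servers can be moved. A sketch of that cycle using RSGroupAdminClient, the client visible in the stack traces above; it assumes the branch-2 signatures (constructor over a Connection, addRSGroup/removeRSGroup by name, moveServers over a set of Address), and the wrapper is hypothetical.

    import java.util.Collections;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.net.Address;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

    final class RSGroupTeardownSketch {
      // Mirrors the cleanup logged above: reset the 'master' group, then try to
      // move the master's address into it. The final step is expected to fail,
      // as in the log, because the master (port 33451) is not a region server.
      static void resetMasterGroup(Connection conn, Address masterAddress) throws Exception {
        RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
        rsGroupAdmin.removeRSGroup("master");
        rsGroupAdmin.addRSGroup("master");
        try {
          rsGroupAdmin.moveServers(Collections.singleton(masterAddress), "master");
        } catch (org.apache.hadoop.hbase.constraint.ConstraintException expected) {
          // "Server ... is either offline or it does not exist." -- as in the log.
        }
      }
    }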
2023-07-12 19:17:43,894 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33451] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 19:17:43,899 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33451] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-12 19:17:43,900 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.AddRSGroup 2023-07-12 19:17:43,902 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-12 19:17:43,903 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 19:17:43,904 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//148.251.75.209 move servers [jenkins-hbase20.apache.org:33451] to rsgroup master 2023-07-12 19:17:43,905 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33451] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:33451 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-12 19:17:43,905 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33451] ipc.CallRunner(144): callId: 133 service: MasterService methodName: ExecMasterService size: 120 connection: 148.251.75.209:45826 deadline: 1689190663904, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:33451 is either offline or it does not exist. 2023-07-12 19:17:43,905 WARN [Listener at localhost.localdomain/40989] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:33451 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor56.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:33451 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-12 19:17:43,907 INFO [Listener at localhost.localdomain/40989] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 19:17:43,907 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-12 19:17:43,907 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 19:17:43,908 INFO [Listener at localhost.localdomain/40989] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase20.apache.org:33397, jenkins-hbase20.apache.org:37939, jenkins-hbase20.apache.org:38393, jenkins-hbase20.apache.org:46241], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-12 19:17:43,908 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//148.251.75.209 initiates rsgroup info retrieval, group=default 2023-07-12 19:17:43,908 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 19:17:43,909 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//148.251.75.209 initiates rsgroup info retrieval, table=GrouptestNonExistentTableMove 2023-07-12 19:17:43,909 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-12 19:17:43,910 INFO [Listener at localhost.localdomain/40989] rsgroup.TestRSGroupsAdmin1(389): Moving table GrouptestNonExistentTableMove to default 2023-07-12 19:17:43,915 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//148.251.75.209 initiates rsgroup info retrieval, table=GrouptestNonExistentTableMove 2023-07-12 19:17:43,915 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-12 19:17:43,918 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-12 19:17:43,918 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 19:17:43,919 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//148.251.75.209 move tables [] to rsgroup default 2023-07-12 19:17:43,919 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33451] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-12 19:17:43,919 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveTables 2023-07-12 19:17:43,919 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//148.251.75.209 move servers [] to rsgroup default 2023-07-12 19:17:43,919 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveServers 2023-07-12 19:17:43,920 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//148.251.75.209 remove rsgroup master 2023-07-12 19:17:43,922 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33451] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 19:17:43,923 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33451] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-12 19:17:43,924 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-12 19:17:43,926 INFO [Listener at localhost.localdomain/40989] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-12 19:17:43,926 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//148.251.75.209 add rsgroup master 2023-07-12 19:17:43,928 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33451] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 19:17:43,928 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33451] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 19:17:43,929 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33451] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-12 19:17:43,930 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.AddRSGroup 2023-07-12 19:17:43,932 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-12 19:17:43,932 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 19:17:43,934 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//148.251.75.209 move servers [jenkins-hbase20.apache.org:33451] to rsgroup master 2023-07-12 19:17:43,935 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33451] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:33451 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-12 19:17:43,935 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33451] ipc.CallRunner(144): callId: 168 service: MasterService methodName: ExecMasterService size: 120 connection: 148.251.75.209:45826 deadline: 1689190663934, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:33451 is either offline or it does not exist. 2023-07-12 19:17:43,935 WARN [Listener at localhost.localdomain/40989] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:33451 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor56.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:33451 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-12 19:17:43,937 INFO [Listener at localhost.localdomain/40989] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 19:17:43,937 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-12 19:17:43,937 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 19:17:43,938 INFO [Listener at localhost.localdomain/40989] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase20.apache.org:33397, jenkins-hbase20.apache.org:37939, jenkins-hbase20.apache.org:38393, jenkins-hbase20.apache.org:46241], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-12 19:17:43,938 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//148.251.75.209 initiates rsgroup info retrieval, group=default 2023-07-12 19:17:43,938 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 19:17:43,955 INFO [Listener at localhost.localdomain/40989] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testNonExistentTableMove Thread=572 (was 570) - Thread LEAK? 
-, OpenFileDescriptor=842 (was 842), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=511 (was 511), ProcessCount=170 (was 170), AvailableMemoryMB=3312 (was 3312) 2023-07-12 19:17:43,955 WARN [Listener at localhost.localdomain/40989] hbase.ResourceChecker(130): Thread=572 is superior to 500 2023-07-12 19:17:43,975 INFO [Listener at localhost.localdomain/40989] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testGroupInfoMultiAccessing Thread=572, OpenFileDescriptor=842, MaxFileDescriptor=60000, SystemLoadAverage=511, ProcessCount=170, AvailableMemoryMB=3311 2023-07-12 19:17:43,975 WARN [Listener at localhost.localdomain/40989] hbase.ResourceChecker(130): Thread=572 is superior to 500 2023-07-12 19:17:43,975 INFO [Listener at localhost.localdomain/40989] rsgroup.TestRSGroupsBase(132): testGroupInfoMultiAccessing 2023-07-12 19:17:43,979 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-12 19:17:43,979 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 19:17:43,980 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//148.251.75.209 move tables [] to rsgroup default 2023-07-12 19:17:43,980 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33451] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-12 19:17:43,980 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveTables 2023-07-12 19:17:43,981 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//148.251.75.209 move servers [] to rsgroup default 2023-07-12 19:17:43,981 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveServers 2023-07-12 19:17:43,982 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//148.251.75.209 remove rsgroup master 2023-07-12 19:17:43,985 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33451] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 19:17:43,986 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33451] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-12 19:17:43,987 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-12 19:17:43,989 INFO [Listener at localhost.localdomain/40989] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-12 19:17:43,990 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//148.251.75.209 add rsgroup master 2023-07-12 19:17:43,992 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33451] rsgroup.RSGroupInfoManagerImpl(662): 
Updating znode: /hbase/rsgroup/default 2023-07-12 19:17:43,992 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33451] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 19:17:43,999 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33451] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-12 19:17:44,001 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.AddRSGroup 2023-07-12 19:17:44,004 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-12 19:17:44,004 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 19:17:44,007 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//148.251.75.209 move servers [jenkins-hbase20.apache.org:33451] to rsgroup master 2023-07-12 19:17:44,008 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33451] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:33451 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-12 19:17:44,008 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33451] ipc.CallRunner(144): callId: 196 service: MasterService methodName: ExecMasterService size: 120 connection: 148.251.75.209:45826 deadline: 1689190664007, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:33451 is either offline or it does not exist. 2023-07-12 19:17:44,008 WARN [Listener at localhost.localdomain/40989] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:33451 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor56.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:33451 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-12 19:17:44,010 INFO [Listener at localhost.localdomain/40989] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 19:17:44,012 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-12 19:17:44,012 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 19:17:44,012 INFO [Listener at localhost.localdomain/40989] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase20.apache.org:33397, jenkins-hbase20.apache.org:37939, jenkins-hbase20.apache.org:38393, jenkins-hbase20.apache.org:46241], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-12 19:17:44,013 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//148.251.75.209 initiates rsgroup info retrieval, group=default 2023-07-12 19:17:44,014 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 19:17:44,019 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-12 19:17:44,019 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 19:17:44,020 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//148.251.75.209 move tables [] to rsgroup default 2023-07-12 19:17:44,020 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33451] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-12 19:17:44,020 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveTables 2023-07-12 19:17:44,021 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//148.251.75.209 move servers [] to rsgroup default 2023-07-12 19:17:44,021 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveServers 2023-07-12 19:17:44,022 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//148.251.75.209 remove rsgroup master 2023-07-12 19:17:44,026 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33451] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 19:17:44,027 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33451] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-12 19:17:44,028 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-12 19:17:44,033 INFO [Listener at localhost.localdomain/40989] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-12 19:17:44,033 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//148.251.75.209 add rsgroup master 2023-07-12 19:17:44,036 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33451] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 19:17:44,036 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33451] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 19:17:44,048 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33451] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-12 19:17:44,058 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.AddRSGroup 2023-07-12 19:17:44,062 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-12 19:17:44,062 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 19:17:44,066 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//148.251.75.209 move servers [jenkins-hbase20.apache.org:33451] to rsgroup master 2023-07-12 19:17:44,066 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33451] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:33451 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-12 19:17:44,066 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33451] ipc.CallRunner(144): callId: 224 service: MasterService methodName: ExecMasterService size: 120 connection: 148.251.75.209:45826 deadline: 1689190664066, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:33451 is either offline or it does not exist. 2023-07-12 19:17:44,067 WARN [Listener at localhost.localdomain/40989] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:33451 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor56.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:33451 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-12 19:17:44,069 INFO [Listener at localhost.localdomain/40989] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 19:17:44,070 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-12 19:17:44,071 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 19:17:44,071 INFO [Listener at localhost.localdomain/40989] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase20.apache.org:33397, jenkins-hbase20.apache.org:37939, jenkins-hbase20.apache.org:38393, jenkins-hbase20.apache.org:46241], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-12 19:17:44,071 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//148.251.75.209 initiates rsgroup info retrieval, group=default 2023-07-12 19:17:44,072 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 19:17:44,089 INFO [Listener at localhost.localdomain/40989] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testGroupInfoMultiAccessing Thread=573 (was 572) - Thread LEAK? 
-, OpenFileDescriptor=842 (was 842), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=511 (was 511), ProcessCount=170 (was 170), AvailableMemoryMB=3311 (was 3311) 2023-07-12 19:17:44,089 WARN [Listener at localhost.localdomain/40989] hbase.ResourceChecker(130): Thread=573 is superior to 500 2023-07-12 19:17:44,113 INFO [Listener at localhost.localdomain/40989] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testNamespaceConstraint Thread=573, OpenFileDescriptor=842, MaxFileDescriptor=60000, SystemLoadAverage=511, ProcessCount=170, AvailableMemoryMB=3310 2023-07-12 19:17:44,113 WARN [Listener at localhost.localdomain/40989] hbase.ResourceChecker(130): Thread=573 is superior to 500 2023-07-12 19:17:44,113 INFO [Listener at localhost.localdomain/40989] rsgroup.TestRSGroupsBase(132): testNamespaceConstraint 2023-07-12 19:17:44,117 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-12 19:17:44,117 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 19:17:44,118 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//148.251.75.209 move tables [] to rsgroup default 2023-07-12 19:17:44,118 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33451] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-12 19:17:44,118 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveTables 2023-07-12 19:17:44,119 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//148.251.75.209 move servers [] to rsgroup default 2023-07-12 19:17:44,119 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveServers 2023-07-12 19:17:44,120 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//148.251.75.209 remove rsgroup master 2023-07-12 19:17:44,122 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33451] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 19:17:44,123 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33451] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-12 19:17:44,124 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-12 19:17:44,126 INFO [Listener at localhost.localdomain/40989] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-12 19:17:44,127 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//148.251.75.209 add rsgroup master 2023-07-12 19:17:44,129 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33451] rsgroup.RSGroupInfoManagerImpl(662): 
Updating znode: /hbase/rsgroup/default 2023-07-12 19:17:44,130 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33451] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 19:17:44,133 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33451] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-12 19:17:44,134 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.AddRSGroup 2023-07-12 19:17:44,136 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-12 19:17:44,136 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 19:17:44,139 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//148.251.75.209 move servers [jenkins-hbase20.apache.org:33451] to rsgroup master 2023-07-12 19:17:44,139 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33451] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:33451 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-12 19:17:44,139 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33451] ipc.CallRunner(144): callId: 252 service: MasterService methodName: ExecMasterService size: 120 connection: 148.251.75.209:45826 deadline: 1689190664139, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:33451 is either offline or it does not exist. 2023-07-12 19:17:44,140 WARN [Listener at localhost.localdomain/40989] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:33451 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor56.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:33451 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-12 19:17:44,141 INFO [Listener at localhost.localdomain/40989] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 19:17:44,143 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-12 19:17:44,143 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 19:17:44,143 INFO [Listener at localhost.localdomain/40989] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase20.apache.org:33397, jenkins-hbase20.apache.org:37939, jenkins-hbase20.apache.org:38393, jenkins-hbase20.apache.org:46241], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-12 19:17:44,144 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//148.251.75.209 initiates rsgroup info retrieval, group=default 2023-07-12 19:17:44,144 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 19:17:44,144 INFO [Listener at localhost.localdomain/40989] rsgroup.TestRSGroupsAdmin1(154): testNamespaceConstraint 2023-07-12 19:17:44,145 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//148.251.75.209 add rsgroup Group_foo 2023-07-12 19:17:44,147 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33451] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_foo 2023-07-12 19:17:44,158 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33451] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 19:17:44,158 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33451] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 19:17:44,158 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33451] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-12 19:17:44,165 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.AddRSGroup 2023-07-12 19:17:44,167 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-12 19:17:44,167 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 19:17:44,170 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33451] master.HMaster$15(3014): Client=jenkins//148.251.75.209 creating {NAME => 'Group_foo', hbase.rsgroup.name => 'Group_foo'} 2023-07-12 19:17:44,171 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33451] procedure2.ProcedureExecutor(1029): Stored pid=20, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, 
namespace=Group_foo 2023-07-12 19:17:44,175 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33451] master.MasterRpcServices(1230): Checking to see if procedure is done pid=20 2023-07-12 19:17:44,179 DEBUG [Listener at localhost.localdomain/40989-EventThread] zookeeper.ZKWatcher(600): master:33451-0x100829e263a0000, quorum=127.0.0.1:50438, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-12 19:17:44,182 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=20, state=SUCCESS; CreateNamespaceProcedure, namespace=Group_foo in 10 msec 2023-07-12 19:17:44,276 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33451] master.MasterRpcServices(1230): Checking to see if procedure is done pid=20 2023-07-12 19:17:44,277 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//148.251.75.209 remove rsgroup Group_foo 2023-07-12 19:17:44,279 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33451] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup Group_foo is referenced by namespace: Group_foo at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.removeRSGroup(RSGroupAdminServer.java:504) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.removeRSGroup(RSGroupAdminEndpoint.java:278) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16208) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-12 19:17:44,279 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33451] ipc.CallRunner(144): callId: 268 service: MasterService methodName: ExecMasterService size: 91 connection: 148.251.75.209:45826 deadline: 1689190664277, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup Group_foo is referenced by namespace: Group_foo 2023-07-12 19:17:44,285 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33451] master.HMaster$16(3053): Client=jenkins//148.251.75.209 modify {NAME => 'Group_foo', hbase.rsgroup.name => 'Group_foo'} 2023-07-12 19:17:44,291 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33451] procedure2.ProcedureExecutor(1029): Stored pid=21, state=RUNNABLE:MODIFY_NAMESPACE_PREPARE; ModifyNamespaceProcedure, namespace=Group_foo 2023-07-12 19:17:44,298 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33451] master.MasterRpcServices(1230): Checking to see if procedure is done pid=21 2023-07-12 19:17:44,299 DEBUG [Listener at localhost.localdomain/40989-EventThread] zookeeper.ZKWatcher(600): master:33451-0x100829e263a0000, quorum=127.0.0.1:50438, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/Group_foo 2023-07-12 19:17:44,300 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=21, state=SUCCESS; ModifyNamespaceProcedure, namespace=Group_foo in 13 msec 2023-07-12 19:17:44,399 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33451] master.MasterRpcServices(1230): Checking to see if procedure is done pid=21 2023-07-12 19:17:44,400 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//148.251.75.209 add rsgroup Group_anotherGroup 2023-07-12 19:17:44,403 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33451] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_anotherGroup 2023-07-12 19:17:44,404 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33451] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 19:17:44,404 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33451] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_foo 2023-07-12 19:17:44,405 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33451] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 19:17:44,405 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33451] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-12 19:17:44,407 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.AddRSGroup 2023-07-12 19:17:44,409 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-12 19:17:44,409 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 19:17:44,411 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33451] master.HMaster$17(3086): Client=jenkins//148.251.75.209 delete Group_foo 2023-07-12 19:17:44,412 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33451] procedure2.ProcedureExecutor(1029): Stored pid=22, state=RUNNABLE:DELETE_NAMESPACE_PREPARE; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-12 19:17:44,414 INFO [PEWorker-2] procedure.DeleteNamespaceProcedure(73): pid=22, state=RUNNABLE:DELETE_NAMESPACE_PREPARE, locked=true; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-12 19:17:44,416 INFO [PEWorker-2] procedure.DeleteNamespaceProcedure(73): pid=22, state=RUNNABLE:DELETE_NAMESPACE_DELETE_FROM_NS_TABLE, locked=true; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-12 19:17:44,416 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33451] master.MasterRpcServices(1230): Checking to see if procedure is done pid=22 2023-07-12 19:17:44,417 INFO [PEWorker-2] procedure.DeleteNamespaceProcedure(73): pid=22, state=RUNNABLE:DELETE_NAMESPACE_REMOVE_FROM_ZK, locked=true; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-12 19:17:44,417 DEBUG [Listener at localhost.localdomain/40989-EventThread] zookeeper.ZKWatcher(600): master:33451-0x100829e263a0000, quorum=127.0.0.1:50438, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/namespace/Group_foo 2023-07-12 19:17:44,418 DEBUG [Listener at localhost.localdomain/40989-EventThread] zookeeper.ZKWatcher(600): master:33451-0x100829e263a0000, quorum=127.0.0.1:50438, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 
2023-07-12 19:17:44,418 INFO [PEWorker-2] procedure.DeleteNamespaceProcedure(73): pid=22, state=RUNNABLE:DELETE_NAMESPACE_DELETE_DIRECTORIES, locked=true; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-12 19:17:44,419 INFO [PEWorker-2] procedure.DeleteNamespaceProcedure(73): pid=22, state=RUNNABLE:DELETE_NAMESPACE_REMOVE_NAMESPACE_QUOTA, locked=true; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-12 19:17:44,420 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=22, state=SUCCESS; DeleteNamespaceProcedure, namespace=Group_foo in 8 msec 2023-07-12 19:17:44,517 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33451] master.MasterRpcServices(1230): Checking to see if procedure is done pid=22 2023-07-12 19:17:44,518 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//148.251.75.209 remove rsgroup Group_foo 2023-07-12 19:17:44,522 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33451] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_anotherGroup 2023-07-12 19:17:44,523 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33451] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 19:17:44,523 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33451] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 19:17:44,524 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33451] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 7 2023-07-12 19:17:44,525 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-12 19:17:44,529 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-12 19:17:44,529 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 19:17:44,533 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33451] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Region server group foo does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint.preCreateNamespace(RSGroupAdminEndpoint.java:591) at org.apache.hadoop.hbase.master.MasterCoprocessorHost$1.call(MasterCoprocessorHost.java:222) at org.apache.hadoop.hbase.master.MasterCoprocessorHost$1.call(MasterCoprocessorHost.java:219) at org.apache.hadoop.hbase.coprocessor.CoprocessorHost$ObserverOperationWithoutResult.callObserver(CoprocessorHost.java:558) at org.apache.hadoop.hbase.coprocessor.CoprocessorHost.execOperation(CoprocessorHost.java:631) at org.apache.hadoop.hbase.master.MasterCoprocessorHost.preCreateNamespace(MasterCoprocessorHost.java:219) at org.apache.hadoop.hbase.master.HMaster$15.run(HMaster.java:3010) at org.apache.hadoop.hbase.master.procedure.MasterProcedureUtil.submitProcedure(MasterProcedureUtil.java:132) at org.apache.hadoop.hbase.master.HMaster.createNamespace(HMaster.java:3007) at org.apache.hadoop.hbase.master.MasterRpcServices.createNamespace(MasterRpcServices.java:684) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-12 19:17:44,534 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33451] ipc.CallRunner(144): callId: 290 service: MasterService methodName: CreateNamespace size: 70 connection: 148.251.75.209:45826 deadline: 1689189524533, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Region server group foo does not exist. 2023-07-12 19:17:44,538 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-12 19:17:44,538 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 19:17:44,539 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//148.251.75.209 move tables [] to rsgroup default 2023-07-12 19:17:44,539 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33451] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-12 19:17:44,539 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveTables 2023-07-12 19:17:44,540 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//148.251.75.209 move servers [] to rsgroup default 2023-07-12 19:17:44,540 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveServers 2023-07-12 19:17:44,541 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//148.251.75.209 remove rsgroup Group_anotherGroup 2023-07-12 19:17:44,543 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(130): Registering adapter for the MetricRegistry: Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver 2023-07-12 19:17:44,543 INFO [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(134): Registering Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver Metrics about HBase MasterObservers 2023-07-12 19:17:44,544 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(130): Registering adapter for the MetricRegistry: RegionServer,sub=Coprocessor.Region.CP_org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-12 19:17:44,544 INFO [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(134): Registering RegionServer,sub=Coprocessor.Region.CP_org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint Metrics about HBase RegionObservers 2023-07-12 19:17:44,544 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(130): Registering adapter for the MetricRegistry: Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint 2023-07-12 19:17:44,544 INFO [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(134): Registering Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint Metrics about HBase MasterObservers 2023-07-12 19:17:44,544 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33451] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 19:17:44,545 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33451] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 19:17:44,545 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33451] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-12 19:17:44,546 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-12 19:17:44,547 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//148.251.75.209 move tables [] to rsgroup default 2023-07-12 19:17:44,547 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33451] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-12 19:17:44,547 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveTables 2023-07-12 19:17:44,548 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//148.251.75.209 move servers [] to rsgroup default 2023-07-12 19:17:44,548 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveServers 2023-07-12 19:17:44,549 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//148.251.75.209 remove rsgroup master 2023-07-12 19:17:44,552 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33451] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 19:17:44,552 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33451] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-12 19:17:44,558 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-12 19:17:44,560 INFO [Listener at localhost.localdomain/40989] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-12 19:17:44,561 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//148.251.75.209 add rsgroup master 2023-07-12 19:17:44,562 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33451] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-12 19:17:44,563 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33451] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-12 19:17:44,566 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33451] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-12 19:17:44,567 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.AddRSGroup 2023-07-12 19:17:44,569 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-12 19:17:44,569 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 19:17:44,571 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//148.251.75.209 move servers [jenkins-hbase20.apache.org:33451] to rsgroup master 2023-07-12 19:17:44,571 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33451] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:33451 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-12 19:17:44,571 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33451] ipc.CallRunner(144): callId: 320 service: MasterService methodName: ExecMasterService size: 120 connection: 148.251.75.209:45826 deadline: 1689190664571, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:33451 is either offline or it does not exist. 2023-07-12 19:17:44,571 WARN [Listener at localhost.localdomain/40989] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:33451 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor56.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:33451 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-12 19:17:44,573 INFO [Listener at localhost.localdomain/40989] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-12 19:17:44,573 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-12 19:17:44,573 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-12 19:17:44,574 INFO [Listener at localhost.localdomain/40989] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase20.apache.org:33397, jenkins-hbase20.apache.org:37939, jenkins-hbase20.apache.org:38393, jenkins-hbase20.apache.org:46241], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-12 19:17:44,574 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//148.251.75.209 initiates rsgroup info retrieval, group=default 2023-07-12 19:17:44,574 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-12 19:17:44,594 INFO [Listener at localhost.localdomain/40989] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testNamespaceConstraint Thread=573 (was 573), OpenFileDescriptor=842 (was 842), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=511 (was 511), ProcessCount=170 (was 170), AvailableMemoryMB=3300 (was 3310) 2023-07-12 19:17:44,594 WARN [Listener at localhost.localdomain/40989] hbase.ResourceChecker(130): Thread=573 is superior to 500 2023-07-12 19:17:44,594 INFO [Listener at localhost.localdomain/40989] hbase.HBaseTestingUtility(1286): Shutting down minicluster 2023-07-12 19:17:44,594 INFO [Listener at localhost.localdomain/40989] client.ConnectionImplementation(1979): Closing master protocol: MasterService 2023-07-12 19:17:44,594 DEBUG [Listener at localhost.localdomain/40989] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x7a4678cd to 127.0.0.1:50438 2023-07-12 19:17:44,594 DEBUG [Listener at localhost.localdomain/40989] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-12 19:17:44,595 
DEBUG [Listener at localhost.localdomain/40989] util.JVMClusterUtil(237): Shutting down HBase Cluster 2023-07-12 19:17:44,595 DEBUG [Listener at localhost.localdomain/40989] util.JVMClusterUtil(257): Found active master hash=1463810097, stopped=false 2023-07-12 19:17:44,595 DEBUG [Listener at localhost.localdomain/40989] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint 2023-07-12 19:17:44,595 DEBUG [Listener at localhost.localdomain/40989] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver 2023-07-12 19:17:44,595 INFO [Listener at localhost.localdomain/40989] master.ServerManager(901): Cluster shutdown requested of master=jenkins-hbase20.apache.org,33451,1689189460437 2023-07-12 19:17:44,597 DEBUG [Listener at localhost.localdomain/40989-EventThread] zookeeper.ZKWatcher(600): regionserver:37939-0x100829e263a000b, quorum=127.0.0.1:50438, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-12 19:17:44,597 DEBUG [Listener at localhost.localdomain/40989-EventThread] zookeeper.ZKWatcher(600): regionserver:38393-0x100829e263a0001, quorum=127.0.0.1:50438, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-12 19:17:44,597 DEBUG [Listener at localhost.localdomain/40989-EventThread] zookeeper.ZKWatcher(600): regionserver:46241-0x100829e263a0003, quorum=127.0.0.1:50438, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-12 19:17:44,597 DEBUG [Listener at localhost.localdomain/40989-EventThread] zookeeper.ZKWatcher(600): master:33451-0x100829e263a0000, quorum=127.0.0.1:50438, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-12 19:17:44,597 DEBUG [Listener at localhost.localdomain/40989-EventThread] zookeeper.ZKWatcher(600): regionserver:33397-0x100829e263a0002, quorum=127.0.0.1:50438, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-12 19:17:44,598 DEBUG [Listener at localhost.localdomain/40989-EventThread] zookeeper.ZKWatcher(600): master:33451-0x100829e263a0000, quorum=127.0.0.1:50438, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-12 19:17:44,598 INFO [Listener at localhost.localdomain/40989] procedure2.ProcedureExecutor(629): Stopping 2023-07-12 19:17:44,598 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:37939-0x100829e263a000b, quorum=127.0.0.1:50438, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-12 19:17:44,598 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:38393-0x100829e263a0001, quorum=127.0.0.1:50438, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-12 19:17:44,598 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:46241-0x100829e263a0003, quorum=127.0.0.1:50438, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-12 19:17:44,598 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:33397-0x100829e263a0002, quorum=127.0.0.1:50438, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-12 19:17:44,598 DEBUG [Listener at localhost.localdomain/40989] zookeeper.ReadOnlyZKClient(361): Close zookeeper 
connection 0x58259eb7 to 127.0.0.1:50438 2023-07-12 19:17:44,598 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:33451-0x100829e263a0000, quorum=127.0.0.1:50438, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-12 19:17:44,598 DEBUG [Listener at localhost.localdomain/40989] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-12 19:17:44,598 INFO [Listener at localhost.localdomain/40989] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase20.apache.org,38393,1689189460532' ***** 2023-07-12 19:17:44,598 INFO [Listener at localhost.localdomain/40989] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-12 19:17:44,599 INFO [RS:0;jenkins-hbase20:38393] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-12 19:17:44,599 INFO [Listener at localhost.localdomain/40989] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase20.apache.org,33397,1689189460611' ***** 2023-07-12 19:17:44,600 INFO [Listener at localhost.localdomain/40989] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-12 19:17:44,600 INFO [Listener at localhost.localdomain/40989] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase20.apache.org,46241,1689189460679' ***** 2023-07-12 19:17:44,600 INFO [Listener at localhost.localdomain/40989] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-12 19:17:44,600 INFO [RS:1;jenkins-hbase20:33397] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-12 19:17:44,601 INFO [RS:2;jenkins-hbase20:46241] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-12 19:17:44,601 INFO [Listener at localhost.localdomain/40989] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase20.apache.org,37939,1689189462230' ***** 2023-07-12 19:17:44,602 INFO [Listener at localhost.localdomain/40989] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-12 19:17:44,603 INFO [RS:3;jenkins-hbase20:37939] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-12 19:17:44,603 INFO [RS:0;jenkins-hbase20:38393] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@a6b2912{regionserver,/,null,STOPPED}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver} 2023-07-12 19:17:44,606 INFO [RS:0;jenkins-hbase20:38393] server.AbstractConnector(383): Stopped ServerConnector@181666e{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-12 19:17:44,606 INFO [RS:2;jenkins-hbase20:46241] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@17f73ce5{regionserver,/,null,STOPPED}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver} 2023-07-12 19:17:44,606 INFO [RS:0;jenkins-hbase20:38393] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-12 19:17:44,606 INFO [RS:3;jenkins-hbase20:37939] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@7a7e30e8{regionserver,/,null,STOPPED}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver} 2023-07-12 19:17:44,606 INFO [RS:1;jenkins-hbase20:33397] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@6db27505{regionserver,/,null,STOPPED}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver} 2023-07-12 
19:17:44,607 INFO [RS:0;jenkins-hbase20:38393] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@2f62e854{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,STOPPED} 2023-07-12 19:17:44,607 INFO [RS:2;jenkins-hbase20:46241] server.AbstractConnector(383): Stopped ServerConnector@32e52fef{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-12 19:17:44,608 INFO [RS:1;jenkins-hbase20:33397] server.AbstractConnector(383): Stopped ServerConnector@4d47d773{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-12 19:17:44,609 INFO [RS:0;jenkins-hbase20:38393] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@6cf75b18{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/ca399a11-a6b6-9528-4aae-916e50d2e9df/hadoop.log.dir/,STOPPED} 2023-07-12 19:17:44,608 INFO [RS:3;jenkins-hbase20:37939] server.AbstractConnector(383): Stopped ServerConnector@1dd9ac5b{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-12 19:17:44,609 INFO [RS:1;jenkins-hbase20:33397] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-12 19:17:44,609 INFO [RS:2;jenkins-hbase20:46241] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-12 19:17:44,609 INFO [RS:3;jenkins-hbase20:37939] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-12 19:17:44,611 INFO [RS:1;jenkins-hbase20:33397] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@77f2188c{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,STOPPED} 2023-07-12 19:17:44,611 INFO [RS:2;jenkins-hbase20:46241] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@153040b5{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,STOPPED} 2023-07-12 19:17:44,612 INFO [RS:3;jenkins-hbase20:37939] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@3df20a58{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,STOPPED} 2023-07-12 19:17:44,611 INFO [RS:0;jenkins-hbase20:38393] regionserver.HeapMemoryManager(220): Stopping 2023-07-12 19:17:44,613 INFO [RS:2;jenkins-hbase20:46241] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@21511886{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/ca399a11-a6b6-9528-4aae-916e50d2e9df/hadoop.log.dir/,STOPPED} 2023-07-12 19:17:44,614 INFO [RS:0;jenkins-hbase20:38393] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-12 19:17:44,612 INFO [RS:1;jenkins-hbase20:33397] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@3442f331{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/ca399a11-a6b6-9528-4aae-916e50d2e9df/hadoop.log.dir/,STOPPED} 2023-07-12 19:17:44,614 INFO [RS:0;jenkins-hbase20:38393] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 
2023-07-12 19:17:44,614 INFO [RS:3;jenkins-hbase20:37939] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@41783fd8{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/ca399a11-a6b6-9528-4aae-916e50d2e9df/hadoop.log.dir/,STOPPED} 2023-07-12 19:17:44,614 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-12 19:17:44,614 INFO [RS:2;jenkins-hbase20:46241] regionserver.HeapMemoryManager(220): Stopping 2023-07-12 19:17:44,614 INFO [RS:0;jenkins-hbase20:38393] regionserver.HRegionServer(1144): stopping server jenkins-hbase20.apache.org,38393,1689189460532 2023-07-12 19:17:44,614 INFO [RS:2;jenkins-hbase20:46241] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-12 19:17:44,614 DEBUG [RS:0;jenkins-hbase20:38393] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x4776e5c8 to 127.0.0.1:50438 2023-07-12 19:17:44,615 INFO [RS:1;jenkins-hbase20:33397] regionserver.HeapMemoryManager(220): Stopping 2023-07-12 19:17:44,615 INFO [RS:2;jenkins-hbase20:46241] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-12 19:17:44,615 INFO [RS:3;jenkins-hbase20:37939] regionserver.HeapMemoryManager(220): Stopping 2023-07-12 19:17:44,615 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-12 19:17:44,615 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-12 19:17:44,615 INFO [RS:2;jenkins-hbase20:46241] regionserver.HRegionServer(1144): stopping server jenkins-hbase20.apache.org,46241,1689189460679 2023-07-12 19:17:44,615 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-12 19:17:44,615 DEBUG [RS:2;jenkins-hbase20:46241] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x736147b0 to 127.0.0.1:50438 2023-07-12 19:17:44,615 INFO [RS:1;jenkins-hbase20:33397] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-12 19:17:44,615 DEBUG [RS:0;jenkins-hbase20:38393] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-12 19:17:44,615 INFO [RS:0;jenkins-hbase20:38393] regionserver.HRegionServer(1170): stopping server jenkins-hbase20.apache.org,38393,1689189460532; all regions closed. 2023-07-12 19:17:44,615 INFO [RS:1;jenkins-hbase20:33397] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-12 19:17:44,615 DEBUG [RS:2;jenkins-hbase20:46241] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-12 19:17:44,615 INFO [RS:3;jenkins-hbase20:37939] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-12 19:17:44,616 INFO [RS:2;jenkins-hbase20:46241] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-12 19:17:44,616 INFO [RS:1;jenkins-hbase20:33397] regionserver.HRegionServer(3305): Received CLOSE for cbc17255d82e0ee87a232158f33f4740 2023-07-12 19:17:44,616 INFO [RS:2;jenkins-hbase20:46241] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-12 19:17:44,616 INFO [RS:3;jenkins-hbase20:37939] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 
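
The STOPPING banners and the HeapMemoryManager / flush-procedure-manager / snapshot-manager stop messages above are emitted as each region server of the mini cluster is asked to stop cleanly. A minimal sketch of driving that same clean stop from test code through the MiniHBaseCluster handle follows; the class name, the server index and the startMiniCluster(3) sizing are assumptions for illustration only, not the actual TestRSGroupsAdmin1 code.

    import org.apache.hadoop.hbase.HBaseTestingUtility;
    import org.apache.hadoop.hbase.MiniHBaseCluster;

    public class StopOneRegionServerSketch {
      public static void main(String[] args) throws Exception {
        HBaseTestingUtility util = new HBaseTestingUtility();
        util.startMiniCluster(3);                      // arbitrary size for this sketch
        MiniHBaseCluster cluster = util.getHBaseCluster();
        // Request a clean stop of region server 0; this produces the
        // "***** STOPPING region server ... *****" / "STOPPED: Shutdown requested" lines.
        cluster.stopRegionServer(0);
        // Block until its worker threads (flush, snapshot and compaction handlers) have exited.
        cluster.waitOnRegionServer(0);
        util.shutdownMiniCluster();
      }
    }
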
2023-07-12 19:17:44,616 INFO [RS:3;jenkins-hbase20:37939] regionserver.HRegionServer(1144): stopping server jenkins-hbase20.apache.org,37939,1689189462230 2023-07-12 19:17:44,616 INFO [regionserver/jenkins-hbase20:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-12 19:17:44,616 INFO [RS:2;jenkins-hbase20:46241] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-12 19:17:44,616 INFO [RS:1;jenkins-hbase20:33397] regionserver.HRegionServer(3305): Received CLOSE for 937a51bce1914656b47d8675dd63a3ef 2023-07-12 19:17:44,616 DEBUG [RS:3;jenkins-hbase20:37939] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x5cc30725 to 127.0.0.1:50438 2023-07-12 19:17:44,617 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing cbc17255d82e0ee87a232158f33f4740, disabling compactions & flushes 2023-07-12 19:17:44,617 INFO [RS:1;jenkins-hbase20:33397] regionserver.HRegionServer(1144): stopping server jenkins-hbase20.apache.org,33397,1689189460611 2023-07-12 19:17:44,616 INFO [RS:2;jenkins-hbase20:46241] regionserver.HRegionServer(3305): Received CLOSE for 1588230740 2023-07-12 19:17:44,617 DEBUG [RS:1;jenkins-hbase20:33397] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x783acad3 to 127.0.0.1:50438 2023-07-12 19:17:44,617 DEBUG [RS:1;jenkins-hbase20:33397] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-12 19:17:44,617 INFO [RS:1;jenkins-hbase20:33397] regionserver.HRegionServer(1474): Waiting on 2 regions to close 2023-07-12 19:17:44,617 DEBUG [RS:1;jenkins-hbase20:33397] regionserver.HRegionServer(1478): Online Regions={cbc17255d82e0ee87a232158f33f4740=hbase:rsgroup,,1689189461710.cbc17255d82e0ee87a232158f33f4740., 937a51bce1914656b47d8675dd63a3ef=hbase:namespace,,1689189461733.937a51bce1914656b47d8675dd63a3ef.} 2023-07-12 19:17:44,617 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1689189461710.cbc17255d82e0ee87a232158f33f4740. 2023-07-12 19:17:44,617 DEBUG [RS:1;jenkins-hbase20:33397] regionserver.HRegionServer(1504): Waiting on 937a51bce1914656b47d8675dd63a3ef, cbc17255d82e0ee87a232158f33f4740 2023-07-12 19:17:44,617 DEBUG [RS:3;jenkins-hbase20:37939] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-12 19:17:44,617 INFO [RS:3;jenkins-hbase20:37939] regionserver.HRegionServer(1170): stopping server jenkins-hbase20.apache.org,37939,1689189462230; all regions closed. 2023-07-12 19:17:44,617 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1689189461710.cbc17255d82e0ee87a232158f33f4740. 2023-07-12 19:17:44,617 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-12 19:17:44,618 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-12 19:17:44,617 INFO [RS:2;jenkins-hbase20:46241] regionserver.HRegionServer(1474): Waiting on 1 regions to close 2023-07-12 19:17:44,618 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-12 19:17:44,618 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1689189461710.cbc17255d82e0ee87a232158f33f4740. 
after waiting 0 ms 2023-07-12 19:17:44,618 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-12 19:17:44,618 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-12 19:17:44,618 DEBUG [RS:2;jenkins-hbase20:46241] regionserver.HRegionServer(1478): Online Regions={1588230740=hbase:meta,,1.1588230740} 2023-07-12 19:17:44,618 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=4.51 KB heapSize=8.82 KB 2023-07-12 19:17:44,618 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1689189461710.cbc17255d82e0ee87a232158f33f4740. 2023-07-12 19:17:44,619 INFO [regionserver/jenkins-hbase20:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-12 19:17:44,618 DEBUG [RS:2;jenkins-hbase20:46241] regionserver.HRegionServer(1504): Waiting on 1588230740 2023-07-12 19:17:44,621 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(2745): Flushing cbc17255d82e0ee87a232158f33f4740 1/1 column families, dataSize=6.53 KB heapSize=10.82 KB 2023-07-12 19:17:44,625 DEBUG [RS:0;jenkins-hbase20:38393] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/d6326029-fefc-c726-4805-b208dab29d96/oldWALs 2023-07-12 19:17:44,625 INFO [RS:0;jenkins-hbase20:38393] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase20.apache.org%2C38393%2C1689189460532:(num 1689189461569) 2023-07-12 19:17:44,625 DEBUG [RS:0;jenkins-hbase20:38393] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-12 19:17:44,625 INFO [RS:0;jenkins-hbase20:38393] regionserver.LeaseManager(133): Closed leases 2023-07-12 19:17:44,625 INFO [RS:0;jenkins-hbase20:38393] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase20:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-07-12 19:17:44,625 INFO [RS:0;jenkins-hbase20:38393] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-12 19:17:44,625 INFO [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-12 19:17:44,625 INFO [RS:0;jenkins-hbase20:38393] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-12 19:17:44,625 INFO [RS:0;jenkins-hbase20:38393] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 
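
Each region close above first disables further updates and then flushes whatever remains in the memstore to a new HFile (for example the 6.53 KB flushed for the single column family of hbase:rsgroup), so nothing is left only in memory when the WAL is archived. The same flush can be requested explicitly through the public Admin API; the sketch below is illustrative only, and the configuration source and table name are assumptions.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;

    public class ExplicitFlushSketch {
      public static void main(String[] args) throws Exception {
        // Assumes an hbase-site.xml with the cluster/quorum settings is on the classpath.
        Configuration conf = HBaseConfiguration.create();
        try (Connection conn = ConnectionFactory.createConnection(conf);
             Admin admin = conn.getAdmin()) {
          // Force a memstore flush of the rsgroup system table; region close performs
          // an equivalent flush automatically, as the log above shows.
          admin.flush(TableName.valueOf("hbase", "rsgroup"));
        }
      }
    }
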
2023-07-12 19:17:44,626 INFO [RS:0;jenkins-hbase20:38393] ipc.NettyRpcServer(158): Stopping server on /148.251.75.209:38393 2023-07-12 19:17:44,627 DEBUG [RS:3;jenkins-hbase20:37939] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/d6326029-fefc-c726-4805-b208dab29d96/oldWALs 2023-07-12 19:17:44,627 INFO [RS:3;jenkins-hbase20:37939] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase20.apache.org%2C37939%2C1689189462230:(num 1689189462564) 2023-07-12 19:17:44,627 DEBUG [Listener at localhost.localdomain/40989-EventThread] zookeeper.ZKWatcher(600): regionserver:38393-0x100829e263a0001, quorum=127.0.0.1:50438, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase20.apache.org,38393,1689189460532 2023-07-12 19:17:44,627 DEBUG [RS:3;jenkins-hbase20:37939] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-12 19:17:44,628 DEBUG [Listener at localhost.localdomain/40989-EventThread] zookeeper.ZKWatcher(600): regionserver:46241-0x100829e263a0003, quorum=127.0.0.1:50438, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase20.apache.org,38393,1689189460532 2023-07-12 19:17:44,628 INFO [RS:3;jenkins-hbase20:37939] regionserver.LeaseManager(133): Closed leases 2023-07-12 19:17:44,627 DEBUG [Listener at localhost.localdomain/40989-EventThread] zookeeper.ZKWatcher(600): regionserver:38393-0x100829e263a0001, quorum=127.0.0.1:50438, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-12 19:17:44,628 DEBUG [Listener at localhost.localdomain/40989-EventThread] zookeeper.ZKWatcher(600): regionserver:33397-0x100829e263a0002, quorum=127.0.0.1:50438, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase20.apache.org,38393,1689189460532 2023-07-12 19:17:44,628 DEBUG [Listener at localhost.localdomain/40989-EventThread] zookeeper.ZKWatcher(600): regionserver:33397-0x100829e263a0002, quorum=127.0.0.1:50438, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-12 19:17:44,628 INFO [RS:3;jenkins-hbase20:37939] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase20:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-07-12 19:17:44,628 INFO [RS:3;jenkins-hbase20:37939] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-12 19:17:44,627 DEBUG [Listener at localhost.localdomain/40989-EventThread] zookeeper.ZKWatcher(600): regionserver:37939-0x100829e263a000b, quorum=127.0.0.1:50438, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase20.apache.org,38393,1689189460532 2023-07-12 19:17:44,628 INFO [RS:3;jenkins-hbase20:37939] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-12 19:17:44,628 INFO [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 
2023-07-12 19:17:44,628 DEBUG [Listener at localhost.localdomain/40989-EventThread] zookeeper.ZKWatcher(600): master:33451-0x100829e263a0000, quorum=127.0.0.1:50438, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-12 19:17:44,628 DEBUG [Listener at localhost.localdomain/40989-EventThread] zookeeper.ZKWatcher(600): regionserver:46241-0x100829e263a0003, quorum=127.0.0.1:50438, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-12 19:17:44,628 INFO [RS:3;jenkins-hbase20:37939] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-12 19:17:44,628 DEBUG [Listener at localhost.localdomain/40989-EventThread] zookeeper.ZKWatcher(600): regionserver:37939-0x100829e263a000b, quorum=127.0.0.1:50438, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-12 19:17:44,629 INFO [RS:3;jenkins-hbase20:37939] ipc.NettyRpcServer(158): Stopping server on /148.251.75.209:37939 2023-07-12 19:17:44,630 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase20.apache.org,38393,1689189460532] 2023-07-12 19:17:44,630 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase20.apache.org,38393,1689189460532; numProcessing=1 2023-07-12 19:17:44,630 DEBUG [Listener at localhost.localdomain/40989-EventThread] zookeeper.ZKWatcher(600): master:33451-0x100829e263a0000, quorum=127.0.0.1:50438, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-12 19:17:44,630 DEBUG [Listener at localhost.localdomain/40989-EventThread] zookeeper.ZKWatcher(600): regionserver:33397-0x100829e263a0002, quorum=127.0.0.1:50438, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase20.apache.org,37939,1689189462230 2023-07-12 19:17:44,630 DEBUG [Listener at localhost.localdomain/40989-EventThread] zookeeper.ZKWatcher(600): regionserver:46241-0x100829e263a0003, quorum=127.0.0.1:50438, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase20.apache.org,37939,1689189462230 2023-07-12 19:17:44,630 DEBUG [Listener at localhost.localdomain/40989-EventThread] zookeeper.ZKWatcher(600): regionserver:37939-0x100829e263a000b, quorum=127.0.0.1:50438, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase20.apache.org,37939,1689189462230 2023-07-12 19:17:44,630 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase20.apache.org,38393,1689189460532 already deleted, retry=false 2023-07-12 19:17:44,630 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase20.apache.org,38393,1689189460532 expired; onlineServers=3 2023-07-12 19:17:44,631 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase20.apache.org,37939,1689189462230] 2023-07-12 19:17:44,631 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase20.apache.org,37939,1689189462230; numProcessing=2 2023-07-12 19:17:44,632 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase20.apache.org,37939,1689189462230 already deleted, retry=false 
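
The NodeDeleted events for /hbase/rs/<server> above are how the master notices a departing region server: every live server keeps an ephemeral znode under /hbase/rs, and RegionServerTracker treats the znode's disappearance as server expiration (the onlineServers count drops as each server expires). A minimal sketch of observing the same membership path with the plain ZooKeeper client follows; the quorum address is taken from this run's log and the session timeout is an arbitrary choice.

    import java.util.List;
    import org.apache.zookeeper.Watcher;
    import org.apache.zookeeper.ZooKeeper;

    public class RsMembershipWatchSketch {
      public static void main(String[] args) throws Exception {
        // Quorum taken from this run (127.0.0.1:50438); 30s session timeout is arbitrary.
        ZooKeeper zk = new ZooKeeper("127.0.0.1:50438", 30000, event -> {
          if (event.getType() == Watcher.Event.EventType.NodeChildrenChanged) {
            System.out.println("Membership changed under " + event.getPath());
          }
        });
        // Each live region server owns an ephemeral child of /hbase/rs; when its
        // ZooKeeper session ends (clean stop or crash) the child is deleted.
        List<String> servers = zk.getChildren("/hbase/rs", true);
        System.out.println("Registered region servers: " + servers);
        Thread.sleep(60_000); // keep the process alive long enough to receive watch events
        zk.close();
      }
    }
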
2023-07-12 19:17:44,632 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase20.apache.org,37939,1689189462230 expired; onlineServers=2 2023-07-12 19:17:44,635 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=4.01 KB at sequenceid=26 (bloomFilter=false), to=hdfs://localhost.localdomain:33609/user/jenkins/test-data/d6326029-fefc-c726-4805-b208dab29d96/data/hbase/meta/1588230740/.tmp/info/5c77ea6387a146bd9ba8862d053e8d4a 2023-07-12 19:17:44,639 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 5c77ea6387a146bd9ba8862d053e8d4a 2023-07-12 19:17:44,643 INFO [regionserver/jenkins-hbase20:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-12 19:17:44,643 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=6.53 KB at sequenceid=29 (bloomFilter=true), to=hdfs://localhost.localdomain:33609/user/jenkins/test-data/d6326029-fefc-c726-4805-b208dab29d96/data/hbase/rsgroup/cbc17255d82e0ee87a232158f33f4740/.tmp/m/bfff3b21a631408bb886d49af3ddff9f 2023-07-12 19:17:44,645 INFO [regionserver/jenkins-hbase20:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-12 19:17:44,656 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for bfff3b21a631408bb886d49af3ddff9f 2023-07-12 19:17:44,657 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:33609/user/jenkins/test-data/d6326029-fefc-c726-4805-b208dab29d96/data/hbase/rsgroup/cbc17255d82e0ee87a232158f33f4740/.tmp/m/bfff3b21a631408bb886d49af3ddff9f as hdfs://localhost.localdomain:33609/user/jenkins/test-data/d6326029-fefc-c726-4805-b208dab29d96/data/hbase/rsgroup/cbc17255d82e0ee87a232158f33f4740/m/bfff3b21a631408bb886d49af3ddff9f 2023-07-12 19:17:44,663 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for bfff3b21a631408bb886d49af3ddff9f 2023-07-12 19:17:44,663 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:33609/user/jenkins/test-data/d6326029-fefc-c726-4805-b208dab29d96/data/hbase/rsgroup/cbc17255d82e0ee87a232158f33f4740/m/bfff3b21a631408bb886d49af3ddff9f, entries=12, sequenceid=29, filesize=5.5 K 2023-07-12 19:17:44,664 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~6.53 KB/6685, heapSize ~10.80 KB/11064, currentSize=0 B/0 for cbc17255d82e0ee87a232158f33f4740 in 45ms, sequenceid=29, compaction requested=false 2023-07-12 19:17:44,664 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:rsgroup' 2023-07-12 19:17:44,672 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=82 B at sequenceid=26 (bloomFilter=false), to=hdfs://localhost.localdomain:33609/user/jenkins/test-data/d6326029-fefc-c726-4805-b208dab29d96/data/hbase/meta/1588230740/.tmp/rep_barrier/2fdfd254a9b2470e8147261b8c188328 2023-07-12 19:17:44,673 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): 
Wrote file=hdfs://localhost.localdomain:33609/user/jenkins/test-data/d6326029-fefc-c726-4805-b208dab29d96/data/hbase/rsgroup/cbc17255d82e0ee87a232158f33f4740/recovered.edits/32.seqid, newMaxSeqId=32, maxSeqId=1 2023-07-12 19:17:44,673 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-12 19:17:44,673 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1689189461710.cbc17255d82e0ee87a232158f33f4740. 2023-07-12 19:17:44,673 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for cbc17255d82e0ee87a232158f33f4740: 2023-07-12 19:17:44,674 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.CloseRegionHandler(117): Closed hbase:rsgroup,,1689189461710.cbc17255d82e0ee87a232158f33f4740. 2023-07-12 19:17:44,674 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing 937a51bce1914656b47d8675dd63a3ef, disabling compactions & flushes 2023-07-12 19:17:44,674 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1689189461733.937a51bce1914656b47d8675dd63a3ef. 2023-07-12 19:17:44,674 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1689189461733.937a51bce1914656b47d8675dd63a3ef. 2023-07-12 19:17:44,674 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1689189461733.937a51bce1914656b47d8675dd63a3ef. after waiting 0 ms 2023-07-12 19:17:44,674 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1689189461733.937a51bce1914656b47d8675dd63a3ef. 
2023-07-12 19:17:44,674 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(2745): Flushing 937a51bce1914656b47d8675dd63a3ef 1/1 column families, dataSize=267 B heapSize=904 B 2023-07-12 19:17:44,678 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 2fdfd254a9b2470e8147261b8c188328 2023-07-12 19:17:44,694 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=267 B at sequenceid=9 (bloomFilter=true), to=hdfs://localhost.localdomain:33609/user/jenkins/test-data/d6326029-fefc-c726-4805-b208dab29d96/data/hbase/namespace/937a51bce1914656b47d8675dd63a3ef/.tmp/info/17e72bee958846cead31af337d06edea 2023-07-12 19:17:44,700 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=428 B at sequenceid=26 (bloomFilter=false), to=hdfs://localhost.localdomain:33609/user/jenkins/test-data/d6326029-fefc-c726-4805-b208dab29d96/data/hbase/meta/1588230740/.tmp/table/b541e0764c7c423095b4a8697b096141 2023-07-12 19:17:44,701 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 17e72bee958846cead31af337d06edea 2023-07-12 19:17:44,702 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:33609/user/jenkins/test-data/d6326029-fefc-c726-4805-b208dab29d96/data/hbase/namespace/937a51bce1914656b47d8675dd63a3ef/.tmp/info/17e72bee958846cead31af337d06edea as hdfs://localhost.localdomain:33609/user/jenkins/test-data/d6326029-fefc-c726-4805-b208dab29d96/data/hbase/namespace/937a51bce1914656b47d8675dd63a3ef/info/17e72bee958846cead31af337d06edea 2023-07-12 19:17:44,709 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for b541e0764c7c423095b4a8697b096141 2023-07-12 19:17:44,711 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:33609/user/jenkins/test-data/d6326029-fefc-c726-4805-b208dab29d96/data/hbase/meta/1588230740/.tmp/info/5c77ea6387a146bd9ba8862d053e8d4a as hdfs://localhost.localdomain:33609/user/jenkins/test-data/d6326029-fefc-c726-4805-b208dab29d96/data/hbase/meta/1588230740/info/5c77ea6387a146bd9ba8862d053e8d4a 2023-07-12 19:17:44,713 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 17e72bee958846cead31af337d06edea 2023-07-12 19:17:44,713 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:33609/user/jenkins/test-data/d6326029-fefc-c726-4805-b208dab29d96/data/hbase/namespace/937a51bce1914656b47d8675dd63a3ef/info/17e72bee958846cead31af337d06edea, entries=3, sequenceid=9, filesize=5.0 K 2023-07-12 19:17:44,714 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~267 B/267, heapSize ~888 B/888, currentSize=0 B/0 for 937a51bce1914656b47d8675dd63a3ef in 40ms, sequenceid=9, compaction requested=false 2023-07-12 19:17:44,714 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:namespace' 2023-07-12 
19:17:44,721 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 5c77ea6387a146bd9ba8862d053e8d4a 2023-07-12 19:17:44,721 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:33609/user/jenkins/test-data/d6326029-fefc-c726-4805-b208dab29d96/data/hbase/meta/1588230740/info/5c77ea6387a146bd9ba8862d053e8d4a, entries=22, sequenceid=26, filesize=7.3 K 2023-07-12 19:17:44,722 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:33609/user/jenkins/test-data/d6326029-fefc-c726-4805-b208dab29d96/data/hbase/meta/1588230740/.tmp/rep_barrier/2fdfd254a9b2470e8147261b8c188328 as hdfs://localhost.localdomain:33609/user/jenkins/test-data/d6326029-fefc-c726-4805-b208dab29d96/data/hbase/meta/1588230740/rep_barrier/2fdfd254a9b2470e8147261b8c188328 2023-07-12 19:17:44,726 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:33609/user/jenkins/test-data/d6326029-fefc-c726-4805-b208dab29d96/data/hbase/namespace/937a51bce1914656b47d8675dd63a3ef/recovered.edits/12.seqid, newMaxSeqId=12, maxSeqId=1 2023-07-12 19:17:44,727 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1838): Closed hbase:namespace,,1689189461733.937a51bce1914656b47d8675dd63a3ef. 2023-07-12 19:17:44,727 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for 937a51bce1914656b47d8675dd63a3ef: 2023-07-12 19:17:44,727 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.CloseRegionHandler(117): Closed hbase:namespace,,1689189461733.937a51bce1914656b47d8675dd63a3ef. 
2023-07-12 19:17:44,729 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 2fdfd254a9b2470e8147261b8c188328 2023-07-12 19:17:44,729 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:33609/user/jenkins/test-data/d6326029-fefc-c726-4805-b208dab29d96/data/hbase/meta/1588230740/rep_barrier/2fdfd254a9b2470e8147261b8c188328, entries=1, sequenceid=26, filesize=4.9 K 2023-07-12 19:17:44,731 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:33609/user/jenkins/test-data/d6326029-fefc-c726-4805-b208dab29d96/data/hbase/meta/1588230740/.tmp/table/b541e0764c7c423095b4a8697b096141 as hdfs://localhost.localdomain:33609/user/jenkins/test-data/d6326029-fefc-c726-4805-b208dab29d96/data/hbase/meta/1588230740/table/b541e0764c7c423095b4a8697b096141 2023-07-12 19:17:44,740 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for b541e0764c7c423095b4a8697b096141 2023-07-12 19:17:44,740 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:33609/user/jenkins/test-data/d6326029-fefc-c726-4805-b208dab29d96/data/hbase/meta/1588230740/table/b541e0764c7c423095b4a8697b096141, entries=6, sequenceid=26, filesize=5.1 K 2023-07-12 19:17:44,741 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~4.51 KB/4621, heapSize ~8.77 KB/8984, currentSize=0 B/0 for 1588230740 in 123ms, sequenceid=26, compaction requested=false 2023-07-12 19:17:44,741 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:meta' 2023-07-12 19:17:44,756 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:33609/user/jenkins/test-data/d6326029-fefc-c726-4805-b208dab29d96/data/hbase/meta/1588230740/recovered.edits/29.seqid, newMaxSeqId=29, maxSeqId=1 2023-07-12 19:17:44,756 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-12 19:17:44,756 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-12 19:17:44,757 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-12 19:17:44,757 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] handler.CloseRegionHandler(117): Closed hbase:meta,,1.1588230740 2023-07-12 19:17:44,798 DEBUG [Listener at localhost.localdomain/40989-EventThread] zookeeper.ZKWatcher(600): regionserver:37939-0x100829e263a000b, quorum=127.0.0.1:50438, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-12 19:17:44,798 DEBUG [Listener at localhost.localdomain/40989-EventThread] zookeeper.ZKWatcher(600): regionserver:37939-0x100829e263a000b, quorum=127.0.0.1:50438, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-12 19:17:44,798 INFO [RS:3;jenkins-hbase20:37939] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase20.apache.org,37939,1689189462230; zookeeper connection closed. 
2023-07-12 19:17:44,798 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@4433078b] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@4433078b 2023-07-12 19:17:44,817 INFO [RS:1;jenkins-hbase20:33397] regionserver.HRegionServer(1170): stopping server jenkins-hbase20.apache.org,33397,1689189460611; all regions closed. 2023-07-12 19:17:44,819 INFO [RS:2;jenkins-hbase20:46241] regionserver.HRegionServer(1170): stopping server jenkins-hbase20.apache.org,46241,1689189460679; all regions closed. 2023-07-12 19:17:44,824 DEBUG [RS:1;jenkins-hbase20:33397] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/d6326029-fefc-c726-4805-b208dab29d96/oldWALs 2023-07-12 19:17:44,824 INFO [RS:1;jenkins-hbase20:33397] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase20.apache.org%2C33397%2C1689189460611:(num 1689189461570) 2023-07-12 19:17:44,824 DEBUG [RS:1;jenkins-hbase20:33397] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-12 19:17:44,824 INFO [RS:1;jenkins-hbase20:33397] regionserver.LeaseManager(133): Closed leases 2023-07-12 19:17:44,825 INFO [RS:1;jenkins-hbase20:33397] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase20:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-07-12 19:17:44,825 INFO [RS:1;jenkins-hbase20:33397] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-12 19:17:44,825 INFO [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-12 19:17:44,825 INFO [RS:1;jenkins-hbase20:33397] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-12 19:17:44,825 DEBUG [RS:2;jenkins-hbase20:46241] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/d6326029-fefc-c726-4805-b208dab29d96/oldWALs 2023-07-12 19:17:44,825 INFO [RS:1;jenkins-hbase20:33397] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 
2023-07-12 19:17:44,825 INFO [RS:2;jenkins-hbase20:46241] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase20.apache.org%2C46241%2C1689189460679.meta:.meta(num 1689189461549) 2023-07-12 19:17:44,826 INFO [RS:1;jenkins-hbase20:33397] ipc.NettyRpcServer(158): Stopping server on /148.251.75.209:33397 2023-07-12 19:17:44,828 DEBUG [Listener at localhost.localdomain/40989-EventThread] zookeeper.ZKWatcher(600): regionserver:46241-0x100829e263a0003, quorum=127.0.0.1:50438, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase20.apache.org,33397,1689189460611 2023-07-12 19:17:44,828 DEBUG [Listener at localhost.localdomain/40989-EventThread] zookeeper.ZKWatcher(600): regionserver:33397-0x100829e263a0002, quorum=127.0.0.1:50438, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase20.apache.org,33397,1689189460611 2023-07-12 19:17:44,828 DEBUG [Listener at localhost.localdomain/40989-EventThread] zookeeper.ZKWatcher(600): master:33451-0x100829e263a0000, quorum=127.0.0.1:50438, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-12 19:17:44,829 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase20.apache.org,33397,1689189460611] 2023-07-12 19:17:44,830 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase20.apache.org,33397,1689189460611; numProcessing=3 2023-07-12 19:17:44,830 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase20.apache.org,33397,1689189460611 already deleted, retry=false 2023-07-12 19:17:44,830 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase20.apache.org,33397,1689189460611 expired; onlineServers=1 2023-07-12 19:17:44,832 DEBUG [RS:2;jenkins-hbase20:46241] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/d6326029-fefc-c726-4805-b208dab29d96/oldWALs 2023-07-12 19:17:44,832 INFO [RS:2;jenkins-hbase20:46241] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase20.apache.org%2C46241%2C1689189460679:(num 1689189461553) 2023-07-12 19:17:44,832 DEBUG [RS:2;jenkins-hbase20:46241] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-12 19:17:44,832 INFO [RS:2;jenkins-hbase20:46241] regionserver.LeaseManager(133): Closed leases 2023-07-12 19:17:44,832 INFO [RS:2;jenkins-hbase20:46241] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase20:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-07-12 19:17:44,832 INFO [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 
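
Before a region server finishes stopping it closes its AsyncFSWAL and moves the completed WAL file into the shared oldWALs directory ("Moved 1 WAL file(s) to .../oldWALs"), where the master's LogCleaner chore removes it later. The sketch below lists that archive directory with the Hadoop FileSystem API; the hdfs:// path is the one from this run and would differ elsewhere.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileStatus;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class ListArchivedWalsSketch {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Path taken from this run; on a real cluster it is <hbase.rootdir>/oldWALs.
        Path oldWals = new Path(
            "hdfs://localhost.localdomain:33609/user/jenkins/test-data/d6326029-fefc-c726-4805-b208dab29d96/oldWALs");
        FileSystem fs = oldWals.getFileSystem(conf);
        for (FileStatus status : fs.listStatus(oldWals)) {
          // Archived WAL names embed the originating server name and the WAL start time.
          System.out.println(status.getPath().getName() + "\t" + status.getLen() + " bytes");
        }
        fs.close();
      }
    }
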
2023-07-12 19:17:44,833 INFO [RS:2;jenkins-hbase20:46241] ipc.NettyRpcServer(158): Stopping server on /148.251.75.209:46241 2023-07-12 19:17:44,834 DEBUG [Listener at localhost.localdomain/40989-EventThread] zookeeper.ZKWatcher(600): regionserver:46241-0x100829e263a0003, quorum=127.0.0.1:50438, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase20.apache.org,46241,1689189460679 2023-07-12 19:17:44,834 DEBUG [Listener at localhost.localdomain/40989-EventThread] zookeeper.ZKWatcher(600): master:33451-0x100829e263a0000, quorum=127.0.0.1:50438, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-12 19:17:44,834 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase20.apache.org,46241,1689189460679] 2023-07-12 19:17:44,835 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase20.apache.org,46241,1689189460679; numProcessing=4 2023-07-12 19:17:44,835 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase20.apache.org,46241,1689189460679 already deleted, retry=false 2023-07-12 19:17:44,835 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase20.apache.org,46241,1689189460679 expired; onlineServers=0 2023-07-12 19:17:44,835 INFO [RegionServerTracker-0] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase20.apache.org,33451,1689189460437' ***** 2023-07-12 19:17:44,835 INFO [RegionServerTracker-0] regionserver.HRegionServer(2311): STOPPED: Cluster shutdown set; onlineServer=0 2023-07-12 19:17:44,835 DEBUG [M:0;jenkins-hbase20:33451] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@5300760e, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase20.apache.org/148.251.75.209:0 2023-07-12 19:17:44,835 INFO [M:0;jenkins-hbase20:33451] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-12 19:17:44,838 DEBUG [Listener at localhost.localdomain/40989-EventThread] zookeeper.ZKWatcher(600): master:33451-0x100829e263a0000, quorum=127.0.0.1:50438, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/master 2023-07-12 19:17:44,838 DEBUG [Listener at localhost.localdomain/40989-EventThread] zookeeper.ZKWatcher(600): master:33451-0x100829e263a0000, quorum=127.0.0.1:50438, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-12 19:17:44,838 INFO [M:0;jenkins-hbase20:33451] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@7bb54568{master,/,null,STOPPED}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/master} 2023-07-12 19:17:44,838 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:33451-0x100829e263a0000, quorum=127.0.0.1:50438, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-12 19:17:44,838 INFO [M:0;jenkins-hbase20:33451] server.AbstractConnector(383): Stopped ServerConnector@48b9cd26{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-12 19:17:44,838 INFO [M:0;jenkins-hbase20:33451] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-12 19:17:44,839 INFO [M:0;jenkins-hbase20:33451] 
handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@2d11b748{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,STOPPED} 2023-07-12 19:17:44,840 INFO [M:0;jenkins-hbase20:33451] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@63f0285f{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/ca399a11-a6b6-9528-4aae-916e50d2e9df/hadoop.log.dir/,STOPPED} 2023-07-12 19:17:44,840 INFO [M:0;jenkins-hbase20:33451] regionserver.HRegionServer(1144): stopping server jenkins-hbase20.apache.org,33451,1689189460437 2023-07-12 19:17:44,840 INFO [M:0;jenkins-hbase20:33451] regionserver.HRegionServer(1170): stopping server jenkins-hbase20.apache.org,33451,1689189460437; all regions closed. 2023-07-12 19:17:44,840 DEBUG [M:0;jenkins-hbase20:33451] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-12 19:17:44,840 INFO [M:0;jenkins-hbase20:33451] master.HMaster(1491): Stopping master jetty server 2023-07-12 19:17:44,840 INFO [M:0;jenkins-hbase20:33451] server.AbstractConnector(383): Stopped ServerConnector@1063457c{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-12 19:17:44,841 DEBUG [M:0;jenkins-hbase20:33451] cleaner.LogCleaner(198): Cancelling LogCleaner 2023-07-12 19:17:44,841 WARN [OldWALsCleaner-0] cleaner.LogCleaner(186): Interrupted while cleaning old WALs, will try to clean it next round. Exiting. 2023-07-12 19:17:44,841 DEBUG [M:0;jenkins-hbase20:33451] cleaner.HFileCleaner(317): Stopping file delete threads 2023-07-12 19:17:44,841 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster-HFileCleaner.small.0-1689189461095] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase20:0:becomeActiveMaster-HFileCleaner.small.0-1689189461095,5,FailOnTimeoutGroup] 2023-07-12 19:17:44,841 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster-HFileCleaner.large.0-1689189461094] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase20:0:becomeActiveMaster-HFileCleaner.large.0-1689189461094,5,FailOnTimeoutGroup] 2023-07-12 19:17:44,841 INFO [M:0;jenkins-hbase20:33451] master.MasterMobCompactionThread(168): Waiting for Mob Compaction Thread to finish... 2023-07-12 19:17:44,841 INFO [M:0;jenkins-hbase20:33451] master.MasterMobCompactionThread(168): Waiting for Region Server Mob Compaction Thread to finish... 2023-07-12 19:17:44,841 INFO [M:0;jenkins-hbase20:33451] hbase.ChoreService(369): Chore service for: master/jenkins-hbase20:0 had [] on shutdown 2023-07-12 19:17:44,841 DEBUG [M:0;jenkins-hbase20:33451] master.HMaster(1512): Stopping service threads 2023-07-12 19:17:44,841 INFO [M:0;jenkins-hbase20:33451] procedure2.RemoteProcedureDispatcher(119): Stopping procedure remote dispatcher 2023-07-12 19:17:44,841 ERROR [M:0;jenkins-hbase20:33451] procedure2.ProcedureExecutor(653): ThreadGroup java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] contains running threads; null: See STDOUT java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] 2023-07-12 19:17:44,842 INFO [M:0;jenkins-hbase20:33451] region.RegionProcedureStore(113): Stopping the Region Procedure Store, isAbort=false 2023-07-12 19:17:44,842 DEBUG [normalizer-worker-0] normalizer.RegionNormalizerWorker(174): interrupt detected. terminating. 
2023-07-12 19:17:44,842 DEBUG [M:0;jenkins-hbase20:33451] zookeeper.ZKUtil(398): master:33451-0x100829e263a0000, quorum=127.0.0.1:50438, baseZNode=/hbase Unable to get data of znode /hbase/master because node does not exist (not an error) 2023-07-12 19:17:44,842 WARN [M:0;jenkins-hbase20:33451] master.ActiveMasterManager(326): Failed get of master address: java.io.IOException: Can't get master address from ZooKeeper; znode data == null 2023-07-12 19:17:44,842 INFO [M:0;jenkins-hbase20:33451] assignment.AssignmentManager(315): Stopping assignment manager 2023-07-12 19:17:44,842 INFO [M:0;jenkins-hbase20:33451] region.MasterRegion(167): Closing local region {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, isAbort=false 2023-07-12 19:17:44,842 DEBUG [M:0;jenkins-hbase20:33451] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-07-12 19:17:44,842 INFO [M:0;jenkins-hbase20:33451] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-12 19:17:44,842 DEBUG [M:0;jenkins-hbase20:33451] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-12 19:17:44,842 DEBUG [M:0;jenkins-hbase20:33451] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-07-12 19:17:44,842 DEBUG [M:0;jenkins-hbase20:33451] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-12 19:17:44,842 INFO [M:0;jenkins-hbase20:33451] regionserver.HRegion(2745): Flushing 1595e783b53d99cd5eef43b6debb2682 1/1 column families, dataSize=76.28 KB heapSize=90.73 KB 2023-07-12 19:17:44,852 INFO [M:0;jenkins-hbase20:33451] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=76.28 KB at sequenceid=175 (bloomFilter=true), to=hdfs://localhost.localdomain:33609/user/jenkins/test-data/d6326029-fefc-c726-4805-b208dab29d96/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/150726c31b134c73ab071747d8df2691 2023-07-12 19:17:44,857 DEBUG [M:0;jenkins-hbase20:33451] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:33609/user/jenkins/test-data/d6326029-fefc-c726-4805-b208dab29d96/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/150726c31b134c73ab071747d8df2691 as hdfs://localhost.localdomain:33609/user/jenkins/test-data/d6326029-fefc-c726-4805-b208dab29d96/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/150726c31b134c73ab071747d8df2691 2023-07-12 19:17:44,863 INFO [M:0;jenkins-hbase20:33451] regionserver.HStore(1080): Added hdfs://localhost.localdomain:33609/user/jenkins/test-data/d6326029-fefc-c726-4805-b208dab29d96/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/150726c31b134c73ab071747d8df2691, entries=22, sequenceid=175, filesize=11.1 K 2023-07-12 19:17:44,864 INFO [M:0;jenkins-hbase20:33451] regionserver.HRegion(2948): Finished flush of dataSize ~76.28 KB/78114, heapSize ~90.71 KB/92888, currentSize=0 B/0 for 1595e783b53d99cd5eef43b6debb2682 in 22ms, sequenceid=175, compaction requested=false 2023-07-12 19:17:44,868 INFO [M:0;jenkins-hbase20:33451] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
2023-07-12 19:17:44,868 DEBUG [M:0;jenkins-hbase20:33451] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-12 19:17:44,871 INFO [M:0;jenkins-hbase20:33451] flush.MasterFlushTableProcedureManager(83): stop: server shutting down. 2023-07-12 19:17:44,871 INFO [master:store-WAL-Roller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-12 19:17:44,872 INFO [M:0;jenkins-hbase20:33451] ipc.NettyRpcServer(158): Stopping server on /148.251.75.209:33451 2023-07-12 19:17:44,873 DEBUG [M:0;jenkins-hbase20:33451] zookeeper.RecoverableZooKeeper(172): Node /hbase/rs/jenkins-hbase20.apache.org,33451,1689189460437 already deleted, retry=false 2023-07-12 19:17:44,898 DEBUG [Listener at localhost.localdomain/40989-EventThread] zookeeper.ZKWatcher(600): regionserver:38393-0x100829e263a0001, quorum=127.0.0.1:50438, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-12 19:17:44,898 INFO [RS:0;jenkins-hbase20:38393] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase20.apache.org,38393,1689189460532; zookeeper connection closed. 2023-07-12 19:17:44,898 DEBUG [Listener at localhost.localdomain/40989-EventThread] zookeeper.ZKWatcher(600): regionserver:38393-0x100829e263a0001, quorum=127.0.0.1:50438, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-12 19:17:44,898 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@43fe24d6] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@43fe24d6 2023-07-12 19:17:45,500 DEBUG [Listener at localhost.localdomain/40989-EventThread] zookeeper.ZKWatcher(600): master:33451-0x100829e263a0000, quorum=127.0.0.1:50438, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-12 19:17:45,500 INFO [M:0;jenkins-hbase20:33451] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase20.apache.org,33451,1689189460437; zookeeper connection closed. 2023-07-12 19:17:45,500 DEBUG [Listener at localhost.localdomain/40989-EventThread] zookeeper.ZKWatcher(600): master:33451-0x100829e263a0000, quorum=127.0.0.1:50438, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-12 19:17:45,600 DEBUG [Listener at localhost.localdomain/40989-EventThread] zookeeper.ZKWatcher(600): regionserver:46241-0x100829e263a0003, quorum=127.0.0.1:50438, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-12 19:17:45,600 INFO [RS:2;jenkins-hbase20:46241] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase20.apache.org,46241,1689189460679; zookeeper connection closed. 
2023-07-12 19:17:45,600 DEBUG [Listener at localhost.localdomain/40989-EventThread] zookeeper.ZKWatcher(600): regionserver:46241-0x100829e263a0003, quorum=127.0.0.1:50438, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-12 19:17:45,600 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@55dba5ee] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@55dba5ee 2023-07-12 19:17:45,700 DEBUG [Listener at localhost.localdomain/40989-EventThread] zookeeper.ZKWatcher(600): regionserver:33397-0x100829e263a0002, quorum=127.0.0.1:50438, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-12 19:17:45,700 INFO [RS:1;jenkins-hbase20:33397] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase20.apache.org,33397,1689189460611; zookeeper connection closed. 2023-07-12 19:17:45,700 DEBUG [Listener at localhost.localdomain/40989-EventThread] zookeeper.ZKWatcher(600): regionserver:33397-0x100829e263a0002, quorum=127.0.0.1:50438, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-12 19:17:45,701 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@360ebb27] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@360ebb27 2023-07-12 19:17:45,701 INFO [Listener at localhost.localdomain/40989] util.JVMClusterUtil(335): Shutdown of 1 master(s) and 4 regionserver(s) complete 2023-07-12 19:17:45,701 WARN [Listener at localhost.localdomain/40989] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-12 19:17:45,705 INFO [Listener at localhost.localdomain/40989] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-12 19:17:45,810 WARN [BP-1420102587-148.251.75.209-1689189459628 heartbeating to localhost.localdomain/127.0.0.1:33609] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-12 19:17:45,810 WARN [BP-1420102587-148.251.75.209-1689189459628 heartbeating to localhost.localdomain/127.0.0.1:33609] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1420102587-148.251.75.209-1689189459628 (Datanode Uuid d00c467b-11db-4a39-af19-bc889094389b) service to localhost.localdomain/127.0.0.1:33609 2023-07-12 19:17:45,810 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/ca399a11-a6b6-9528-4aae-916e50d2e9df/cluster_b74c3081-5cb1-0696-14e6-b0ce033fbceb/dfs/data/data5/current/BP-1420102587-148.251.75.209-1689189459628] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-12 19:17:45,810 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/ca399a11-a6b6-9528-4aae-916e50d2e9df/cluster_b74c3081-5cb1-0696-14e6-b0ce033fbceb/dfs/data/data6/current/BP-1420102587-148.251.75.209-1689189459628] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-12 19:17:45,811 WARN [Listener at localhost.localdomain/40989] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-12 19:17:45,814 INFO [Listener at localhost.localdomain/40989] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-12 19:17:45,916 WARN 
[BP-1420102587-148.251.75.209-1689189459628 heartbeating to localhost.localdomain/127.0.0.1:33609] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-12 19:17:45,916 WARN [BP-1420102587-148.251.75.209-1689189459628 heartbeating to localhost.localdomain/127.0.0.1:33609] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1420102587-148.251.75.209-1689189459628 (Datanode Uuid 7e65f84b-b291-4657-bc00-13657b48b0d9) service to localhost.localdomain/127.0.0.1:33609 2023-07-12 19:17:45,917 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/ca399a11-a6b6-9528-4aae-916e50d2e9df/cluster_b74c3081-5cb1-0696-14e6-b0ce033fbceb/dfs/data/data3/current/BP-1420102587-148.251.75.209-1689189459628] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-12 19:17:45,917 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/ca399a11-a6b6-9528-4aae-916e50d2e9df/cluster_b74c3081-5cb1-0696-14e6-b0ce033fbceb/dfs/data/data4/current/BP-1420102587-148.251.75.209-1689189459628] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-12 19:17:45,918 WARN [Listener at localhost.localdomain/40989] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-12 19:17:45,921 INFO [Listener at localhost.localdomain/40989] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-12 19:17:46,024 WARN [BP-1420102587-148.251.75.209-1689189459628 heartbeating to localhost.localdomain/127.0.0.1:33609] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-12 19:17:46,024 WARN [BP-1420102587-148.251.75.209-1689189459628 heartbeating to localhost.localdomain/127.0.0.1:33609] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1420102587-148.251.75.209-1689189459628 (Datanode Uuid 3b9e3151-5102-4a61-8446-70808c09da13) service to localhost.localdomain/127.0.0.1:33609 2023-07-12 19:17:46,025 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/ca399a11-a6b6-9528-4aae-916e50d2e9df/cluster_b74c3081-5cb1-0696-14e6-b0ce033fbceb/dfs/data/data1/current/BP-1420102587-148.251.75.209-1689189459628] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-12 19:17:46,026 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/ca399a11-a6b6-9528-4aae-916e50d2e9df/cluster_b74c3081-5cb1-0696-14e6-b0ce033fbceb/dfs/data/data2/current/BP-1420102587-148.251.75.209-1689189459628] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-12 19:17:46,038 INFO [Listener at localhost.localdomain/40989] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost.localdomain:0 2023-07-12 19:17:46,152 INFO [Listener at localhost.localdomain/40989] zookeeper.MiniZooKeeperCluster(344): Shutdown MiniZK cluster with all ZK servers 2023-07-12 19:17:46,179 INFO [Listener at localhost.localdomain/40989] hbase.HBaseTestingUtility(1293): Minicluster is down
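
The closing lines ("Shutdown of 1 master(s) and 4 regionserver(s) complete", "Shutdown MiniZK cluster with all ZK servers", "Minicluster is down") are logged by HBaseTestingUtility.shutdownMiniCluster() once the HBase JVM cluster, the mini DFS datanodes and the mini ZooKeeper quorum have all stopped. A hedged sketch of the JUnit class lifecycle that brackets a test run with that call is shown below; the class name, the region server count and the trivial test body are placeholders rather than the actual rsgroup test code.

    import org.apache.hadoop.hbase.HBaseTestingUtility;
    import org.junit.AfterClass;
    import org.junit.BeforeClass;
    import org.junit.Test;

    public class MiniClusterLifecycleSketch {
      private static final HBaseTestingUtility TEST_UTIL = new HBaseTestingUtility();

      @BeforeClass
      public static void setUpCluster() throws Exception {
        // Region server count is an arbitrary choice for this sketch.
        TEST_UTIL.startMiniCluster(3);
      }

      @AfterClass
      public static void tearDownCluster() throws Exception {
        // Stops master and region servers, then DFS, then the mini ZooKeeper quorum,
        // finishing with the "Minicluster is down" line seen above.
        TEST_UTIL.shutdownMiniCluster();
      }

      @Test
      public void clusterResponds() throws Exception {
        // Placeholder check; the real tests drive RSGroup admin operations here.
        TEST_UTIL.getAdmin().listTableNames();
      }
    }
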