2023-07-17 11:14:58,325 DEBUG [main] hbase.HBaseTestingUtility(342): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/b3748a71-8fb9-11d9-80d5-27b001236be5 2023-07-17 11:14:58,347 INFO [main] hbase.HBaseClassTestRule(94): Test class org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1 timeout: 13 mins 2023-07-17 11:14:58,371 INFO [Time-limited test] hbase.HBaseTestingUtility(1068): Starting up minicluster with option: StartMiniClusterOption{numMasters=1, masterClass=null, numRegionServers=3, rsPorts=, rsClass=null, numDataNodes=3, dataNodeHosts=null, numZkServers=1, createRootDir=false, createWALDir=false} 2023-07-17 11:14:58,372 INFO [Time-limited test] hbase.HBaseZKTestingUtility(82): Created new mini-cluster data directory: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/b3748a71-8fb9-11d9-80d5-27b001236be5/cluster_8935f0ee-6b8c-6a1e-47f3-fe4545550a67, deleteOnExit=true 2023-07-17 11:14:58,372 INFO [Time-limited test] hbase.HBaseTestingUtility(1082): STARTING DFS 2023-07-17 11:14:58,372 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting test.cache.data to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/b3748a71-8fb9-11d9-80d5-27b001236be5/test.cache.data in system properties and HBase conf 2023-07-17 11:14:58,373 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting hadoop.tmp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/b3748a71-8fb9-11d9-80d5-27b001236be5/hadoop.tmp.dir in system properties and HBase conf 2023-07-17 11:14:58,374 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting hadoop.log.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/b3748a71-8fb9-11d9-80d5-27b001236be5/hadoop.log.dir in system properties and HBase conf 2023-07-17 11:14:58,374 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.local.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/b3748a71-8fb9-11d9-80d5-27b001236be5/mapreduce.cluster.local.dir in system properties and HBase conf 2023-07-17 11:14:58,375 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.temp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/b3748a71-8fb9-11d9-80d5-27b001236be5/mapreduce.cluster.temp.dir in system properties and HBase conf 2023-07-17 11:14:58,375 INFO [Time-limited test] hbase.HBaseTestingUtility(759): read short circuit is OFF 2023-07-17 11:14:58,551 WARN [Time-limited test] util.NativeCodeLoader(62): Unable to load native-hadoop library for your platform... using builtin-java classes where applicable 2023-07-17 11:14:59,087 DEBUG [Time-limited test] fs.HFileSystem(308): The file system is not a DistributedFileSystem. 
Skipping on block location reordering 2023-07-17 11:14:59,094 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.node-labels.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/b3748a71-8fb9-11d9-80d5-27b001236be5/yarn.node-labels.fs-store.root-dir in system properties and HBase conf 2023-07-17 11:14:59,094 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.node-attribute.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/b3748a71-8fb9-11d9-80d5-27b001236be5/yarn.node-attribute.fs-store.root-dir in system properties and HBase conf 2023-07-17 11:14:59,095 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.log-dirs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/b3748a71-8fb9-11d9-80d5-27b001236be5/yarn.nodemanager.log-dirs in system properties and HBase conf 2023-07-17 11:14:59,095 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/b3748a71-8fb9-11d9-80d5-27b001236be5/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-07-17 11:14:59,096 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.active-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/b3748a71-8fb9-11d9-80d5-27b001236be5/yarn.timeline-service.entity-group-fs-store.active-dir in system properties and HBase conf 2023-07-17 11:14:59,096 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.done-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/b3748a71-8fb9-11d9-80d5-27b001236be5/yarn.timeline-service.entity-group-fs-store.done-dir in system properties and HBase conf 2023-07-17 11:14:59,097 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/b3748a71-8fb9-11d9-80d5-27b001236be5/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-07-17 11:14:59,097 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/b3748a71-8fb9-11d9-80d5-27b001236be5/dfs.journalnode.edits.dir in system properties and HBase conf 2023-07-17 11:14:59,098 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting dfs.datanode.shared.file.descriptor.paths to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/b3748a71-8fb9-11d9-80d5-27b001236be5/dfs.datanode.shared.file.descriptor.paths in system properties and HBase conf 2023-07-17 11:14:59,098 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting nfs.dump.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/b3748a71-8fb9-11d9-80d5-27b001236be5/nfs.dump.dir in system properties and HBase conf 2023-07-17 11:14:59,099 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting java.io.tmpdir to 
/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/b3748a71-8fb9-11d9-80d5-27b001236be5/java.io.tmpdir in system properties and HBase conf 2023-07-17 11:14:59,099 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/b3748a71-8fb9-11d9-80d5-27b001236be5/dfs.journalnode.edits.dir in system properties and HBase conf 2023-07-17 11:14:59,099 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting dfs.provided.aliasmap.inmemory.leveldb.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/b3748a71-8fb9-11d9-80d5-27b001236be5/dfs.provided.aliasmap.inmemory.leveldb.dir in system properties and HBase conf 2023-07-17 11:14:59,100 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting fs.s3a.committer.staging.tmp.path to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/b3748a71-8fb9-11d9-80d5-27b001236be5/fs.s3a.committer.staging.tmp.path in system properties and HBase conf Formatting using clusterid: testClusterID 2023-07-17 11:14:59,649 WARN [Time-limited test] conf.Configuration(1701): No unit for dfs.heartbeat.interval(3) assuming SECONDS 2023-07-17 11:14:59,654 WARN [Time-limited test] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-07-17 11:15:00,051 WARN [Time-limited test] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-namenode.properties,hadoop-metrics2.properties 2023-07-17 11:15:00,245 INFO [Time-limited test] log.Slf4jLog(67): Logging to org.slf4j.impl.Reload4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog 2023-07-17 11:15:00,259 WARN [Time-limited test] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-17 11:15:00,296 INFO [Time-limited test] log.Slf4jLog(67): jetty-6.1.26 2023-07-17 11:15:00,328 INFO [Time-limited test] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/hdfs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/b3748a71-8fb9-11d9-80d5-27b001236be5/java.io.tmpdir/Jetty_localhost_35367_hdfs____qb03ek/webapp 2023-07-17 11:15:00,484 INFO [Time-limited test] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:35367 2023-07-17 11:15:00,531 WARN [Time-limited test] conf.Configuration(1701): No unit for dfs.heartbeat.interval(3) assuming SECONDS 2023-07-17 11:15:00,531 WARN [Time-limited test] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-07-17 11:15:01,025 WARN [Listener at localhost/41739] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-17 11:15:01,100 WARN [Listener at localhost/41739] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-17 11:15:01,119 WARN [Listener at localhost/41739] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-17 11:15:01,125 INFO [Listener at localhost/41739] log.Slf4jLog(67): jetty-6.1.26 2023-07-17 11:15:01,131 INFO [Listener at localhost/41739] log.Slf4jLog(67): Extract 
jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/b3748a71-8fb9-11d9-80d5-27b001236be5/java.io.tmpdir/Jetty_localhost_35339_datanode____pm528u/webapp 2023-07-17 11:15:01,230 INFO [Listener at localhost/41739] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:35339 2023-07-17 11:15:01,648 WARN [Listener at localhost/46541] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-17 11:15:01,707 WARN [Listener at localhost/46541] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-17 11:15:01,711 WARN [Listener at localhost/46541] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-17 11:15:01,712 INFO [Listener at localhost/46541] log.Slf4jLog(67): jetty-6.1.26 2023-07-17 11:15:01,718 INFO [Listener at localhost/46541] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/b3748a71-8fb9-11d9-80d5-27b001236be5/java.io.tmpdir/Jetty_localhost_39897_datanode____50datz/webapp 2023-07-17 11:15:01,827 INFO [Listener at localhost/46541] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:39897 2023-07-17 11:15:01,839 WARN [Listener at localhost/44951] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-17 11:15:01,871 WARN [Listener at localhost/44951] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-17 11:15:01,874 WARN [Listener at localhost/44951] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-17 11:15:01,876 INFO [Listener at localhost/44951] log.Slf4jLog(67): jetty-6.1.26 2023-07-17 11:15:01,883 INFO [Listener at localhost/44951] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/b3748a71-8fb9-11d9-80d5-27b001236be5/java.io.tmpdir/Jetty_localhost_42397_datanode____wv4rzu/webapp 2023-07-17 11:15:02,023 INFO [Listener at localhost/44951] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:42397 2023-07-17 11:15:02,042 WARN [Listener at localhost/45539] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-17 11:15:02,297 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x37c194d7f3bc8429: Processing first storage report for DS-48b75b58-ef13-4942-ab72-3dac38afc190 from datanode d231aabc-76cd-4220-8780-d6431b350fef 2023-07-17 11:15:02,299 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x37c194d7f3bc8429: from storage DS-48b75b58-ef13-4942-ab72-3dac38afc190 node DatanodeRegistration(127.0.0.1:44359, datanodeUuid=d231aabc-76cd-4220-8780-d6431b350fef, infoPort=39473, 
infoSecurePort=0, ipcPort=46541, storageInfo=lv=-57;cid=testClusterID;nsid=992917030;c=1689592499733), blocks: 0, hasStaleStorage: true, processing time: 2 msecs, invalidatedBlocks: 0 2023-07-17 11:15:02,299 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xa3d607c2d50f8898: Processing first storage report for DS-8dff280f-8f2a-4cd4-a5d8-f08bc923e8c9 from datanode ae5d8d85-83ef-4ff2-aba9-e83817d5c969 2023-07-17 11:15:02,300 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xa3d607c2d50f8898: from storage DS-8dff280f-8f2a-4cd4-a5d8-f08bc923e8c9 node DatanodeRegistration(127.0.0.1:33373, datanodeUuid=ae5d8d85-83ef-4ff2-aba9-e83817d5c969, infoPort=33585, infoSecurePort=0, ipcPort=44951, storageInfo=lv=-57;cid=testClusterID;nsid=992917030;c=1689592499733), blocks: 0, hasStaleStorage: true, processing time: 1 msecs, invalidatedBlocks: 0 2023-07-17 11:15:02,300 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xe287baffa97fac16: Processing first storage report for DS-80f43b96-b809-42a0-a60d-6774be1cae92 from datanode a55b6569-fe9c-446c-b28f-3252356494e1 2023-07-17 11:15:02,300 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xe287baffa97fac16: from storage DS-80f43b96-b809-42a0-a60d-6774be1cae92 node DatanodeRegistration(127.0.0.1:34065, datanodeUuid=a55b6569-fe9c-446c-b28f-3252356494e1, infoPort=39173, infoSecurePort=0, ipcPort=45539, storageInfo=lv=-57;cid=testClusterID;nsid=992917030;c=1689592499733), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-17 11:15:02,300 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x37c194d7f3bc8429: Processing first storage report for DS-8e7962d3-0086-41d0-b9ea-e7af60b17210 from datanode d231aabc-76cd-4220-8780-d6431b350fef 2023-07-17 11:15:02,300 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x37c194d7f3bc8429: from storage DS-8e7962d3-0086-41d0-b9ea-e7af60b17210 node DatanodeRegistration(127.0.0.1:44359, datanodeUuid=d231aabc-76cd-4220-8780-d6431b350fef, infoPort=39473, infoSecurePort=0, ipcPort=46541, storageInfo=lv=-57;cid=testClusterID;nsid=992917030;c=1689592499733), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-17 11:15:02,300 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xa3d607c2d50f8898: Processing first storage report for DS-0254ff52-2bbb-4a0f-bcb9-e41f81c5bf54 from datanode ae5d8d85-83ef-4ff2-aba9-e83817d5c969 2023-07-17 11:15:02,300 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xa3d607c2d50f8898: from storage DS-0254ff52-2bbb-4a0f-bcb9-e41f81c5bf54 node DatanodeRegistration(127.0.0.1:33373, datanodeUuid=ae5d8d85-83ef-4ff2-aba9-e83817d5c969, infoPort=33585, infoSecurePort=0, ipcPort=44951, storageInfo=lv=-57;cid=testClusterID;nsid=992917030;c=1689592499733), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-17 11:15:02,301 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xe287baffa97fac16: Processing first storage report for DS-dff1d6b2-e7c3-499e-9791-e63393902bc7 from datanode a55b6569-fe9c-446c-b28f-3252356494e1 2023-07-17 11:15:02,301 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xe287baffa97fac16: from storage 
DS-dff1d6b2-e7c3-499e-9791-e63393902bc7 node DatanodeRegistration(127.0.0.1:34065, datanodeUuid=a55b6569-fe9c-446c-b28f-3252356494e1, infoPort=39173, infoSecurePort=0, ipcPort=45539, storageInfo=lv=-57;cid=testClusterID;nsid=992917030;c=1689592499733), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-17 11:15:02,544 DEBUG [Listener at localhost/45539] hbase.HBaseTestingUtility(649): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/b3748a71-8fb9-11d9-80d5-27b001236be5 2023-07-17 11:15:02,641 INFO [Listener at localhost/45539] zookeeper.MiniZooKeeperCluster(258): Started connectionTimeout=30000, dir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/b3748a71-8fb9-11d9-80d5-27b001236be5/cluster_8935f0ee-6b8c-6a1e-47f3-fe4545550a67/zookeeper_0, clientPort=49750, secureClientPort=-1, dataDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/b3748a71-8fb9-11d9-80d5-27b001236be5/cluster_8935f0ee-6b8c-6a1e-47f3-fe4545550a67/zookeeper_0/version-2, dataDirSize=424 dataLogDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/b3748a71-8fb9-11d9-80d5-27b001236be5/cluster_8935f0ee-6b8c-6a1e-47f3-fe4545550a67/zookeeper_0/version-2, dataLogSize=424 tickTime=2000, maxClientCnxns=300, minSessionTimeout=4000, maxSessionTimeout=40000, serverId=0 2023-07-17 11:15:02,671 INFO [Listener at localhost/45539] zookeeper.MiniZooKeeperCluster(283): Started MiniZooKeeperCluster and ran 'stat' on client port=49750 2023-07-17 11:15:02,680 INFO [Listener at localhost/45539] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-17 11:15:02,681 INFO [Listener at localhost/45539] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-17 11:15:03,391 INFO [Listener at localhost/45539] util.FSUtils(471): Created version file at hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e with version=8 2023-07-17 11:15:03,391 INFO [Listener at localhost/45539] hbase.HBaseTestingUtility(1406): Setting hbase.fs.tmp.dir to hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/hbase-staging 2023-07-17 11:15:03,400 DEBUG [Listener at localhost/45539] hbase.LocalHBaseCluster(134): Setting Master Port to random. 2023-07-17 11:15:03,400 DEBUG [Listener at localhost/45539] hbase.LocalHBaseCluster(141): Setting RegionServer Port to random. 2023-07-17 11:15:03,400 DEBUG [Listener at localhost/45539] hbase.LocalHBaseCluster(151): Setting RS InfoServer Port to random. 2023-07-17 11:15:03,400 DEBUG [Listener at localhost/45539] hbase.LocalHBaseCluster(159): Setting Master InfoServer Port to random. 
2023-07-17 11:15:03,785 INFO [Listener at localhost/45539] metrics.MetricRegistriesLoader(60): Loaded MetricRegistries class org.apache.hadoop.hbase.metrics.impl.MetricRegistriesImpl 2023-07-17 11:15:04,392 INFO [Listener at localhost/45539] client.ConnectionUtils(127): master/jenkins-hbase4:0 server-side Connection retries=45 2023-07-17 11:15:04,444 INFO [Listener at localhost/45539] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-17 11:15:04,445 INFO [Listener at localhost/45539] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-17 11:15:04,445 INFO [Listener at localhost/45539] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-17 11:15:04,445 INFO [Listener at localhost/45539] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-17 11:15:04,446 INFO [Listener at localhost/45539] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-17 11:15:04,611 INFO [Listener at localhost/45539] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.MasterService, hbase.pb.RegionServerStatusService, hbase.pb.LockService, hbase.pb.HbckService, hbase.pb.ClientMetaService, hbase.pb.ClientService, hbase.pb.AdminService 2023-07-17 11:15:04,715 DEBUG [Listener at localhost/45539] util.ClassSize(228): Using Unsafe to estimate memory layout 2023-07-17 11:15:04,811 INFO [Listener at localhost/45539] ipc.NettyRpcServer(120): Bind to /172.31.14.131:38451 2023-07-17 11:15:04,825 INFO [Listener at localhost/45539] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-17 11:15:04,827 INFO [Listener at localhost/45539] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-17 11:15:04,857 INFO [Listener at localhost/45539] zookeeper.RecoverableZooKeeper(93): Process identifier=master:38451 connecting to ZooKeeper ensemble=127.0.0.1:49750 2023-07-17 11:15:04,911 DEBUG [Listener at localhost/45539-EventThread] zookeeper.ZKWatcher(600): master:384510x0, quorum=127.0.0.1:49750, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-17 11:15:04,920 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): master:38451-0x10172fe1c5e0000 connected 2023-07-17 11:15:04,956 DEBUG [Listener at localhost/45539] zookeeper.ZKUtil(164): master:38451-0x10172fe1c5e0000, quorum=127.0.0.1:49750, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-17 11:15:04,957 DEBUG [Listener at localhost/45539] zookeeper.ZKUtil(164): master:38451-0x10172fe1c5e0000, quorum=127.0.0.1:49750, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-17 11:15:04,960 DEBUG [Listener at localhost/45539] zookeeper.ZKUtil(164): master:38451-0x10172fe1c5e0000, quorum=127.0.0.1:49750, 
baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-17 11:15:04,973 DEBUG [Listener at localhost/45539] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=38451 2023-07-17 11:15:04,973 DEBUG [Listener at localhost/45539] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=38451 2023-07-17 11:15:04,974 DEBUG [Listener at localhost/45539] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=38451 2023-07-17 11:15:04,974 DEBUG [Listener at localhost/45539] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=38451 2023-07-17 11:15:04,974 DEBUG [Listener at localhost/45539] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=38451 2023-07-17 11:15:05,012 INFO [Listener at localhost/45539] log.Log(170): Logging initialized @7410ms to org.apache.hbase.thirdparty.org.eclipse.jetty.util.log.Slf4jLog 2023-07-17 11:15:05,171 INFO [Listener at localhost/45539] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-17 11:15:05,172 INFO [Listener at localhost/45539] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-17 11:15:05,173 INFO [Listener at localhost/45539] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-17 11:15:05,176 INFO [Listener at localhost/45539] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context master 2023-07-17 11:15:05,176 INFO [Listener at localhost/45539] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-17 11:15:05,176 INFO [Listener at localhost/45539] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-17 11:15:05,182 INFO [Listener at localhost/45539] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
2023-07-17 11:15:05,262 INFO [Listener at localhost/45539] http.HttpServer(1146): Jetty bound to port 34497 2023-07-17 11:15:05,265 INFO [Listener at localhost/45539] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-17 11:15:05,297 INFO [Listener at localhost/45539] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-17 11:15:05,301 INFO [Listener at localhost/45539] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@7a39ade6{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/b3748a71-8fb9-11d9-80d5-27b001236be5/hadoop.log.dir/,AVAILABLE} 2023-07-17 11:15:05,301 INFO [Listener at localhost/45539] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-17 11:15:05,302 INFO [Listener at localhost/45539] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@5320c268{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,AVAILABLE} 2023-07-17 11:15:05,368 INFO [Listener at localhost/45539] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-17 11:15:05,381 INFO [Listener at localhost/45539] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-17 11:15:05,381 INFO [Listener at localhost/45539] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-17 11:15:05,383 INFO [Listener at localhost/45539] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-17 11:15:05,391 INFO [Listener at localhost/45539] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-17 11:15:05,419 INFO [Listener at localhost/45539] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@64480317{master,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/master/,AVAILABLE}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/master} 2023-07-17 11:15:05,431 INFO [Listener at localhost/45539] server.AbstractConnector(333): Started ServerConnector@71df00d8{HTTP/1.1, (http/1.1)}{0.0.0.0:34497} 2023-07-17 11:15:05,432 INFO [Listener at localhost/45539] server.Server(415): Started @7830ms 2023-07-17 11:15:05,436 INFO [Listener at localhost/45539] master.HMaster(444): hbase.rootdir=hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e, hbase.cluster.distributed=false 2023-07-17 11:15:05,528 INFO [Listener at localhost/45539] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-17 11:15:05,528 INFO [Listener at localhost/45539] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-17 11:15:05,529 INFO [Listener at localhost/45539] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-17 11:15:05,529 INFO [Listener at localhost/45539] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-17 
11:15:05,529 INFO [Listener at localhost/45539] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-17 11:15:05,529 INFO [Listener at localhost/45539] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-17 11:15:05,537 INFO [Listener at localhost/45539] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-17 11:15:05,540 INFO [Listener at localhost/45539] ipc.NettyRpcServer(120): Bind to /172.31.14.131:37409 2023-07-17 11:15:05,542 INFO [Listener at localhost/45539] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-17 11:15:05,550 DEBUG [Listener at localhost/45539] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-17 11:15:05,551 INFO [Listener at localhost/45539] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-17 11:15:05,552 INFO [Listener at localhost/45539] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-17 11:15:05,554 INFO [Listener at localhost/45539] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:37409 connecting to ZooKeeper ensemble=127.0.0.1:49750 2023-07-17 11:15:05,558 DEBUG [Listener at localhost/45539-EventThread] zookeeper.ZKWatcher(600): regionserver:374090x0, quorum=127.0.0.1:49750, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-17 11:15:05,559 DEBUG [Listener at localhost/45539] zookeeper.ZKUtil(164): regionserver:374090x0, quorum=127.0.0.1:49750, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-17 11:15:05,564 DEBUG [Listener at localhost/45539] zookeeper.ZKUtil(164): regionserver:374090x0, quorum=127.0.0.1:49750, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-17 11:15:05,564 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:37409-0x10172fe1c5e0001 connected 2023-07-17 11:15:05,565 DEBUG [Listener at localhost/45539] zookeeper.ZKUtil(164): regionserver:37409-0x10172fe1c5e0001, quorum=127.0.0.1:49750, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-17 11:15:05,566 DEBUG [Listener at localhost/45539] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=37409 2023-07-17 11:15:05,569 DEBUG [Listener at localhost/45539] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=37409 2023-07-17 11:15:05,569 DEBUG [Listener at localhost/45539] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=37409 2023-07-17 11:15:05,570 DEBUG [Listener at localhost/45539] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=37409 2023-07-17 11:15:05,574 DEBUG [Listener at localhost/45539] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, 
numCallQueues=1, port=37409 2023-07-17 11:15:05,578 INFO [Listener at localhost/45539] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-17 11:15:05,578 INFO [Listener at localhost/45539] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-17 11:15:05,579 INFO [Listener at localhost/45539] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-17 11:15:05,580 INFO [Listener at localhost/45539] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-17 11:15:05,580 INFO [Listener at localhost/45539] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-17 11:15:05,581 INFO [Listener at localhost/45539] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-17 11:15:05,581 INFO [Listener at localhost/45539] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-17 11:15:05,583 INFO [Listener at localhost/45539] http.HttpServer(1146): Jetty bound to port 42313 2023-07-17 11:15:05,584 INFO [Listener at localhost/45539] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-17 11:15:05,588 INFO [Listener at localhost/45539] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-17 11:15:05,588 INFO [Listener at localhost/45539] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@6afee7fb{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/b3748a71-8fb9-11d9-80d5-27b001236be5/hadoop.log.dir/,AVAILABLE} 2023-07-17 11:15:05,589 INFO [Listener at localhost/45539] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-17 11:15:05,589 INFO [Listener at localhost/45539] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@6ce589e{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,AVAILABLE} 2023-07-17 11:15:05,600 INFO [Listener at localhost/45539] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-17 11:15:05,601 INFO [Listener at localhost/45539] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-17 11:15:05,601 INFO [Listener at localhost/45539] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-17 11:15:05,602 INFO [Listener at localhost/45539] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-17 11:15:05,602 INFO [Listener at localhost/45539] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-17 11:15:05,606 INFO [Listener at localhost/45539] handler.ContextHandler(921): Started 
o.a.h.t.o.e.j.w.WebAppContext@2e98cdce{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver/,AVAILABLE}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver} 2023-07-17 11:15:05,607 INFO [Listener at localhost/45539] server.AbstractConnector(333): Started ServerConnector@6d447d66{HTTP/1.1, (http/1.1)}{0.0.0.0:42313} 2023-07-17 11:15:05,607 INFO [Listener at localhost/45539] server.Server(415): Started @8006ms 2023-07-17 11:15:05,620 INFO [Listener at localhost/45539] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-17 11:15:05,620 INFO [Listener at localhost/45539] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-17 11:15:05,620 INFO [Listener at localhost/45539] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-17 11:15:05,621 INFO [Listener at localhost/45539] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-17 11:15:05,621 INFO [Listener at localhost/45539] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-17 11:15:05,621 INFO [Listener at localhost/45539] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-17 11:15:05,621 INFO [Listener at localhost/45539] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-17 11:15:05,623 INFO [Listener at localhost/45539] ipc.NettyRpcServer(120): Bind to /172.31.14.131:40489 2023-07-17 11:15:05,624 INFO [Listener at localhost/45539] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-17 11:15:05,625 DEBUG [Listener at localhost/45539] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-17 11:15:05,626 INFO [Listener at localhost/45539] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-17 11:15:05,627 INFO [Listener at localhost/45539] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-17 11:15:05,628 INFO [Listener at localhost/45539] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:40489 connecting to ZooKeeper ensemble=127.0.0.1:49750 2023-07-17 11:15:05,631 DEBUG [Listener at localhost/45539-EventThread] zookeeper.ZKWatcher(600): regionserver:404890x0, quorum=127.0.0.1:49750, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-17 11:15:05,633 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:40489-0x10172fe1c5e0002 connected 2023-07-17 11:15:05,633 DEBUG [Listener at localhost/45539] zookeeper.ZKUtil(164): 
regionserver:40489-0x10172fe1c5e0002, quorum=127.0.0.1:49750, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-17 11:15:05,633 DEBUG [Listener at localhost/45539] zookeeper.ZKUtil(164): regionserver:40489-0x10172fe1c5e0002, quorum=127.0.0.1:49750, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-17 11:15:05,634 DEBUG [Listener at localhost/45539] zookeeper.ZKUtil(164): regionserver:40489-0x10172fe1c5e0002, quorum=127.0.0.1:49750, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-17 11:15:05,635 DEBUG [Listener at localhost/45539] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=40489 2023-07-17 11:15:05,635 DEBUG [Listener at localhost/45539] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=40489 2023-07-17 11:15:05,635 DEBUG [Listener at localhost/45539] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=40489 2023-07-17 11:15:05,638 DEBUG [Listener at localhost/45539] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=40489 2023-07-17 11:15:05,638 DEBUG [Listener at localhost/45539] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=40489 2023-07-17 11:15:05,641 INFO [Listener at localhost/45539] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-17 11:15:05,641 INFO [Listener at localhost/45539] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-17 11:15:05,641 INFO [Listener at localhost/45539] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-17 11:15:05,641 INFO [Listener at localhost/45539] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-17 11:15:05,642 INFO [Listener at localhost/45539] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-17 11:15:05,642 INFO [Listener at localhost/45539] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-17 11:15:05,642 INFO [Listener at localhost/45539] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
2023-07-17 11:15:05,643 INFO [Listener at localhost/45539] http.HttpServer(1146): Jetty bound to port 43399 2023-07-17 11:15:05,643 INFO [Listener at localhost/45539] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-17 11:15:05,648 INFO [Listener at localhost/45539] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-17 11:15:05,648 INFO [Listener at localhost/45539] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@5ee296f1{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/b3748a71-8fb9-11d9-80d5-27b001236be5/hadoop.log.dir/,AVAILABLE} 2023-07-17 11:15:05,648 INFO [Listener at localhost/45539] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-17 11:15:05,649 INFO [Listener at localhost/45539] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@35b16dd4{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,AVAILABLE} 2023-07-17 11:15:05,656 INFO [Listener at localhost/45539] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-17 11:15:05,657 INFO [Listener at localhost/45539] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-17 11:15:05,657 INFO [Listener at localhost/45539] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-17 11:15:05,658 INFO [Listener at localhost/45539] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-17 11:15:05,659 INFO [Listener at localhost/45539] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-17 11:15:05,659 INFO [Listener at localhost/45539] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@66df3ef2{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver/,AVAILABLE}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver} 2023-07-17 11:15:05,660 INFO [Listener at localhost/45539] server.AbstractConnector(333): Started ServerConnector@44d8fb02{HTTP/1.1, (http/1.1)}{0.0.0.0:43399} 2023-07-17 11:15:05,660 INFO [Listener at localhost/45539] server.Server(415): Started @8059ms 2023-07-17 11:15:05,673 INFO [Listener at localhost/45539] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-17 11:15:05,674 INFO [Listener at localhost/45539] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-17 11:15:05,674 INFO [Listener at localhost/45539] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-17 11:15:05,674 INFO [Listener at localhost/45539] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-17 11:15:05,674 INFO [Listener at localhost/45539] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, 
maxQueueLength=30, handlerCount=3 2023-07-17 11:15:05,675 INFO [Listener at localhost/45539] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-17 11:15:05,675 INFO [Listener at localhost/45539] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-17 11:15:05,679 INFO [Listener at localhost/45539] ipc.NettyRpcServer(120): Bind to /172.31.14.131:39617 2023-07-17 11:15:05,679 INFO [Listener at localhost/45539] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-17 11:15:05,690 DEBUG [Listener at localhost/45539] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-17 11:15:05,692 INFO [Listener at localhost/45539] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-17 11:15:05,693 INFO [Listener at localhost/45539] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-17 11:15:05,695 INFO [Listener at localhost/45539] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:39617 connecting to ZooKeeper ensemble=127.0.0.1:49750 2023-07-17 11:15:05,706 DEBUG [Listener at localhost/45539-EventThread] zookeeper.ZKWatcher(600): regionserver:396170x0, quorum=127.0.0.1:49750, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-17 11:15:05,711 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:39617-0x10172fe1c5e0003 connected 2023-07-17 11:15:05,711 DEBUG [Listener at localhost/45539] zookeeper.ZKUtil(164): regionserver:39617-0x10172fe1c5e0003, quorum=127.0.0.1:49750, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-17 11:15:05,712 DEBUG [Listener at localhost/45539] zookeeper.ZKUtil(164): regionserver:39617-0x10172fe1c5e0003, quorum=127.0.0.1:49750, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-17 11:15:05,713 DEBUG [Listener at localhost/45539] zookeeper.ZKUtil(164): regionserver:39617-0x10172fe1c5e0003, quorum=127.0.0.1:49750, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-17 11:15:05,713 DEBUG [Listener at localhost/45539] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=39617 2023-07-17 11:15:05,714 DEBUG [Listener at localhost/45539] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=39617 2023-07-17 11:15:05,714 DEBUG [Listener at localhost/45539] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=39617 2023-07-17 11:15:05,714 DEBUG [Listener at localhost/45539] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=39617 2023-07-17 11:15:05,718 DEBUG [Listener at localhost/45539] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=39617 2023-07-17 11:15:05,721 INFO [Listener at localhost/45539] http.HttpServer(900): Added global filter 'safety' 
(class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-17 11:15:05,721 INFO [Listener at localhost/45539] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-17 11:15:05,721 INFO [Listener at localhost/45539] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-17 11:15:05,722 INFO [Listener at localhost/45539] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-17 11:15:05,722 INFO [Listener at localhost/45539] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-17 11:15:05,722 INFO [Listener at localhost/45539] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-17 11:15:05,723 INFO [Listener at localhost/45539] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-17 11:15:05,724 INFO [Listener at localhost/45539] http.HttpServer(1146): Jetty bound to port 44437 2023-07-17 11:15:05,724 INFO [Listener at localhost/45539] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-17 11:15:05,731 INFO [Listener at localhost/45539] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-17 11:15:05,731 INFO [Listener at localhost/45539] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@7605c194{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/b3748a71-8fb9-11d9-80d5-27b001236be5/hadoop.log.dir/,AVAILABLE} 2023-07-17 11:15:05,732 INFO [Listener at localhost/45539] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-17 11:15:05,732 INFO [Listener at localhost/45539] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@7c1f78c1{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,AVAILABLE} 2023-07-17 11:15:05,741 INFO [Listener at localhost/45539] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-17 11:15:05,742 INFO [Listener at localhost/45539] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-17 11:15:05,742 INFO [Listener at localhost/45539] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-17 11:15:05,743 INFO [Listener at localhost/45539] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-17 11:15:05,744 INFO [Listener at localhost/45539] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-17 11:15:05,744 INFO [Listener at localhost/45539] handler.ContextHandler(921): Started 
o.a.h.t.o.e.j.w.WebAppContext@470fdab8{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver/,AVAILABLE}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver} 2023-07-17 11:15:05,746 INFO [Listener at localhost/45539] server.AbstractConnector(333): Started ServerConnector@78d2fac4{HTTP/1.1, (http/1.1)}{0.0.0.0:44437} 2023-07-17 11:15:05,746 INFO [Listener at localhost/45539] server.Server(415): Started @8145ms 2023-07-17 11:15:05,752 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-17 11:15:05,755 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.AbstractConnector(333): Started ServerConnector@2490749c{HTTP/1.1, (http/1.1)}{0.0.0.0:39999} 2023-07-17 11:15:05,755 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.Server(415): Started @8154ms 2023-07-17 11:15:05,755 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2168): Adding backup master ZNode /hbase/backup-masters/jenkins-hbase4.apache.org,38451,1689592503576 2023-07-17 11:15:05,766 DEBUG [Listener at localhost/45539-EventThread] zookeeper.ZKWatcher(600): master:38451-0x10172fe1c5e0000, quorum=127.0.0.1:49750, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-07-17 11:15:05,767 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:38451-0x10172fe1c5e0000, quorum=127.0.0.1:49750, baseZNode=/hbase Set watcher on existing znode=/hbase/backup-masters/jenkins-hbase4.apache.org,38451,1689592503576 2023-07-17 11:15:05,789 DEBUG [Listener at localhost/45539-EventThread] zookeeper.ZKWatcher(600): regionserver:37409-0x10172fe1c5e0001, quorum=127.0.0.1:49750, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-17 11:15:05,789 DEBUG [Listener at localhost/45539-EventThread] zookeeper.ZKWatcher(600): master:38451-0x10172fe1c5e0000, quorum=127.0.0.1:49750, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-17 11:15:05,789 DEBUG [Listener at localhost/45539-EventThread] zookeeper.ZKWatcher(600): regionserver:40489-0x10172fe1c5e0002, quorum=127.0.0.1:49750, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-17 11:15:05,789 DEBUG [Listener at localhost/45539-EventThread] zookeeper.ZKWatcher(600): regionserver:39617-0x10172fe1c5e0003, quorum=127.0.0.1:49750, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-17 11:15:05,789 DEBUG [Listener at localhost/45539-EventThread] zookeeper.ZKWatcher(600): master:38451-0x10172fe1c5e0000, quorum=127.0.0.1:49750, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-17 11:15:05,791 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:38451-0x10172fe1c5e0000, quorum=127.0.0.1:49750, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-17 11:15:05,794 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ActiveMasterManager(227): Deleting ZNode for /hbase/backup-masters/jenkins-hbase4.apache.org,38451,1689592503576 from backup master directory 2023-07-17 
11:15:05,794 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): master:38451-0x10172fe1c5e0000, quorum=127.0.0.1:49750, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-17 11:15:05,797 DEBUG [Listener at localhost/45539-EventThread] zookeeper.ZKWatcher(600): master:38451-0x10172fe1c5e0000, quorum=127.0.0.1:49750, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/backup-masters/jenkins-hbase4.apache.org,38451,1689592503576 2023-07-17 11:15:05,798 DEBUG [Listener at localhost/45539-EventThread] zookeeper.ZKWatcher(600): master:38451-0x10172fe1c5e0000, quorum=127.0.0.1:49750, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-07-17 11:15:05,799 WARN [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-17 11:15:05,799 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ActiveMasterManager(237): Registered as active master=jenkins-hbase4.apache.org,38451,1689592503576 2023-07-17 11:15:05,802 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.ChunkCreator(497): Allocating data MemStoreChunkPool with chunk size 2 MB, max count 352, initial count 0 2023-07-17 11:15:05,804 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.ChunkCreator(497): Allocating index MemStoreChunkPool with chunk size 204.80 KB, max count 391, initial count 0 2023-07-17 11:15:05,913 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] util.FSUtils(620): Created cluster ID file at hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/hbase.id with ID: 519eb990-c8d2-47f8-a629-2d5badc57f62 2023-07-17 11:15:05,960 INFO [master/jenkins-hbase4:0:becomeActiveMaster] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-17 11:15:05,977 DEBUG [Listener at localhost/45539-EventThread] zookeeper.ZKWatcher(600): master:38451-0x10172fe1c5e0000, quorum=127.0.0.1:49750, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-17 11:15:06,085 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ReadOnlyZKClient(139): Connect 0x3db4d6ad to 127.0.0.1:49750 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-17 11:15:06,130 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@7bc2d34b, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-17 11:15:06,155 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(309): Create or load local region for table 'master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-17 11:15:06,156 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(132): Injected flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000 2023-07-17 11:15:06,177 DEBUG 
[master/jenkins-hbase4:0:becomeActiveMaster] asyncfs.FanOutOneBlockAsyncDFSOutputHelper(264): ClientProtocol::create wrong number of arguments, should be hadoop 3.2 or below 2023-07-17 11:15:06,177 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] asyncfs.FanOutOneBlockAsyncDFSOutputHelper(270): ClientProtocol::create wrong number of arguments, should be hadoop 2.x 2023-07-17 11:15:06,179 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] asyncfs.FanOutOneBlockAsyncDFSOutputHelper(279): can not find SHOULD_REPLICATE flag, should be hadoop 2.x java.lang.IllegalArgumentException: No enum constant org.apache.hadoop.fs.CreateFlag.SHOULD_REPLICATE at java.lang.Enum.valueOf(Enum.java:238) at org.apache.hadoop.fs.CreateFlag.valueOf(CreateFlag.java:63) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.loadShouldReplicateFlag(FanOutOneBlockAsyncDFSOutputHelper.java:277) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.<clinit>(FanOutOneBlockAsyncDFSOutputHelper.java:304) at java.lang.Class.forName0(Native Method) at java.lang.Class.forName(Class.java:264) at org.apache.hadoop.hbase.wal.AsyncFSWALProvider.load(AsyncFSWALProvider.java:139) at org.apache.hadoop.hbase.wal.WALFactory.getProviderClass(WALFactory.java:135) at org.apache.hadoop.hbase.wal.WALFactory.getProvider(WALFactory.java:175) at org.apache.hadoop.hbase.wal.WALFactory.<init>(WALFactory.java:202) at org.apache.hadoop.hbase.wal.WALFactory.<init>(WALFactory.java:182) at org.apache.hadoop.hbase.master.region.MasterRegion.create(MasterRegion.java:339) at org.apache.hadoop.hbase.master.region.MasterRegionFactory.create(MasterRegionFactory.java:104) at org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:855) at org.apache.hadoop.hbase.master.HMaster.startActiveMasterManager(HMaster.java:2193) at org.apache.hadoop.hbase.master.HMaster.lambda$run$0(HMaster.java:528) at java.lang.Thread.run(Thread.java:750) 2023-07-17 11:15:06,184 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(243): No decryptEncryptedDataEncryptionKey method in DFSClient, should be hadoop version with HDFS-12396 java.lang.NoSuchMethodException: org.apache.hadoop.hdfs.DFSClient.decryptEncryptedDataEncryptionKey(org.apache.hadoop.fs.FileEncryptionInfo) at java.lang.Class.getDeclaredMethod(Class.java:2130) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper.createTransparentCryptoHelperWithoutHDFS12396(FanOutOneBlockAsyncDFSOutputSaslHelper.java:182) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper.createTransparentCryptoHelper(FanOutOneBlockAsyncDFSOutputSaslHelper.java:241) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper.<clinit>(FanOutOneBlockAsyncDFSOutputSaslHelper.java:252) at java.lang.Class.forName0(Native Method) at java.lang.Class.forName(Class.java:264) at org.apache.hadoop.hbase.wal.AsyncFSWALProvider.load(AsyncFSWALProvider.java:140) at org.apache.hadoop.hbase.wal.WALFactory.getProviderClass(WALFactory.java:135) at org.apache.hadoop.hbase.wal.WALFactory.getProvider(WALFactory.java:175) at org.apache.hadoop.hbase.wal.WALFactory.<init>(WALFactory.java:202) at org.apache.hadoop.hbase.wal.WALFactory.<init>(WALFactory.java:182) at org.apache.hadoop.hbase.master.region.MasterRegion.create(MasterRegion.java:339) at org.apache.hadoop.hbase.master.region.MasterRegionFactory.create(MasterRegionFactory.java:104) at org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:855) at 
org.apache.hadoop.hbase.master.HMaster.startActiveMasterManager(HMaster.java:2193) at org.apache.hadoop.hbase.master.HMaster.lambda$run$0(HMaster.java:528) at java.lang.Thread.run(Thread.java:750) 2023-07-17 11:15:06,185 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-17 11:15:06,224 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7693): Creating {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, under table dir hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/MasterData/data/master/store-tmp 2023-07-17 11:15:06,265 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-17 11:15:06,265 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-07-17 11:15:06,265 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-17 11:15:06,265 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-17 11:15:06,265 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-07-17 11:15:06,265 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-17 11:15:06,265 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
2023-07-17 11:15:06,265 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-17 11:15:06,267 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(191): WALDir=hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/MasterData/WALs/jenkins-hbase4.apache.org,38451,1689592503576 2023-07-17 11:15:06,294 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C38451%2C1689592503576, suffix=, logDir=hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/MasterData/WALs/jenkins-hbase4.apache.org,38451,1689592503576, archiveDir=hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/MasterData/oldWALs, maxLogs=10 2023-07-17 11:15:06,355 DEBUG [RS-EventLoopGroup-5-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:33373,DS-8dff280f-8f2a-4cd4-a5d8-f08bc923e8c9,DISK] 2023-07-17 11:15:06,355 DEBUG [RS-EventLoopGroup-5-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:34065,DS-80f43b96-b809-42a0-a60d-6774be1cae92,DISK] 2023-07-17 11:15:06,355 DEBUG [RS-EventLoopGroup-5-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:44359,DS-48b75b58-ef13-4942-ab72-3dac38afc190,DISK] 2023-07-17 11:15:06,364 DEBUG [RS-EventLoopGroup-5-3] asyncfs.ProtobufDecoder(123): Hadoop 3.2 and below use unshaded protobuf. 
java.lang.ClassNotFoundException: org.apache.hadoop.thirdparty.protobuf.MessageLite at java.net.URLClassLoader.findClass(URLClassLoader.java:387) at java.lang.ClassLoader.loadClass(ClassLoader.java:418) at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:352) at java.lang.ClassLoader.loadClass(ClassLoader.java:351) at java.lang.Class.forName0(Native Method) at java.lang.Class.forName(Class.java:264) at org.apache.hadoop.hbase.io.asyncfs.ProtobufDecoder.<clinit>(ProtobufDecoder.java:118) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.processWriteBlockResponse(FanOutOneBlockAsyncDFSOutputHelper.java:340) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.access$100(FanOutOneBlockAsyncDFSOutputHelper.java:112) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper$4.operationComplete(FanOutOneBlockAsyncDFSOutputHelper.java:424) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:590) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:557) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:492) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.addListener(DefaultPromise.java:185) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.initialize(FanOutOneBlockAsyncDFSOutputHelper.java:418) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.access$300(FanOutOneBlockAsyncDFSOutputHelper.java:112) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper$5.operationComplete(FanOutOneBlockAsyncDFSOutputHelper.java:476) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper$5.operationComplete(FanOutOneBlockAsyncDFSOutputHelper.java:471) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:590) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListeners0(DefaultPromise.java:583) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:559) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:492) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.setValue0(DefaultPromise.java:636) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.setSuccess0(DefaultPromise.java:625) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.trySuccess(DefaultPromise.java:105) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPromise.trySuccess(DefaultChannelPromise.java:84) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.fulfillConnectPromise(AbstractEpollChannel.java:653) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.finishConnect(AbstractEpollChannel.java:691) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.epollOutReady(AbstractEpollChannel.java:567) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:489) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:397) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at 
org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) at java.lang.Thread.run(Thread.java:750) 2023-07-17 11:15:06,455 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/MasterData/WALs/jenkins-hbase4.apache.org,38451,1689592503576/jenkins-hbase4.apache.org%2C38451%2C1689592503576.1689592506307 2023-07-17 11:15:06,456 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:44359,DS-48b75b58-ef13-4942-ab72-3dac38afc190,DISK], DatanodeInfoWithStorage[127.0.0.1:33373,DS-8dff280f-8f2a-4cd4-a5d8-f08bc923e8c9,DISK], DatanodeInfoWithStorage[127.0.0.1:34065,DS-80f43b96-b809-42a0-a60d-6774be1cae92,DISK]] 2023-07-17 11:15:06,456 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7854): Opening region: {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''} 2023-07-17 11:15:06,457 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-17 11:15:06,461 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7894): checking encryption for 1595e783b53d99cd5eef43b6debb2682 2023-07-17 11:15:06,463 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7897): checking classloading for 1595e783b53d99cd5eef43b6debb2682 2023-07-17 11:15:06,549 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family proc of region 1595e783b53d99cd5eef43b6debb2682 2023-07-17 11:15:06,557 DEBUG [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc 2023-07-17 11:15:06,590 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1595e783b53d99cd5eef43b6debb2682 columnFamilyName proc 2023-07-17 11:15:06,603 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(310): Store=1595e783b53d99cd5eef43b6debb2682/proc, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, 
compression=NONE 2023-07-17 11:15:06,609 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-07-17 11:15:06,611 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-07-17 11:15:06,635 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1055): writing seq id for 1595e783b53d99cd5eef43b6debb2682 2023-07-17 11:15:06,640 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-17 11:15:06,641 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1072): Opened 1595e783b53d99cd5eef43b6debb2682; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10957108320, jitterRate=0.020460233092308044}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-17 11:15:06,641 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(965): Region open journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-17 11:15:06,643 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(122): Constructor flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000, compactMin=4 2023-07-17 11:15:06,670 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(104): Starting the Region Procedure Store, number threads=5 2023-07-17 11:15:06,670 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(562): Starting 5 core workers (bigger of cpus/4 or 16) with max (burst) worker count=50 2023-07-17 11:15:06,675 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(255): Starting Region Procedure Store lease recovery... 2023-07-17 11:15:06,677 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(582): Recovered RegionProcedureStore lease in 1 msec 2023-07-17 11:15:06,727 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(596): Loaded RegionProcedureStore in 50 msec 2023-07-17 11:15:06,728 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.RemoteProcedureDispatcher(96): Instantiated, coreThreads=3 (allowCoreThreadTimeOut=true), queueMaxSize=32, operationDelay=150 2023-07-17 11:15:06,762 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(253): hbase:meta replica znodes: [] 2023-07-17 11:15:06,769 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.RegionServerTracker(124): Starting RegionServerTracker; 0 have existing ServerCrashProcedures, 0 possibly 'live' servers, and 0 'splitting'. 
2023-07-17 11:15:06,778 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:38451-0x10172fe1c5e0000, quorum=127.0.0.1:49750, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/balancer 2023-07-17 11:15:06,784 INFO [master/jenkins-hbase4:0:becomeActiveMaster] normalizer.RegionNormalizerWorker(118): Normalizer rate limit set to unlimited 2023-07-17 11:15:06,791 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:38451-0x10172fe1c5e0000, quorum=127.0.0.1:49750, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/normalizer 2023-07-17 11:15:06,794 DEBUG [Listener at localhost/45539-EventThread] zookeeper.ZKWatcher(600): master:38451-0x10172fe1c5e0000, quorum=127.0.0.1:49750, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-17 11:15:06,795 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:38451-0x10172fe1c5e0000, quorum=127.0.0.1:49750, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/split 2023-07-17 11:15:06,795 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:38451-0x10172fe1c5e0000, quorum=127.0.0.1:49750, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/merge 2023-07-17 11:15:06,811 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:38451-0x10172fe1c5e0000, quorum=127.0.0.1:49750, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/snapshot-cleanup 2023-07-17 11:15:06,816 DEBUG [Listener at localhost/45539-EventThread] zookeeper.ZKWatcher(600): regionserver:37409-0x10172fe1c5e0001, quorum=127.0.0.1:49750, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-17 11:15:06,816 DEBUG [Listener at localhost/45539-EventThread] zookeeper.ZKWatcher(600): regionserver:40489-0x10172fe1c5e0002, quorum=127.0.0.1:49750, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-17 11:15:06,816 DEBUG [Listener at localhost/45539-EventThread] zookeeper.ZKWatcher(600): regionserver:39617-0x10172fe1c5e0003, quorum=127.0.0.1:49750, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-17 11:15:06,816 DEBUG [Listener at localhost/45539-EventThread] zookeeper.ZKWatcher(600): master:38451-0x10172fe1c5e0000, quorum=127.0.0.1:49750, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-17 11:15:06,816 DEBUG [Listener at localhost/45539-EventThread] zookeeper.ZKWatcher(600): master:38451-0x10172fe1c5e0000, quorum=127.0.0.1:49750, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-17 11:15:06,817 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(744): Active/primary master=jenkins-hbase4.apache.org,38451,1689592503576, sessionid=0x10172fe1c5e0000, setting cluster-up flag (Was=false) 2023-07-17 11:15:06,837 DEBUG [Listener at localhost/45539-EventThread] zookeeper.ZKWatcher(600): master:38451-0x10172fe1c5e0000, quorum=127.0.0.1:49750, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-17 11:15:06,843 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/flush-table-proc/acquired, 
/hbase/flush-table-proc/reached, /hbase/flush-table-proc/abort 2023-07-17 11:15:06,844 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,38451,1689592503576 2023-07-17 11:15:06,849 DEBUG [Listener at localhost/45539-EventThread] zookeeper.ZKWatcher(600): master:38451-0x10172fe1c5e0000, quorum=127.0.0.1:49750, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-17 11:15:06,856 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/online-snapshot/acquired, /hbase/online-snapshot/reached, /hbase/online-snapshot/abort 2023-07-17 11:15:06,857 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,38451,1689592503576 2023-07-17 11:15:06,860 WARN [master/jenkins-hbase4:0:becomeActiveMaster] snapshot.SnapshotManager(302): Couldn't delete working snapshot directory: hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/.hbase-snapshot/.tmp 2023-07-17 11:15:06,947 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2930): Registered master coprocessor service: service=RSGroupAdminService 2023-07-17 11:15:06,950 INFO [RS:2;jenkins-hbase4:39617] regionserver.HRegionServer(951): ClusterId : 519eb990-c8d2-47f8-a629-2d5badc57f62 2023-07-17 11:15:06,950 INFO [RS:1;jenkins-hbase4:40489] regionserver.HRegionServer(951): ClusterId : 519eb990-c8d2-47f8-a629-2d5badc57f62 2023-07-17 11:15:06,950 INFO [RS:0;jenkins-hbase4:37409] regionserver.HRegionServer(951): ClusterId : 519eb990-c8d2-47f8-a629-2d5badc57f62 2023-07-17 11:15:06,959 DEBUG [RS:2;jenkins-hbase4:39617] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-17 11:15:06,959 DEBUG [RS:1;jenkins-hbase4:40489] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-17 11:15:06,959 DEBUG [RS:0;jenkins-hbase4:37409] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-17 11:15:06,964 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] rsgroup.RSGroupInfoManagerImpl(537): Refreshing in Offline mode. 2023-07-17 11:15:06,967 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,38451,1689592503576] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-17 11:15:06,968 DEBUG [RS:0;jenkins-hbase4:37409] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-17 11:15:06,968 DEBUG [RS:2;jenkins-hbase4:39617] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-17 11:15:06,968 DEBUG [RS:1;jenkins-hbase4:40489] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-17 11:15:06,968 DEBUG [RS:2;jenkins-hbase4:39617] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-17 11:15:06,968 DEBUG [RS:0;jenkins-hbase4:37409] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-17 11:15:06,968 INFO [master/jenkins-hbase4:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint loaded, priority=536870911. 
2023-07-17 11:15:06,968 DEBUG [RS:1;jenkins-hbase4:40489] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-17 11:15:06,969 INFO [master/jenkins-hbase4:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver loaded, priority=536870912. 2023-07-17 11:15:06,974 DEBUG [RS:2;jenkins-hbase4:39617] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-17 11:15:06,974 DEBUG [RS:1;jenkins-hbase4:40489] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-17 11:15:06,975 DEBUG [RS:2;jenkins-hbase4:39617] zookeeper.ReadOnlyZKClient(139): Connect 0x4a766cb7 to 127.0.0.1:49750 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-17 11:15:06,974 DEBUG [RS:0;jenkins-hbase4:37409] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-17 11:15:06,979 DEBUG [RS:1;jenkins-hbase4:40489] zookeeper.ReadOnlyZKClient(139): Connect 0x3ee96208 to 127.0.0.1:49750 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-17 11:15:06,979 DEBUG [RS:0;jenkins-hbase4:37409] zookeeper.ReadOnlyZKClient(139): Connect 0x014dbef2 to 127.0.0.1:49750 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-17 11:15:06,993 DEBUG [RS:0;jenkins-hbase4:37409] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@f8a40ab, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-17 11:15:06,994 DEBUG [RS:1;jenkins-hbase4:40489] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@302e9ba5, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-17 11:15:06,993 DEBUG [RS:2;jenkins-hbase4:39617] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@44b19b89, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-17 11:15:06,994 DEBUG [RS:1;jenkins-hbase4:40489] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@34110978, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-17 11:15:06,994 DEBUG [RS:0;jenkins-hbase4:37409] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@bb5d582, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-17 11:15:06,995 DEBUG [RS:2;jenkins-hbase4:39617] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@58c91b2, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-17 11:15:07,024 
DEBUG [RS:0;jenkins-hbase4:37409] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:0;jenkins-hbase4:37409 2023-07-17 11:15:07,024 DEBUG [RS:1;jenkins-hbase4:40489] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:1;jenkins-hbase4:40489 2023-07-17 11:15:07,027 DEBUG [RS:2;jenkins-hbase4:39617] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:2;jenkins-hbase4:39617 2023-07-17 11:15:07,036 INFO [RS:1;jenkins-hbase4:40489] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-17 11:15:07,036 INFO [RS:0;jenkins-hbase4:37409] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-17 11:15:07,038 INFO [RS:0;jenkins-hbase4:37409] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-17 11:15:07,036 INFO [RS:2;jenkins-hbase4:39617] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-17 11:15:07,038 DEBUG [RS:0;jenkins-hbase4:37409] regionserver.HRegionServer(1022): About to register with Master. 2023-07-17 11:15:07,037 INFO [RS:1;jenkins-hbase4:40489] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-17 11:15:07,038 INFO [RS:2;jenkins-hbase4:39617] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-17 11:15:07,038 DEBUG [RS:1;jenkins-hbase4:40489] regionserver.HRegionServer(1022): About to register with Master. 2023-07-17 11:15:07,038 DEBUG [RS:2;jenkins-hbase4:39617] regionserver.HRegionServer(1022): About to register with Master. 2023-07-17 11:15:07,041 INFO [RS:1;jenkins-hbase4:40489] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,38451,1689592503576 with isa=jenkins-hbase4.apache.org/172.31.14.131:40489, startcode=1689592505619 2023-07-17 11:15:07,041 INFO [RS:2;jenkins-hbase4:39617] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,38451,1689592503576 with isa=jenkins-hbase4.apache.org/172.31.14.131:39617, startcode=1689592505673 2023-07-17 11:15:07,041 INFO [RS:0;jenkins-hbase4:37409] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,38451,1689592503576 with isa=jenkins-hbase4.apache.org/172.31.14.131:37409, startcode=1689592505527 2023-07-17 11:15:07,062 DEBUG [RS:0;jenkins-hbase4:37409] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-17 11:15:07,062 DEBUG [RS:1;jenkins-hbase4:40489] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-17 11:15:07,062 DEBUG [RS:2;jenkins-hbase4:39617] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-17 11:15:07,094 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT; InitMetaProcedure table=hbase:meta 2023-07-17 11:15:07,144 INFO [RS-EventLoopGroup-1-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:52703, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.1 (auth:SIMPLE), service=RegionServerStatusService 2023-07-17 11:15:07,147 INFO [RS-EventLoopGroup-1-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:55413, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.0 (auth:SIMPLE), service=RegionServerStatusService 2023-07-17 11:15:07,146 INFO 
[RS-EventLoopGroup-1-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:57445, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.2 (auth:SIMPLE), service=RegionServerStatusService 2023-07-17 11:15:07,155 DEBUG [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=38451] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.ipc.ServerNotRunningYetException: Server is not running yet at org.apache.hadoop.hbase.master.HMaster.checkServiceStarted(HMaster.java:2832) at org.apache.hadoop.hbase.master.MasterRpcServices.regionServerStartup(MasterRpcServices.java:579) at org.apache.hadoop.hbase.shaded.protobuf.generated.RegionServerStatusProtos$RegionServerStatusService$2.callBlockingMethod(RegionServerStatusProtos.java:15952) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-17 11:15:07,168 DEBUG [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=38451] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.ipc.ServerNotRunningYetException: Server is not running yet at org.apache.hadoop.hbase.master.HMaster.checkServiceStarted(HMaster.java:2832) at org.apache.hadoop.hbase.master.MasterRpcServices.regionServerStartup(MasterRpcServices.java:579) at org.apache.hadoop.hbase.shaded.protobuf.generated.RegionServerStatusProtos$RegionServerStatusService$2.callBlockingMethod(RegionServerStatusProtos.java:15952) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-17 11:15:07,169 DEBUG [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=38451] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.ipc.ServerNotRunningYetException: Server is not running yet at org.apache.hadoop.hbase.master.HMaster.checkServiceStarted(HMaster.java:2832) at org.apache.hadoop.hbase.master.MasterRpcServices.regionServerStartup(MasterRpcServices.java:579) at org.apache.hadoop.hbase.shaded.protobuf.generated.RegionServerStatusProtos$RegionServerStatusService$2.callBlockingMethod(RegionServerStatusProtos.java:15952) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-17 11:15:07,176 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-17 11:15:07,181 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 
2023-07-17 11:15:07,182 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-17 11:15:07,182 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 2023-07-17 11:15:07,183 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_OPEN_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-17 11:15:07,184 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_CLOSE_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-17 11:15:07,184 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-17 11:15:07,184 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_META_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-17 11:15:07,184 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=M_LOG_REPLAY_OPS-master/jenkins-hbase4:0, corePoolSize=10, maxPoolSize=10 2023-07-17 11:15:07,184 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-17 11:15:07,184 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_MERGE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-17 11:15:07,184 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_TABLE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-17 11:15:07,186 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner; timeout=30000, timestamp=1689592537186 2023-07-17 11:15:07,188 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): log_cleaner Cleaner pool size is 1 2023-07-17 11:15:07,190 DEBUG [RS:2;jenkins-hbase4:39617] regionserver.HRegionServer(2830): Master is not running yet 2023-07-17 11:15:07,190 DEBUG [RS:0;jenkins-hbase4:37409] regionserver.HRegionServer(2830): Master is not running yet 2023-07-17 11:15:07,190 WARN [RS:2;jenkins-hbase4:39617] regionserver.HRegionServer(1030): reportForDuty failed; sleeping 100 ms and then retrying. 2023-07-17 11:15:07,190 DEBUG [RS:1;jenkins-hbase4:40489] regionserver.HRegionServer(2830): Master is not running yet 2023-07-17 11:15:07,190 WARN [RS:0;jenkins-hbase4:37409] regionserver.HRegionServer(1030): reportForDuty failed; sleeping 100 ms and then retrying. 
2023-07-17 11:15:07,190 WARN [RS:1;jenkins-hbase4:40489] regionserver.HRegionServer(1030): reportForDuty failed; sleeping 100 ms and then retrying. 2023-07-17 11:15:07,193 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner 2023-07-17 11:15:07,195 DEBUG [PEWorker-2] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT, locked=true; InitMetaProcedure table=hbase:meta 2023-07-17 11:15:07,195 INFO [PEWorker-2] procedure.InitMetaProcedure(71): BOOTSTRAP: creating hbase:meta region 2023-07-17 11:15:07,198 INFO [PEWorker-2] util.FSTableDescriptors(128): Creating new hbase:meta table descriptor 'hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-07-17 11:15:07,206 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner 2023-07-17 11:15:07,207 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreWALCleaner 2023-07-17 11:15:07,208 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveProcedureWALCleaner 2023-07-17 11:15:07,208 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.LogCleaner(148): Creating 1 old WALs cleaner threads 2023-07-17 11:15:07,209 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=LogsCleaner, period=600000, unit=MILLISECONDS is enabled. 
2023-07-17 11:15:07,210 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): hfile_cleaner Cleaner pool size is 2 2023-07-17 11:15:07,213 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreHFileCleaner 2023-07-17 11:15:07,213 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.HFileLinkCleaner 2023-07-17 11:15:07,217 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.snapshot.SnapshotHFileCleaner 2023-07-17 11:15:07,218 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner 2023-07-17 11:15:07,220 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(242): Starting for large file=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1689592507220,5,FailOnTimeoutGroup] 2023-07-17 11:15:07,222 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(257): Starting for small files=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1689592507220,5,FailOnTimeoutGroup] 2023-07-17 11:15:07,223 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HFileCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-07-17 11:15:07,223 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1461): Reopening regions with very high storeFileRefCount is disabled. Provide threshold value > 0 for hbase.regions.recovery.store.file.ref.count to enable it. 2023-07-17 11:15:07,225 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=ReplicationBarrierCleaner, period=43200000, unit=MILLISECONDS is enabled. 2023-07-17 11:15:07,225 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=SnapshotCleaner, period=1800000, unit=MILLISECONDS is enabled. 
2023-07-17 11:15:07,262 DEBUG [PEWorker-2] util.FSTableDescriptors(570): Wrote into hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-07-17 11:15:07,265 INFO [PEWorker-2] util.FSTableDescriptors(135): Updated hbase:meta table descriptor to hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-07-17 11:15:07,265 INFO [PEWorker-2] regionserver.HRegion(7675): creating {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e 2023-07-17 11:15:07,291 INFO [RS:0;jenkins-hbase4:37409] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,38451,1689592503576 with isa=jenkins-hbase4.apache.org/172.31.14.131:37409, startcode=1689592505527 2023-07-17 11:15:07,291 INFO [RS:2;jenkins-hbase4:39617] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,38451,1689592503576 with isa=jenkins-hbase4.apache.org/172.31.14.131:39617, startcode=1689592505673 2023-07-17 11:15:07,292 INFO [RS:1;jenkins-hbase4:40489] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,38451,1689592503576 with isa=jenkins-hbase4.apache.org/172.31.14.131:40489, startcode=1689592505619 2023-07-17 11:15:07,297 DEBUG [PEWorker-2] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-17 11:15:07,298 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=38451] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,37409,1689592505527 2023-07-17 11:15:07,300 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,38451,1689592503576] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
2023-07-17 11:15:07,302 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,38451,1689592503576] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 1 2023-07-17 11:15:07,302 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-17 11:15:07,305 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/data/hbase/meta/1588230740/info 2023-07-17 11:15:07,305 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=38451] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,40489,1689592505619 2023-07-17 11:15:07,306 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,38451,1689592503576] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-17 11:15:07,306 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,38451,1689592503576] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 2 2023-07-17 11:15:07,306 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-17 11:15:07,306 DEBUG [RS:0;jenkins-hbase4:37409] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e 2023-07-17 11:15:07,307 DEBUG [RS:0;jenkins-hbase4:37409] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:41739 2023-07-17 11:15:07,307 DEBUG [RS:0;jenkins-hbase4:37409] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=34497 2023-07-17 11:15:07,308 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=38451] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,39617,1689592505673 2023-07-17 11:15:07,308 DEBUG [RS:1;jenkins-hbase4:40489] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e 2023-07-17 11:15:07,308 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,38451,1689592503576] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
2023-07-17 11:15:07,308 DEBUG [RS:1;jenkins-hbase4:40489] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:41739 2023-07-17 11:15:07,309 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,38451,1689592503576] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 3 2023-07-17 11:15:07,309 DEBUG [RS:1;jenkins-hbase4:40489] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=34497 2023-07-17 11:15:07,309 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-17 11:15:07,310 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-17 11:15:07,319 DEBUG [RS:2;jenkins-hbase4:39617] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e 2023-07-17 11:15:07,319 DEBUG [RS:2;jenkins-hbase4:39617] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:41739 2023-07-17 11:15:07,319 DEBUG [RS:2;jenkins-hbase4:39617] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=34497 2023-07-17 11:15:07,320 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/data/hbase/meta/1588230740/rep_barrier 2023-07-17 11:15:07,321 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-17 11:15:07,322 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-17 11:15:07,322 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-17 11:15:07,327 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/data/hbase/meta/1588230740/table 2023-07-17 11:15:07,328 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, 
offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-17 11:15:07,329 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-17 11:15:07,330 DEBUG [PEWorker-2] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/data/hbase/meta/1588230740 2023-07-17 11:15:07,331 DEBUG [PEWorker-2] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/data/hbase/meta/1588230740 2023-07-17 11:15:07,332 DEBUG [Listener at localhost/45539-EventThread] zookeeper.ZKWatcher(600): master:38451-0x10172fe1c5e0000, quorum=127.0.0.1:49750, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-17 11:15:07,333 DEBUG [RS:2;jenkins-hbase4:39617] zookeeper.ZKUtil(162): regionserver:39617-0x10172fe1c5e0003, quorum=127.0.0.1:49750, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,39617,1689592505673 2023-07-17 11:15:07,333 WARN [RS:2;jenkins-hbase4:39617] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-17 11:15:07,333 INFO [RS:2;jenkins-hbase4:39617] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-17 11:15:07,333 DEBUG [RS:1;jenkins-hbase4:40489] zookeeper.ZKUtil(162): regionserver:40489-0x10172fe1c5e0002, quorum=127.0.0.1:49750, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,40489,1689592505619 2023-07-17 11:15:07,333 DEBUG [RS:0;jenkins-hbase4:37409] zookeeper.ZKUtil(162): regionserver:37409-0x10172fe1c5e0001, quorum=127.0.0.1:49750, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,37409,1689592505527 2023-07-17 11:15:07,333 DEBUG [RS:2;jenkins-hbase4:39617] regionserver.HRegionServer(1948): logDir=hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/WALs/jenkins-hbase4.apache.org,39617,1689592505673 2023-07-17 11:15:07,333 WARN [RS:1;jenkins-hbase4:40489] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-17 11:15:07,333 WARN [RS:0;jenkins-hbase4:37409] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-17 11:15:07,336 DEBUG [PEWorker-2] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 
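The FlushLargeStoresPolicy fallback above (42.7 M, later reported as flushSizeLowerBound=44739242) is simply the region memstore flush size divided by the number of column families, since hbase.hregion.percolumnfamilyflush.size.lower.bound is unset. A minimal arithmetic sketch, assuming the stock 128 MB flush size and the three hbase:meta families (info, rep_barrier, table); the class name is hypothetical:

    public class MetaFlushLowerBoundSketch {
        public static void main(String[] args) {
            long memstoreFlushSize = 128L * 1024 * 1024;  // default flush size, 134217728 bytes
            int families = 3;                              // info, rep_barrier, table
            long flushSizeLowerBound = memstoreFlushSize / families;
            System.out.println(flushSizeLowerBound);       // 44739242 (~42.7 MB), as logged
        }
    }
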
2023-07-17 11:15:07,338 DEBUG [PEWorker-2] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-17 11:15:07,335 INFO [RS:1;jenkins-hbase4:40489] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-17 11:15:07,343 DEBUG [RS:1;jenkins-hbase4:40489] regionserver.HRegionServer(1948): logDir=hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/WALs/jenkins-hbase4.apache.org,40489,1689592505619 2023-07-17 11:15:07,336 INFO [RS:0;jenkins-hbase4:37409] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-17 11:15:07,344 DEBUG [RS:0;jenkins-hbase4:37409] regionserver.HRegionServer(1948): logDir=hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/WALs/jenkins-hbase4.apache.org,37409,1689592505527 2023-07-17 11:15:07,348 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,40489,1689592505619] 2023-07-17 11:15:07,349 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,37409,1689592505527] 2023-07-17 11:15:07,349 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,39617,1689592505673] 2023-07-17 11:15:07,351 DEBUG [PEWorker-2] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/data/hbase/meta/1588230740/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-17 11:15:07,354 INFO [PEWorker-2] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11450362400, jitterRate=0.06639809906482697}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-17 11:15:07,354 DEBUG [PEWorker-2] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-17 11:15:07,354 DEBUG [PEWorker-2] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-17 11:15:07,354 INFO [PEWorker-2] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-17 11:15:07,354 DEBUG [PEWorker-2] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-17 11:15:07,354 DEBUG [PEWorker-2] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-17 11:15:07,355 DEBUG [PEWorker-2] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-17 11:15:07,356 INFO [PEWorker-2] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-17 11:15:07,356 DEBUG [PEWorker-2] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-17 11:15:07,359 DEBUG [RS:0;jenkins-hbase4:37409] zookeeper.ZKUtil(162): regionserver:37409-0x10172fe1c5e0001, quorum=127.0.0.1:49750, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,37409,1689592505527 2023-07-17 11:15:07,360 DEBUG [RS:0;jenkins-hbase4:37409] zookeeper.ZKUtil(162): regionserver:37409-0x10172fe1c5e0001, quorum=127.0.0.1:49750, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,40489,1689592505619 2023-07-17 11:15:07,361 DEBUG [RS:0;jenkins-hbase4:37409] zookeeper.ZKUtil(162): 
regionserver:37409-0x10172fe1c5e0001, quorum=127.0.0.1:49750, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,39617,1689592505673 2023-07-17 11:15:07,362 DEBUG [RS:2;jenkins-hbase4:39617] zookeeper.ZKUtil(162): regionserver:39617-0x10172fe1c5e0003, quorum=127.0.0.1:49750, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,37409,1689592505527 2023-07-17 11:15:07,364 DEBUG [RS:2;jenkins-hbase4:39617] zookeeper.ZKUtil(162): regionserver:39617-0x10172fe1c5e0003, quorum=127.0.0.1:49750, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,40489,1689592505619 2023-07-17 11:15:07,365 DEBUG [RS:2;jenkins-hbase4:39617] zookeeper.ZKUtil(162): regionserver:39617-0x10172fe1c5e0003, quorum=127.0.0.1:49750, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,39617,1689592505673 2023-07-17 11:15:07,365 DEBUG [RS:1;jenkins-hbase4:40489] zookeeper.ZKUtil(162): regionserver:40489-0x10172fe1c5e0002, quorum=127.0.0.1:49750, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,37409,1689592505527 2023-07-17 11:15:07,365 DEBUG [PEWorker-2] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_ASSIGN_META, locked=true; InitMetaProcedure table=hbase:meta 2023-07-17 11:15:07,365 INFO [PEWorker-2] procedure.InitMetaProcedure(103): Going to assign meta 2023-07-17 11:15:07,365 DEBUG [RS:1;jenkins-hbase4:40489] zookeeper.ZKUtil(162): regionserver:40489-0x10172fe1c5e0002, quorum=127.0.0.1:49750, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,40489,1689592505619 2023-07-17 11:15:07,366 DEBUG [RS:1;jenkins-hbase4:40489] zookeeper.ZKUtil(162): regionserver:40489-0x10172fe1c5e0002, quorum=127.0.0.1:49750, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,39617,1689592505673 2023-07-17 11:15:07,376 DEBUG [RS:1;jenkins-hbase4:40489] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-17 11:15:07,376 DEBUG [RS:2;jenkins-hbase4:39617] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-17 11:15:07,376 DEBUG [RS:0;jenkins-hbase4:37409] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-17 11:15:07,378 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN}] 2023-07-17 11:15:07,388 INFO [RS:1;jenkins-hbase4:40489] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-17 11:15:07,388 INFO [RS:0;jenkins-hbase4:37409] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-17 11:15:07,389 INFO [RS:2;jenkins-hbase4:39617] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-17 11:15:07,392 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN 2023-07-17 11:15:07,396 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, 
ASSIGN; state=OFFLINE, location=null; forceNewPlan=false, retain=false 2023-07-17 11:15:07,455 INFO [RS:2;jenkins-hbase4:39617] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-17 11:15:07,455 INFO [RS:0;jenkins-hbase4:37409] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-17 11:15:07,455 INFO [RS:1;jenkins-hbase4:40489] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-17 11:15:07,462 INFO [RS:0;jenkins-hbase4:37409] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-17 11:15:07,462 INFO [RS:2;jenkins-hbase4:39617] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-17 11:15:07,462 INFO [RS:0;jenkins-hbase4:37409] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-17 11:15:07,463 INFO [RS:2;jenkins-hbase4:39617] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-17 11:15:07,462 INFO [RS:1;jenkins-hbase4:40489] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-17 11:15:07,463 INFO [RS:1;jenkins-hbase4:40489] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-17 11:15:07,464 INFO [RS:0;jenkins-hbase4:37409] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-17 11:15:07,464 INFO [RS:2;jenkins-hbase4:39617] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-17 11:15:07,464 INFO [RS:1;jenkins-hbase4:40489] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-17 11:15:07,473 INFO [RS:0;jenkins-hbase4:37409] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-17 11:15:07,473 INFO [RS:1;jenkins-hbase4:40489] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-17 11:15:07,473 INFO [RS:2;jenkins-hbase4:39617] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
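The MemStoreFlusher lines above report globalMemStoreLimit=782.4 M with a low-water mark of 743.3 M. A rough consistency check, assuming the stock fractions (0.4 of heap for the global limit, 0.95 of that for the low mark); the implied heap size is inferred here, not present in the log, and the class name is hypothetical:

    public class MemStoreLimitSketch {
        public static void main(String[] args) {
            double globalLimitMb = 782.4;               // globalMemStoreLimit from the log
            double lowMarkMb = globalLimitMb * 0.95;    // ~743.3 MB, the logged low-water mark
            double impliedHeapMb = globalLimitMb / 0.4; // ~1956 MB of test JVM heap (inferred)
            System.out.printf("lowMark=%.1f MB, impliedHeap=%.0f MB%n", lowMarkMb, impliedHeapMb);
        }
    }
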
2023-07-17 11:15:07,474 DEBUG [RS:0;jenkins-hbase4:37409] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-17 11:15:07,474 DEBUG [RS:1;jenkins-hbase4:40489] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-17 11:15:07,474 DEBUG [RS:0;jenkins-hbase4:37409] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-17 11:15:07,474 DEBUG [RS:1;jenkins-hbase4:40489] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-17 11:15:07,474 DEBUG [RS:2;jenkins-hbase4:39617] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-17 11:15:07,474 DEBUG [RS:1;jenkins-hbase4:40489] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-17 11:15:07,475 DEBUG [RS:2;jenkins-hbase4:39617] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-17 11:15:07,475 DEBUG [RS:1;jenkins-hbase4:40489] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-17 11:15:07,475 DEBUG [RS:2;jenkins-hbase4:39617] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-17 11:15:07,475 DEBUG [RS:1;jenkins-hbase4:40489] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-17 11:15:07,475 DEBUG [RS:2;jenkins-hbase4:39617] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-17 11:15:07,475 DEBUG [RS:1;jenkins-hbase4:40489] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-17 11:15:07,475 DEBUG [RS:2;jenkins-hbase4:39617] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-17 11:15:07,475 DEBUG [RS:1;jenkins-hbase4:40489] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-17 11:15:07,475 DEBUG [RS:2;jenkins-hbase4:39617] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-17 11:15:07,475 DEBUG [RS:1;jenkins-hbase4:40489] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-17 11:15:07,475 DEBUG [RS:2;jenkins-hbase4:39617] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-17 11:15:07,475 DEBUG [RS:1;jenkins-hbase4:40489] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, 
corePoolSize=1, maxPoolSize=1 2023-07-17 11:15:07,475 DEBUG [RS:2;jenkins-hbase4:39617] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-17 11:15:07,475 DEBUG [RS:1;jenkins-hbase4:40489] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-17 11:15:07,475 DEBUG [RS:2;jenkins-hbase4:39617] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-17 11:15:07,474 DEBUG [RS:0;jenkins-hbase4:37409] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-17 11:15:07,476 DEBUG [RS:2;jenkins-hbase4:39617] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-17 11:15:07,476 DEBUG [RS:0;jenkins-hbase4:37409] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-17 11:15:07,476 DEBUG [RS:0;jenkins-hbase4:37409] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-17 11:15:07,477 DEBUG [RS:0;jenkins-hbase4:37409] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-17 11:15:07,477 INFO [RS:1;jenkins-hbase4:40489] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-17 11:15:07,477 DEBUG [RS:0;jenkins-hbase4:37409] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-17 11:15:07,477 INFO [RS:1;jenkins-hbase4:40489] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-17 11:15:07,477 DEBUG [RS:0;jenkins-hbase4:37409] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-17 11:15:07,477 INFO [RS:1;jenkins-hbase4:40489] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-17 11:15:07,477 DEBUG [RS:0;jenkins-hbase4:37409] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-17 11:15:07,477 DEBUG [RS:0;jenkins-hbase4:37409] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-17 11:15:07,483 INFO [RS:2;jenkins-hbase4:39617] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-17 11:15:07,483 INFO [RS:2;jenkins-hbase4:39617] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-17 11:15:07,483 INFO [RS:2;jenkins-hbase4:39617] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 
2023-07-17 11:15:07,487 INFO [RS:0;jenkins-hbase4:37409] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-17 11:15:07,487 INFO [RS:0;jenkins-hbase4:37409] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-17 11:15:07,487 INFO [RS:0;jenkins-hbase4:37409] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-17 11:15:07,497 INFO [RS:1;jenkins-hbase4:40489] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-17 11:15:07,499 INFO [RS:2;jenkins-hbase4:39617] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-17 11:15:07,502 INFO [RS:1;jenkins-hbase4:40489] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,40489,1689592505619-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-17 11:15:07,502 INFO [RS:2;jenkins-hbase4:39617] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,39617,1689592505673-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-17 11:15:07,503 INFO [RS:0;jenkins-hbase4:37409] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-17 11:15:07,503 INFO [RS:0;jenkins-hbase4:37409] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,37409,1689592505527-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-17 11:15:07,527 INFO [RS:1;jenkins-hbase4:40489] regionserver.Replication(203): jenkins-hbase4.apache.org,40489,1689592505619 started 2023-07-17 11:15:07,527 INFO [RS:2;jenkins-hbase4:39617] regionserver.Replication(203): jenkins-hbase4.apache.org,39617,1689592505673 started 2023-07-17 11:15:07,528 INFO [RS:1;jenkins-hbase4:40489] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,40489,1689592505619, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:40489, sessionid=0x10172fe1c5e0002 2023-07-17 11:15:07,528 INFO [RS:0;jenkins-hbase4:37409] regionserver.Replication(203): jenkins-hbase4.apache.org,37409,1689592505527 started 2023-07-17 11:15:07,528 INFO [RS:0;jenkins-hbase4:37409] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,37409,1689592505527, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:37409, sessionid=0x10172fe1c5e0001 2023-07-17 11:15:07,528 INFO [RS:2;jenkins-hbase4:39617] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,39617,1689592505673, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:39617, sessionid=0x10172fe1c5e0003 2023-07-17 11:15:07,528 DEBUG [RS:0;jenkins-hbase4:37409] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-17 11:15:07,528 DEBUG [RS:2;jenkins-hbase4:39617] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-17 11:15:07,528 DEBUG [RS:0;jenkins-hbase4:37409] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,37409,1689592505527 2023-07-17 11:15:07,528 DEBUG [RS:1;jenkins-hbase4:40489] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-17 11:15:07,529 DEBUG [RS:0;jenkins-hbase4:37409] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,37409,1689592505527' 2023-07-17 11:15:07,528 DEBUG [RS:2;jenkins-hbase4:39617] 
flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,39617,1689592505673 2023-07-17 11:15:07,529 DEBUG [RS:0;jenkins-hbase4:37409] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-17 11:15:07,529 DEBUG [RS:1;jenkins-hbase4:40489] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,40489,1689592505619 2023-07-17 11:15:07,529 DEBUG [RS:2;jenkins-hbase4:39617] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,39617,1689592505673' 2023-07-17 11:15:07,530 DEBUG [RS:1;jenkins-hbase4:40489] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,40489,1689592505619' 2023-07-17 11:15:07,530 DEBUG [RS:1;jenkins-hbase4:40489] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-17 11:15:07,530 DEBUG [RS:2;jenkins-hbase4:39617] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-17 11:15:07,531 DEBUG [RS:0;jenkins-hbase4:37409] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-17 11:15:07,531 DEBUG [RS:1;jenkins-hbase4:40489] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-17 11:15:07,531 DEBUG [RS:2;jenkins-hbase4:39617] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-17 11:15:07,531 DEBUG [RS:0;jenkins-hbase4:37409] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-17 11:15:07,532 DEBUG [RS:1;jenkins-hbase4:40489] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-17 11:15:07,532 DEBUG [RS:1;jenkins-hbase4:40489] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-17 11:15:07,532 DEBUG [RS:0;jenkins-hbase4:37409] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-17 11:15:07,532 DEBUG [RS:1;jenkins-hbase4:40489] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,40489,1689592505619 2023-07-17 11:15:07,533 DEBUG [RS:2;jenkins-hbase4:39617] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-17 11:15:07,532 DEBUG [RS:0;jenkins-hbase4:37409] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,37409,1689592505527 2023-07-17 11:15:07,534 DEBUG [RS:2;jenkins-hbase4:39617] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-17 11:15:07,534 DEBUG [RS:1;jenkins-hbase4:40489] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,40489,1689592505619' 2023-07-17 11:15:07,534 DEBUG [RS:1;jenkins-hbase4:40489] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-17 11:15:07,534 DEBUG [RS:2;jenkins-hbase4:39617] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,39617,1689592505673 2023-07-17 11:15:07,534 DEBUG [RS:2;jenkins-hbase4:39617] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 
'jenkins-hbase4.apache.org,39617,1689592505673' 2023-07-17 11:15:07,534 DEBUG [RS:2;jenkins-hbase4:39617] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-17 11:15:07,534 DEBUG [RS:0;jenkins-hbase4:37409] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,37409,1689592505527' 2023-07-17 11:15:07,534 DEBUG [RS:0;jenkins-hbase4:37409] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-17 11:15:07,535 DEBUG [RS:0;jenkins-hbase4:37409] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-17 11:15:07,535 DEBUG [RS:1;jenkins-hbase4:40489] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-17 11:15:07,535 DEBUG [RS:2;jenkins-hbase4:39617] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-17 11:15:07,536 DEBUG [RS:0;jenkins-hbase4:37409] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-17 11:15:07,536 DEBUG [RS:1;jenkins-hbase4:40489] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-17 11:15:07,536 INFO [RS:0;jenkins-hbase4:37409] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-17 11:15:07,536 INFO [RS:1;jenkins-hbase4:40489] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-17 11:15:07,536 DEBUG [RS:2;jenkins-hbase4:39617] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-17 11:15:07,536 INFO [RS:0;jenkins-hbase4:37409] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 2023-07-17 11:15:07,536 INFO [RS:2;jenkins-hbase4:39617] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-17 11:15:07,536 INFO [RS:2;jenkins-hbase4:39617] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 2023-07-17 11:15:07,536 INFO [RS:1;jenkins-hbase4:40489] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
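Each region server above reports RPC and space quota support as disabled, which is the out-of-the-box behaviour. A one-line illustration of the switch that would enable it (sketch only; the wrapper class name is hypothetical):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class QuotaSwitchSketch {
        public static Configuration withQuotas() {
            Configuration conf = HBaseConfiguration.create();
            conf.setBoolean("hbase.quota.enabled", true); // quotas are off by default, hence the messages above
            return conf;
        }
    }
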
2023-07-17 11:15:07,548 DEBUG [jenkins-hbase4:38451] assignment.AssignmentManager(2176): Processing assignQueue; systemServersCount=3, allServersCount=3 2023-07-17 11:15:07,568 DEBUG [jenkins-hbase4:38451] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-17 11:15:07,570 DEBUG [jenkins-hbase4:38451] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-17 11:15:07,570 DEBUG [jenkins-hbase4:38451] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-17 11:15:07,570 DEBUG [jenkins-hbase4:38451] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-17 11:15:07,570 DEBUG [jenkins-hbase4:38451] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-17 11:15:07,574 INFO [PEWorker-3] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,40489,1689592505619, state=OPENING 2023-07-17 11:15:07,583 DEBUG [PEWorker-3] zookeeper.MetaTableLocator(240): hbase:meta region location doesn't exist, create it 2023-07-17 11:15:07,585 DEBUG [Listener at localhost/45539-EventThread] zookeeper.ZKWatcher(600): master:38451-0x10172fe1c5e0000, quorum=127.0.0.1:49750, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-17 11:15:07,586 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-17 11:15:07,590 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=3, ppid=2, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,40489,1689592505619}] 2023-07-17 11:15:07,651 INFO [RS:2;jenkins-hbase4:39617] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C39617%2C1689592505673, suffix=, logDir=hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/WALs/jenkins-hbase4.apache.org,39617,1689592505673, archiveDir=hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/oldWALs, maxLogs=32 2023-07-17 11:15:07,651 INFO [RS:0;jenkins-hbase4:37409] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C37409%2C1689592505527, suffix=, logDir=hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/WALs/jenkins-hbase4.apache.org,37409,1689592505527, archiveDir=hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/oldWALs, maxLogs=32 2023-07-17 11:15:07,651 INFO [RS:1;jenkins-hbase4:40489] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C40489%2C1689592505619, suffix=, logDir=hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/WALs/jenkins-hbase4.apache.org,40489,1689592505619, archiveDir=hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/oldWALs, maxLogs=32 2023-07-17 11:15:07,689 DEBUG [RS-EventLoopGroup-5-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:33373,DS-8dff280f-8f2a-4cd4-a5d8-f08bc923e8c9,DISK] 2023-07-17 11:15:07,689 DEBUG [RS-EventLoopGroup-5-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL 
client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:44359,DS-48b75b58-ef13-4942-ab72-3dac38afc190,DISK] 2023-07-17 11:15:07,689 DEBUG [RS-EventLoopGroup-5-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:34065,DS-80f43b96-b809-42a0-a60d-6774be1cae92,DISK] 2023-07-17 11:15:07,690 DEBUG [RS-EventLoopGroup-5-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:34065,DS-80f43b96-b809-42a0-a60d-6774be1cae92,DISK] 2023-07-17 11:15:07,690 DEBUG [RS-EventLoopGroup-5-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:44359,DS-48b75b58-ef13-4942-ab72-3dac38afc190,DISK] 2023-07-17 11:15:07,690 DEBUG [RS-EventLoopGroup-5-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:34065,DS-80f43b96-b809-42a0-a60d-6774be1cae92,DISK] 2023-07-17 11:15:07,691 DEBUG [RS-EventLoopGroup-5-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:33373,DS-8dff280f-8f2a-4cd4-a5d8-f08bc923e8c9,DISK] 2023-07-17 11:15:07,700 DEBUG [RS-EventLoopGroup-5-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:44359,DS-48b75b58-ef13-4942-ab72-3dac38afc190,DISK] 2023-07-17 11:15:07,700 DEBUG [RS-EventLoopGroup-5-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:33373,DS-8dff280f-8f2a-4cd4-a5d8-f08bc923e8c9,DISK] 2023-07-17 11:15:07,706 INFO [RS:2;jenkins-hbase4:39617] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/WALs/jenkins-hbase4.apache.org,39617,1689592505673/jenkins-hbase4.apache.org%2C39617%2C1689592505673.1689592507657 2023-07-17 11:15:07,706 INFO [RS:1;jenkins-hbase4:40489] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/WALs/jenkins-hbase4.apache.org,40489,1689592505619/jenkins-hbase4.apache.org%2C40489%2C1689592505619.1689592507657 2023-07-17 11:15:07,707 INFO [RS:0;jenkins-hbase4:37409] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/WALs/jenkins-hbase4.apache.org,37409,1689592505527/jenkins-hbase4.apache.org%2C37409%2C1689592505527.1689592507657 2023-07-17 11:15:07,710 DEBUG [RS:2;jenkins-hbase4:39617] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:33373,DS-8dff280f-8f2a-4cd4-a5d8-f08bc923e8c9,DISK], DatanodeInfoWithStorage[127.0.0.1:44359,DS-48b75b58-ef13-4942-ab72-3dac38afc190,DISK], DatanodeInfoWithStorage[127.0.0.1:34065,DS-80f43b96-b809-42a0-a60d-6774be1cae92,DISK]] 2023-07-17 11:15:07,713 DEBUG [RS:1;jenkins-hbase4:40489] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with 
pipeline: [DatanodeInfoWithStorage[127.0.0.1:44359,DS-48b75b58-ef13-4942-ab72-3dac38afc190,DISK], DatanodeInfoWithStorage[127.0.0.1:33373,DS-8dff280f-8f2a-4cd4-a5d8-f08bc923e8c9,DISK], DatanodeInfoWithStorage[127.0.0.1:34065,DS-80f43b96-b809-42a0-a60d-6774be1cae92,DISK]] 2023-07-17 11:15:07,713 DEBUG [RS:0;jenkins-hbase4:37409] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:34065,DS-80f43b96-b809-42a0-a60d-6774be1cae92,DISK], DatanodeInfoWithStorage[127.0.0.1:44359,DS-48b75b58-ef13-4942-ab72-3dac38afc190,DISK], DatanodeInfoWithStorage[127.0.0.1:33373,DS-8dff280f-8f2a-4cd4-a5d8-f08bc923e8c9,DISK]] 2023-07-17 11:15:07,771 WARN [ReadOnlyZKClient-127.0.0.1:49750@0x3db4d6ad] client.ZKConnectionRegistry(168): Meta region is in state OPENING 2023-07-17 11:15:07,777 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,40489,1689592505619 2023-07-17 11:15:07,779 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-17 11:15:07,783 INFO [RS-EventLoopGroup-4-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:36966, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-17 11:15:07,795 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740 2023-07-17 11:15:07,795 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-17 11:15:07,799 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,38451,1689592503576] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-17 11:15:07,801 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C40489%2C1689592505619.meta, suffix=.meta, logDir=hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/WALs/jenkins-hbase4.apache.org,40489,1689592505619, archiveDir=hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/oldWALs, maxLogs=32 2023-07-17 11:15:07,807 INFO [RS-EventLoopGroup-4-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:36978, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-17 11:15:07,808 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=40489] ipc.CallRunner(144): callId: 1 service: ClientService methodName: Get size: 88 connection: 172.31.14.131:36978 deadline: 1689592567807, exception=org.apache.hadoop.hbase.exceptions.RegionOpeningException: Region hbase:meta,,1 is opening on jenkins-hbase4.apache.org,40489,1689592505619 2023-07-17 11:15:07,826 DEBUG [RS-EventLoopGroup-5-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:44359,DS-48b75b58-ef13-4942-ab72-3dac38afc190,DISK] 2023-07-17 11:15:07,826 DEBUG [RS-EventLoopGroup-5-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:33373,DS-8dff280f-8f2a-4cd4-a5d8-f08bc923e8c9,DISK] 
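Each region server above instantiates AsyncFSWALProvider and fans its WAL out to the three mini-cluster DataNodes. As a sketch of how that provider is normally selected (class name hypothetical; "asyncfs" is the 2.x default mapping to AsyncFSWALProvider, "filesystem" would select the classic FSHLog-based provider):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class WalProviderSketch {
        public static Configuration withAsyncFsWal() {
            Configuration conf = HBaseConfiguration.create();
            conf.set("hbase.wal.provider", "asyncfs"); // provider instantiated in the log above
            return conf;
        }
    }
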
2023-07-17 11:15:07,829 DEBUG [RS-EventLoopGroup-5-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:34065,DS-80f43b96-b809-42a0-a60d-6774be1cae92,DISK] 2023-07-17 11:15:07,836 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/WALs/jenkins-hbase4.apache.org,40489,1689592505619/jenkins-hbase4.apache.org%2C40489%2C1689592505619.meta.1689592507807.meta 2023-07-17 11:15:07,837 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:44359,DS-48b75b58-ef13-4942-ab72-3dac38afc190,DISK], DatanodeInfoWithStorage[127.0.0.1:33373,DS-8dff280f-8f2a-4cd4-a5d8-f08bc923e8c9,DISK], DatanodeInfoWithStorage[127.0.0.1:34065,DS-80f43b96-b809-42a0-a60d-6774be1cae92,DISK]] 2023-07-17 11:15:07,837 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-07-17 11:15:07,840 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-17 11:15:07,844 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2023-07-17 11:15:07,846 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 
2023-07-17 11:15:07,853 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-07-17 11:15:07,854 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-17 11:15:07,854 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 1588230740 2023-07-17 11:15:07,854 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 1588230740 2023-07-17 11:15:07,857 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-17 11:15:07,859 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/data/hbase/meta/1588230740/info 2023-07-17 11:15:07,859 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/data/hbase/meta/1588230740/info 2023-07-17 11:15:07,859 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-17 11:15:07,860 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-17 11:15:07,861 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-17 11:15:07,862 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/data/hbase/meta/1588230740/rep_barrier 2023-07-17 11:15:07,862 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/data/hbase/meta/1588230740/rep_barrier 2023-07-17 11:15:07,863 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; 
off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-17 11:15:07,864 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-17 11:15:07,864 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-17 11:15:07,865 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/data/hbase/meta/1588230740/table 2023-07-17 11:15:07,865 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/data/hbase/meta/1588230740/table 2023-07-17 11:15:07,866 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-17 11:15:07,866 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-17 11:15:07,868 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/data/hbase/meta/1588230740 2023-07-17 11:15:07,871 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/data/hbase/meta/1588230740 2023-07-17 11:15:07,875 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 
2023-07-17 11:15:07,877 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-17 11:15:07,882 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11514290080, jitterRate=0.07235182821750641}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-17 11:15:07,882 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-17 11:15:07,898 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:meta,,1.1588230740, pid=3, masterSystemTime=1689592507768 2023-07-17 11:15:07,920 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:meta,,1.1588230740 2023-07-17 11:15:07,920 INFO [PEWorker-5] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,40489,1689592505619, state=OPEN 2023-07-17 11:15:07,922 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 2023-07-17 11:15:07,924 DEBUG [Listener at localhost/45539-EventThread] zookeeper.ZKWatcher(600): master:38451-0x10172fe1c5e0000, quorum=127.0.0.1:49750, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-17 11:15:07,924 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-17 11:15:07,928 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=3, resume processing ppid=2 2023-07-17 11:15:07,928 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=3, ppid=2, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,40489,1689592505619 in 334 msec 2023-07-17 11:15:07,935 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=2, resume processing ppid=1 2023-07-17 11:15:07,935 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=2, ppid=1, state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN in 553 msec 2023-07-17 11:15:07,949 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=1, state=SUCCESS; InitMetaProcedure table=hbase:meta in 958 msec 2023-07-17 11:15:07,949 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(953): Wait for region servers to report in: status=null, state=RUNNING, startTime=1689592507949, completionTime=-1 2023-07-17 11:15:07,949 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ServerManager(821): Finished waiting on RegionServer count=3; waited=0ms, expected min=3 server(s), max=3 server(s), master is running 2023-07-17 11:15:07,950 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1517): Joining cluster... 
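At this point InitMetaProcedure, the meta ASSIGN and OpenRegionProcedure have all finished and hbase:meta is OPEN on jenkins-hbase4.apache.org,40489. A minimal client-side readiness check, not part of the test itself, using only the public 2.x client API (the class name is hypothetical):

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.Result;
    import org.apache.hadoop.hbase.client.ResultScanner;
    import org.apache.hadoop.hbase.client.Scan;
    import org.apache.hadoop.hbase.client.Table;
    import org.apache.hadoop.hbase.util.Bytes;

    public class MetaReadinessCheck {
        public static void main(String[] args) throws Exception {
            // Scans hbase:meta; this only succeeds once the region has transitioned to OPEN,
            // as it has in the procedure log above.
            try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
                 Table meta = conn.getTable(TableName.META_TABLE_NAME);
                 ResultScanner scanner = meta.getScanner(new Scan())) {
                for (Result row : scanner) {
                    System.out.println(Bytes.toStringBinary(row.getRow()));
                }
            }
        }
    }
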
2023-07-17 11:15:08,023 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1529): Number of RegionServers=3 2023-07-17 11:15:08,023 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$RegionInTransitionChore; timeout=60000, timestamp=1689592568023 2023-07-17 11:15:08,023 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$DeadServerMetricRegionChore; timeout=120000, timestamp=1689592628023 2023-07-17 11:15:08,024 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1536): Joined the cluster in 73 msec 2023-07-17 11:15:08,051 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,38451,1689592503576-ClusterStatusChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-17 11:15:08,051 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,38451,1689592503576-BalancerChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-17 11:15:08,051 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,38451,1689592503576-RegionNormalizerChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-17 11:15:08,057 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=CatalogJanitor-jenkins-hbase4:38451, period=300000, unit=MILLISECONDS is enabled. 2023-07-17 11:15:08,057 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HbckChore-, period=3600000, unit=MILLISECONDS is enabled. 2023-07-17 11:15:08,065 DEBUG [master/jenkins-hbase4:0.Chore.1] janitor.CatalogJanitor(175): 2023-07-17 11:15:08,086 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.TableNamespaceManager(92): Namespace table not found. Creating... 
2023-07-17 11:15:08,088 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2148): Client=null/null create 'hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-07-17 11:15:08,101 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:namespace 2023-07-17 11:15:08,105 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_PRE_OPERATION 2023-07-17 11:15:08,108 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-17 11:15:08,131 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/.tmp/data/hbase/namespace/6a4a58dee597d7e2caeeea613b990689 2023-07-17 11:15:08,135 DEBUG [HFileArchiver-1] backup.HFileArchiver(153): Directory hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/.tmp/data/hbase/namespace/6a4a58dee597d7e2caeeea613b990689 empty. 2023-07-17 11:15:08,136 DEBUG [HFileArchiver-1] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/.tmp/data/hbase/namespace/6a4a58dee597d7e2caeeea613b990689 2023-07-17 11:15:08,136 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived hbase:namespace regions 2023-07-17 11:15:08,216 DEBUG [PEWorker-3] util.FSTableDescriptors(570): Wrote into hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/.tmp/data/hbase/namespace/.tabledesc/.tableinfo.0000000001 2023-07-17 11:15:08,219 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(7675): creating {ENCODED => 6a4a58dee597d7e2caeeea613b990689, NAME => 'hbase:namespace,,1689592508087.6a4a58dee597d7e2caeeea613b990689.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/.tmp 2023-07-17 11:15:08,238 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1689592508087.6a4a58dee597d7e2caeeea613b990689.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-17 11:15:08,238 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1604): Closing 6a4a58dee597d7e2caeeea613b990689, disabling compactions & flushes 2023-07-17 11:15:08,238 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1689592508087.6a4a58dee597d7e2caeeea613b990689. 
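The master logs the full descriptor it uses when creating 'hbase:namespace'. The master builds this internally, but an equivalent descriptor could be assembled with the public 2.x builder API; a sketch under that assumption (the class name is hypothetical, and only the non-default attributes from the logged descriptor are shown):

    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.TableDescriptor;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
    import org.apache.hadoop.hbase.regionserver.BloomType;
    import org.apache.hadoop.hbase.util.Bytes;

    public class NamespaceDescriptorSketch {
        public static TableDescriptor build() {
            return TableDescriptorBuilder.newBuilder(TableName.valueOf("hbase:namespace"))
                .setColumnFamily(ColumnFamilyDescriptorBuilder.newBuilder(Bytes.toBytes("info"))
                    .setBloomFilterType(BloomType.ROW) // BLOOMFILTER => 'ROW'
                    .setInMemory(true)                 // IN_MEMORY => 'true'
                    .setMaxVersions(10)                // VERSIONS => '10'
                    .setBlocksize(8192)                // BLOCKSIZE => '8192'
                    .build())
                .build();
        }
    }
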
2023-07-17 11:15:08,238 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1689592508087.6a4a58dee597d7e2caeeea613b990689. 2023-07-17 11:15:08,238 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1689592508087.6a4a58dee597d7e2caeeea613b990689. after waiting 0 ms 2023-07-17 11:15:08,238 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1689592508087.6a4a58dee597d7e2caeeea613b990689. 2023-07-17 11:15:08,238 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1838): Closed hbase:namespace,,1689592508087.6a4a58dee597d7e2caeeea613b990689. 2023-07-17 11:15:08,238 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1558): Region close journal for 6a4a58dee597d7e2caeeea613b990689: 2023-07-17 11:15:08,242 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ADD_TO_META 2023-07-17 11:15:08,261 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:namespace,,1689592508087.6a4a58dee597d7e2caeeea613b990689.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689592508245"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689592508245"}]},"ts":"1689592508245"} 2023-07-17 11:15:08,291 INFO [PEWorker-3] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-17 11:15:08,292 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-17 11:15:08,297 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689592508293"}]},"ts":"1689592508293"} 2023-07-17 11:15:08,301 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLING in hbase:meta 2023-07-17 11:15:08,305 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-17 11:15:08,305 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-17 11:15:08,305 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-17 11:15:08,305 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-17 11:15:08,305 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-17 11:15:08,307 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=6a4a58dee597d7e2caeeea613b990689, ASSIGN}] 2023-07-17 11:15:08,311 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=6a4a58dee597d7e2caeeea613b990689, ASSIGN 2023-07-17 11:15:08,312 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, 
locked=true; TransitRegionStateProcedure table=hbase:namespace, region=6a4a58dee597d7e2caeeea613b990689, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,40489,1689592505619; forceNewPlan=false, retain=false 2023-07-17 11:15:08,323 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,38451,1689592503576] master.HMaster(2148): Client=null/null create 'hbase:rsgroup', {TABLE_ATTRIBUTES => {coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|', METADATA => {'SPLIT_POLICY' => 'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy'}}}, {NAME => 'm', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-17 11:15:08,325 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,38451,1689592503576] procedure2.ProcedureExecutor(1029): Stored pid=6, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:rsgroup 2023-07-17 11:15:08,327 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_PRE_OPERATION 2023-07-17 11:15:08,329 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-17 11:15:08,333 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/.tmp/data/hbase/rsgroup/d5111e6d7162bf03312675d4d0d3f80c 2023-07-17 11:15:08,333 DEBUG [HFileArchiver-2] backup.HFileArchiver(153): Directory hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/.tmp/data/hbase/rsgroup/d5111e6d7162bf03312675d4d0d3f80c empty. 
2023-07-17 11:15:08,334 DEBUG [HFileArchiver-2] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/.tmp/data/hbase/rsgroup/d5111e6d7162bf03312675d4d0d3f80c 2023-07-17 11:15:08,334 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(328): Archived hbase:rsgroup regions 2023-07-17 11:15:08,359 DEBUG [PEWorker-5] util.FSTableDescriptors(570): Wrote into hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/.tmp/data/hbase/rsgroup/.tabledesc/.tableinfo.0000000001 2023-07-17 11:15:08,364 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(7675): creating {ENCODED => d5111e6d7162bf03312675d4d0d3f80c, NAME => 'hbase:rsgroup,,1689592508323.d5111e6d7162bf03312675d4d0d3f80c.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:rsgroup', {TABLE_ATTRIBUTES => {coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|', METADATA => {'SPLIT_POLICY' => 'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy'}}}, {NAME => 'm', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/.tmp 2023-07-17 11:15:08,389 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1689592508323.d5111e6d7162bf03312675d4d0d3f80c.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-17 11:15:08,389 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1604): Closing d5111e6d7162bf03312675d4d0d3f80c, disabling compactions & flushes 2023-07-17 11:15:08,389 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1689592508323.d5111e6d7162bf03312675d4d0d3f80c. 2023-07-17 11:15:08,389 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1689592508323.d5111e6d7162bf03312675d4d0d3f80c. 2023-07-17 11:15:08,389 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1689592508323.d5111e6d7162bf03312675d4d0d3f80c. after waiting 0 ms 2023-07-17 11:15:08,389 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1689592508323.d5111e6d7162bf03312675d4d0d3f80c. 2023-07-17 11:15:08,389 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1689592508323.d5111e6d7162bf03312675d4d0d3f80c. 
2023-07-17 11:15:08,389 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1558): Region close journal for d5111e6d7162bf03312675d4d0d3f80c: 2023-07-17 11:15:08,394 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_ADD_TO_META 2023-07-17 11:15:08,395 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:rsgroup,,1689592508323.d5111e6d7162bf03312675d4d0d3f80c.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689592508395"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689592508395"}]},"ts":"1689592508395"} 2023-07-17 11:15:08,401 INFO [PEWorker-5] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-17 11:15:08,403 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-17 11:15:08,403 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:rsgroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689592508403"}]},"ts":"1689592508403"} 2023-07-17 11:15:08,409 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=hbase:rsgroup, state=ENABLING in hbase:meta 2023-07-17 11:15:08,417 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-17 11:15:08,417 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-17 11:15:08,417 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-17 11:15:08,417 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-17 11:15:08,417 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-17 11:15:08,418 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=7, ppid=6, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:rsgroup, region=d5111e6d7162bf03312675d4d0d3f80c, ASSIGN}] 2023-07-17 11:15:08,420 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=7, ppid=6, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:rsgroup, region=d5111e6d7162bf03312675d4d0d3f80c, ASSIGN 2023-07-17 11:15:08,421 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=7, ppid=6, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:rsgroup, region=d5111e6d7162bf03312675d4d0d3f80c, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,39617,1689592505673; forceNewPlan=false, retain=false 2023-07-17 11:15:08,422 INFO [jenkins-hbase4:38451] balancer.BaseLoadBalancer(1545): Reassigned 2 regions. 2 retained the pre-restart assignment. 
2023-07-17 11:15:08,424 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=6a4a58dee597d7e2caeeea613b990689, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,40489,1689592505619 2023-07-17 11:15:08,424 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=7 updating hbase:meta row=d5111e6d7162bf03312675d4d0d3f80c, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,39617,1689592505673 2023-07-17 11:15:08,424 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:namespace,,1689592508087.6a4a58dee597d7e2caeeea613b990689.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689592508424"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689592508424"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689592508424"}]},"ts":"1689592508424"} 2023-07-17 11:15:08,424 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:rsgroup,,1689592508323.d5111e6d7162bf03312675d4d0d3f80c.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689592508424"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689592508424"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689592508424"}]},"ts":"1689592508424"} 2023-07-17 11:15:08,427 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=8, ppid=5, state=RUNNABLE; OpenRegionProcedure 6a4a58dee597d7e2caeeea613b990689, server=jenkins-hbase4.apache.org,40489,1689592505619}] 2023-07-17 11:15:08,428 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=9, ppid=7, state=RUNNABLE; OpenRegionProcedure d5111e6d7162bf03312675d4d0d3f80c, server=jenkins-hbase4.apache.org,39617,1689592505673}] 2023-07-17 11:15:08,582 DEBUG [RSProcedureDispatcher-pool-2] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,39617,1689592505673 2023-07-17 11:15:08,583 DEBUG [RSProcedureDispatcher-pool-2] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-17 11:15:08,586 INFO [RS-EventLoopGroup-5-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:35748, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-17 11:15:08,588 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1689592508087.6a4a58dee597d7e2caeeea613b990689. 
2023-07-17 11:15:08,588 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 6a4a58dee597d7e2caeeea613b990689, NAME => 'hbase:namespace,,1689592508087.6a4a58dee597d7e2caeeea613b990689.', STARTKEY => '', ENDKEY => ''} 2023-07-17 11:15:08,589 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace 6a4a58dee597d7e2caeeea613b990689 2023-07-17 11:15:08,589 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1689592508087.6a4a58dee597d7e2caeeea613b990689.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-17 11:15:08,589 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 6a4a58dee597d7e2caeeea613b990689 2023-07-17 11:15:08,589 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 6a4a58dee597d7e2caeeea613b990689 2023-07-17 11:15:08,592 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:rsgroup,,1689592508323.d5111e6d7162bf03312675d4d0d3f80c. 2023-07-17 11:15:08,592 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => d5111e6d7162bf03312675d4d0d3f80c, NAME => 'hbase:rsgroup,,1689592508323.d5111e6d7162bf03312675d4d0d3f80c.', STARTKEY => '', ENDKEY => ''} 2023-07-17 11:15:08,592 INFO [StoreOpener-6a4a58dee597d7e2caeeea613b990689-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 6a4a58dee597d7e2caeeea613b990689 2023-07-17 11:15:08,592 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-17 11:15:08,592 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:rsgroup,,1689592508323.d5111e6d7162bf03312675d4d0d3f80c. service=MultiRowMutationService 2023-07-17 11:15:08,593 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:rsgroup successfully. 
2023-07-17 11:15:08,593 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table rsgroup d5111e6d7162bf03312675d4d0d3f80c 2023-07-17 11:15:08,593 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1689592508323.d5111e6d7162bf03312675d4d0d3f80c.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-17 11:15:08,593 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for d5111e6d7162bf03312675d4d0d3f80c 2023-07-17 11:15:08,593 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for d5111e6d7162bf03312675d4d0d3f80c 2023-07-17 11:15:08,594 DEBUG [StoreOpener-6a4a58dee597d7e2caeeea613b990689-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/data/hbase/namespace/6a4a58dee597d7e2caeeea613b990689/info 2023-07-17 11:15:08,595 DEBUG [StoreOpener-6a4a58dee597d7e2caeeea613b990689-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/data/hbase/namespace/6a4a58dee597d7e2caeeea613b990689/info 2023-07-17 11:15:08,596 INFO [StoreOpener-6a4a58dee597d7e2caeeea613b990689-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 6a4a58dee597d7e2caeeea613b990689 columnFamilyName info 2023-07-17 11:15:08,596 INFO [StoreOpener-d5111e6d7162bf03312675d4d0d3f80c-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family m of region d5111e6d7162bf03312675d4d0d3f80c 2023-07-17 11:15:08,596 INFO [StoreOpener-6a4a58dee597d7e2caeeea613b990689-1] regionserver.HStore(310): Store=6a4a58dee597d7e2caeeea613b990689/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-17 11:15:08,599 DEBUG [StoreOpener-d5111e6d7162bf03312675d4d0d3f80c-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/data/hbase/rsgroup/d5111e6d7162bf03312675d4d0d3f80c/m 2023-07-17 11:15:08,599 DEBUG [StoreOpener-d5111e6d7162bf03312675d4d0d3f80c-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/data/hbase/rsgroup/d5111e6d7162bf03312675d4d0d3f80c/m 2023-07-17 11:15:08,600 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 
recovered edits file(s) under hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/data/hbase/namespace/6a4a58dee597d7e2caeeea613b990689 2023-07-17 11:15:08,600 INFO [StoreOpener-d5111e6d7162bf03312675d4d0d3f80c-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region d5111e6d7162bf03312675d4d0d3f80c columnFamilyName m 2023-07-17 11:15:08,600 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/data/hbase/namespace/6a4a58dee597d7e2caeeea613b990689 2023-07-17 11:15:08,601 INFO [StoreOpener-d5111e6d7162bf03312675d4d0d3f80c-1] regionserver.HStore(310): Store=d5111e6d7162bf03312675d4d0d3f80c/m, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-17 11:15:08,602 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/data/hbase/rsgroup/d5111e6d7162bf03312675d4d0d3f80c 2023-07-17 11:15:08,603 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/data/hbase/rsgroup/d5111e6d7162bf03312675d4d0d3f80c 2023-07-17 11:15:08,605 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 6a4a58dee597d7e2caeeea613b990689 2023-07-17 11:15:08,607 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for d5111e6d7162bf03312675d4d0d3f80c 2023-07-17 11:15:08,609 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/data/hbase/namespace/6a4a58dee597d7e2caeeea613b990689/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-17 11:15:08,610 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 6a4a58dee597d7e2caeeea613b990689; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11088363040, jitterRate=0.03268428146839142}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-17 11:15:08,610 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 6a4a58dee597d7e2caeeea613b990689: 2023-07-17 11:15:08,610 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote 
file=hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/data/hbase/rsgroup/d5111e6d7162bf03312675d4d0d3f80c/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-17 11:15:08,611 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened d5111e6d7162bf03312675d4d0d3f80c; next sequenceid=2; org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy@5d5e164c, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-17 11:15:08,611 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for d5111e6d7162bf03312675d4d0d3f80c: 2023-07-17 11:15:08,612 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:namespace,,1689592508087.6a4a58dee597d7e2caeeea613b990689., pid=8, masterSystemTime=1689592508580 2023-07-17 11:15:08,612 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:rsgroup,,1689592508323.d5111e6d7162bf03312675d4d0d3f80c., pid=9, masterSystemTime=1689592508582 2023-07-17 11:15:08,616 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:namespace,,1689592508087.6a4a58dee597d7e2caeeea613b990689. 2023-07-17 11:15:08,616 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1689592508087.6a4a58dee597d7e2caeeea613b990689. 2023-07-17 11:15:08,618 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:rsgroup,,1689592508323.d5111e6d7162bf03312675d4d0d3f80c. 2023-07-17 11:15:08,619 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:rsgroup,,1689592508323.d5111e6d7162bf03312675d4d0d3f80c. 
2023-07-17 11:15:08,619 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=6a4a58dee597d7e2caeeea613b990689, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,40489,1689592505619 2023-07-17 11:15:08,619 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1689592508087.6a4a58dee597d7e2caeeea613b990689.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689592508618"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689592508618"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689592508618"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689592508618"}]},"ts":"1689592508618"} 2023-07-17 11:15:08,620 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=7 updating hbase:meta row=d5111e6d7162bf03312675d4d0d3f80c, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,39617,1689592505673 2023-07-17 11:15:08,620 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:rsgroup,,1689592508323.d5111e6d7162bf03312675d4d0d3f80c.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689592508620"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689592508620"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689592508620"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689592508620"}]},"ts":"1689592508620"} 2023-07-17 11:15:08,628 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=8, resume processing ppid=5 2023-07-17 11:15:08,629 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=8, ppid=5, state=SUCCESS; OpenRegionProcedure 6a4a58dee597d7e2caeeea613b990689, server=jenkins-hbase4.apache.org,40489,1689592505619 in 197 msec 2023-07-17 11:15:08,632 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=9, resume processing ppid=7 2023-07-17 11:15:08,632 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=9, ppid=7, state=SUCCESS; OpenRegionProcedure d5111e6d7162bf03312675d4d0d3f80c, server=jenkins-hbase4.apache.org,39617,1689592505673 in 199 msec 2023-07-17 11:15:08,634 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=5, resume processing ppid=4 2023-07-17 11:15:08,634 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=5, ppid=4, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=6a4a58dee597d7e2caeeea613b990689, ASSIGN in 322 msec 2023-07-17 11:15:08,635 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-17 11:15:08,636 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689592508635"}]},"ts":"1689592508635"} 2023-07-17 11:15:08,636 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=7, resume processing ppid=6 2023-07-17 11:15:08,636 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=7, ppid=6, state=SUCCESS; TransitRegionStateProcedure table=hbase:rsgroup, region=d5111e6d7162bf03312675d4d0d3f80c, ASSIGN in 214 msec 2023-07-17 11:15:08,637 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, 
locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-17 11:15:08,638 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:rsgroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689592508638"}]},"ts":"1689592508638"} 2023-07-17 11:15:08,638 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLED in hbase:meta 2023-07-17 11:15:08,640 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=hbase:rsgroup, state=ENABLED in hbase:meta 2023-07-17 11:15:08,641 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_POST_OPERATION 2023-07-17 11:15:08,643 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_POST_OPERATION 2023-07-17 11:15:08,644 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=4, state=SUCCESS; CreateTableProcedure table=hbase:namespace in 551 msec 2023-07-17 11:15:08,645 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=6, state=SUCCESS; CreateTableProcedure table=hbase:rsgroup in 319 msec 2023-07-17 11:15:08,704 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:38451-0x10172fe1c5e0000, quorum=127.0.0.1:49750, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/namespace 2023-07-17 11:15:08,705 DEBUG [Listener at localhost/45539-EventThread] zookeeper.ZKWatcher(600): master:38451-0x10172fe1c5e0000, quorum=127.0.0.1:49750, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/namespace 2023-07-17 11:15:08,706 DEBUG [Listener at localhost/45539-EventThread] zookeeper.ZKWatcher(600): master:38451-0x10172fe1c5e0000, quorum=127.0.0.1:49750, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-17 11:15:08,732 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,38451,1689592503576] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-17 11:15:08,733 INFO [RS-EventLoopGroup-5-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:35756, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-17 11:15:08,736 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,38451,1689592503576] rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(840): RSGroup table=hbase:rsgroup is online, refreshing cached information 2023-07-17 11:15:08,736 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,38451,1689592503576] rsgroup.RSGroupInfoManagerImpl(534): Refreshing in Online mode. 
2023-07-17 11:15:08,753 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=10, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=default 2023-07-17 11:15:08,769 DEBUG [Listener at localhost/45539-EventThread] zookeeper.ZKWatcher(600): master:38451-0x10172fe1c5e0000, quorum=127.0.0.1:49750, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-17 11:15:08,775 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=10, state=SUCCESS; CreateNamespaceProcedure, namespace=default in 39 msec 2023-07-17 11:15:08,786 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=11, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=hbase 2023-07-17 11:15:08,799 DEBUG [Listener at localhost/45539-EventThread] zookeeper.ZKWatcher(600): master:38451-0x10172fe1c5e0000, quorum=127.0.0.1:49750, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-17 11:15:08,807 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=11, state=SUCCESS; CreateNamespaceProcedure, namespace=hbase in 18 msec 2023-07-17 11:15:08,824 DEBUG [Listener at localhost/45539-EventThread] zookeeper.ZKWatcher(600): master:38451-0x10172fe1c5e0000, quorum=127.0.0.1:49750, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/default 2023-07-17 11:15:08,827 DEBUG [Listener at localhost/45539-EventThread] zookeeper.ZKWatcher(600): master:38451-0x10172fe1c5e0000, quorum=127.0.0.1:49750, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-17 11:15:08,827 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,38451,1689592503576] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-17 11:15:08,828 DEBUG [Listener at localhost/45539-EventThread] zookeeper.ZKWatcher(600): master:38451-0x10172fe1c5e0000, quorum=127.0.0.1:49750, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/hbase 2023-07-17 11:15:08,828 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1083): Master has completed initialization 3.029sec 2023-07-17 11:15:08,831 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,38451,1689592503576] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-17 11:15:08,831 INFO [master/jenkins-hbase4:0:becomeActiveMaster] quotas.MasterQuotaManager(97): Quota support disabled 2023-07-17 11:15:08,832 INFO [master/jenkins-hbase4:0:becomeActiveMaster] slowlog.SlowLogMasterService(57): Slow/Large requests logging to system table hbase:slowlog is disabled. Quitting. 2023-07-17 11:15:08,833 INFO [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKWatcher(269): not a secure deployment, proceeding 2023-07-17 11:15:08,835 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,38451,1689592503576-ExpiredMobFileCleanerChore, period=86400, unit=SECONDS is enabled. 
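[Context sketch, not part of this test run] The two CreateNamespaceProcedure entries above (pid=10 and pid=11) are the master bootstrapping the built-in 'default' and 'hbase' namespaces. A client creates its own namespace through the same procedure; the sketch below assumes a reachable cluster and uses a hypothetical namespace name "demo_ns".

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.NamespaceDescriptor;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class CreateNamespaceExample {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
         Admin admin = conn.getAdmin()) {
      // Submits a CreateNamespaceProcedure on the master, analogous to pid=10/pid=11 above,
      // and the master then updates the /hbase/namespace znode as seen in the watcher events.
      admin.createNamespace(NamespaceDescriptor.create("demo_ns").build());
    }
  }
}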
2023-07-17 11:15:08,836 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,38451,1689592503576-MobCompactionChore, period=604800, unit=SECONDS is enabled. 2023-07-17 11:15:08,839 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,38451,1689592503576] rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(823): GroupBasedLoadBalancer is now online 2023-07-17 11:15:08,848 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1175): Balancer post startup initialization complete, took 0 seconds 2023-07-17 11:15:08,856 DEBUG [Listener at localhost/45539] zookeeper.ReadOnlyZKClient(139): Connect 0x62c69654 to 127.0.0.1:49750 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-17 11:15:08,861 DEBUG [Listener at localhost/45539] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@2c6d4550, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-17 11:15:08,875 DEBUG [hconnection-0x2c378da6-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-17 11:15:08,888 INFO [RS-EventLoopGroup-4-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:36988, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-17 11:15:08,899 INFO [Listener at localhost/45539] hbase.HBaseTestingUtility(1145): Minicluster is up; activeMaster=jenkins-hbase4.apache.org,38451,1689592503576 2023-07-17 11:15:08,900 INFO [Listener at localhost/45539] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-17 11:15:08,910 DEBUG [Listener at localhost/45539] ipc.RpcConnection(124): Using SIMPLE authentication for service=MasterService, sasl=false 2023-07-17 11:15:08,913 INFO [RS-EventLoopGroup-1-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:36004, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=MasterService 2023-07-17 11:15:08,927 DEBUG [Listener at localhost/45539-EventThread] zookeeper.ZKWatcher(600): master:38451-0x10172fe1c5e0000, quorum=127.0.0.1:49750, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/balancer 2023-07-17 11:15:08,927 DEBUG [Listener at localhost/45539-EventThread] zookeeper.ZKWatcher(600): master:38451-0x10172fe1c5e0000, quorum=127.0.0.1:49750, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-17 11:15:08,928 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(492): Client=jenkins//172.31.14.131 set balanceSwitch=false 2023-07-17 11:15:08,932 DEBUG [Listener at localhost/45539] zookeeper.ReadOnlyZKClient(139): Connect 0x3604583d to 127.0.0.1:49750 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-17 11:15:08,938 DEBUG [Listener at localhost/45539] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@78ef668c, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-17 11:15:08,938 INFO [Listener at localhost/45539] zookeeper.RecoverableZooKeeper(93): Process 
identifier=VerifyingRSGroupAdminClient connecting to ZooKeeper ensemble=127.0.0.1:49750 2023-07-17 11:15:08,942 DEBUG [Listener at localhost/45539-EventThread] zookeeper.ZKWatcher(600): VerifyingRSGroupAdminClient0x0, quorum=127.0.0.1:49750, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-17 11:15:08,943 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): VerifyingRSGroupAdminClient-0x10172fe1c5e000a connected 2023-07-17 11:15:08,972 INFO [Listener at localhost/45539] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testTableMoveTruncateAndDrop Thread=424, OpenFileDescriptor=681, MaxFileDescriptor=60000, SystemLoadAverage=500, ProcessCount=172, AvailableMemoryMB=3402 2023-07-17 11:15:08,974 INFO [Listener at localhost/45539] rsgroup.TestRSGroupsBase(132): testTableMoveTruncateAndDrop 2023-07-17 11:15:09,000 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-17 11:15:09,002 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-17 11:15:09,044 INFO [Listener at localhost/45539] rsgroup.TestRSGroupsBase(152): Restoring servers: 1 2023-07-17 11:15:09,057 INFO [Listener at localhost/45539] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-17 11:15:09,057 INFO [Listener at localhost/45539] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-17 11:15:09,057 INFO [Listener at localhost/45539] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-17 11:15:09,057 INFO [Listener at localhost/45539] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-17 11:15:09,057 INFO [Listener at localhost/45539] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-17 11:15:09,057 INFO [Listener at localhost/45539] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-17 11:15:09,058 INFO [Listener at localhost/45539] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-17 11:15:09,061 INFO [Listener at localhost/45539] ipc.NettyRpcServer(120): Bind to /172.31.14.131:35719 2023-07-17 11:15:09,062 INFO [Listener at localhost/45539] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-17 11:15:09,063 DEBUG [Listener at localhost/45539] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-17 11:15:09,064 INFO [Listener at localhost/45539] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-17 11:15:09,068 INFO [Listener at 
localhost/45539] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-17 11:15:09,070 INFO [Listener at localhost/45539] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:35719 connecting to ZooKeeper ensemble=127.0.0.1:49750 2023-07-17 11:15:09,074 DEBUG [Listener at localhost/45539-EventThread] zookeeper.ZKWatcher(600): regionserver:357190x0, quorum=127.0.0.1:49750, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-17 11:15:09,075 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:35719-0x10172fe1c5e000b connected 2023-07-17 11:15:09,075 DEBUG [Listener at localhost/45539] zookeeper.ZKUtil(162): regionserver:35719-0x10172fe1c5e000b, quorum=127.0.0.1:49750, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-17 11:15:09,077 DEBUG [Listener at localhost/45539] zookeeper.ZKUtil(162): regionserver:35719-0x10172fe1c5e000b, quorum=127.0.0.1:49750, baseZNode=/hbase Set watcher on existing znode=/hbase/running 2023-07-17 11:15:09,078 DEBUG [Listener at localhost/45539] zookeeper.ZKUtil(164): regionserver:35719-0x10172fe1c5e000b, quorum=127.0.0.1:49750, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-17 11:15:09,079 DEBUG [Listener at localhost/45539] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=35719 2023-07-17 11:15:09,079 DEBUG [Listener at localhost/45539] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=35719 2023-07-17 11:15:09,079 DEBUG [Listener at localhost/45539] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=35719 2023-07-17 11:15:09,080 DEBUG [Listener at localhost/45539] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=35719 2023-07-17 11:15:09,080 DEBUG [Listener at localhost/45539] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=35719 2023-07-17 11:15:09,082 INFO [Listener at localhost/45539] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-17 11:15:09,082 INFO [Listener at localhost/45539] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-17 11:15:09,082 INFO [Listener at localhost/45539] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-17 11:15:09,083 INFO [Listener at localhost/45539] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-17 11:15:09,083 INFO [Listener at localhost/45539] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-17 11:15:09,083 INFO [Listener at localhost/45539] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-17 11:15:09,083 INFO [Listener at localhost/45539] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and 
async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-17 11:15:09,084 INFO [Listener at localhost/45539] http.HttpServer(1146): Jetty bound to port 46023 2023-07-17 11:15:09,084 INFO [Listener at localhost/45539] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-17 11:15:09,085 INFO [Listener at localhost/45539] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-17 11:15:09,085 INFO [Listener at localhost/45539] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@39ac2a37{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/b3748a71-8fb9-11d9-80d5-27b001236be5/hadoop.log.dir/,AVAILABLE} 2023-07-17 11:15:09,085 INFO [Listener at localhost/45539] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-17 11:15:09,086 INFO [Listener at localhost/45539] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@e79409d{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,AVAILABLE} 2023-07-17 11:15:09,093 INFO [Listener at localhost/45539] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-17 11:15:09,094 INFO [Listener at localhost/45539] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-17 11:15:09,094 INFO [Listener at localhost/45539] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-17 11:15:09,094 INFO [Listener at localhost/45539] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-17 11:15:09,095 INFO [Listener at localhost/45539] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-17 11:15:09,096 INFO [Listener at localhost/45539] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@526688c5{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver/,AVAILABLE}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver} 2023-07-17 11:15:09,098 INFO [Listener at localhost/45539] server.AbstractConnector(333): Started ServerConnector@55286062{HTTP/1.1, (http/1.1)}{0.0.0.0:46023} 2023-07-17 11:15:09,098 INFO [Listener at localhost/45539] server.Server(415): Started @11497ms 2023-07-17 11:15:09,100 INFO [RS:3;jenkins-hbase4:35719] regionserver.HRegionServer(951): ClusterId : 519eb990-c8d2-47f8-a629-2d5badc57f62 2023-07-17 11:15:09,100 DEBUG [RS:3;jenkins-hbase4:35719] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-17 11:15:09,103 DEBUG [RS:3;jenkins-hbase4:35719] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-17 11:15:09,103 DEBUG [RS:3;jenkins-hbase4:35719] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-17 11:15:09,106 DEBUG [RS:3;jenkins-hbase4:35719] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-17 11:15:09,107 DEBUG [RS:3;jenkins-hbase4:35719] zookeeper.ReadOnlyZKClient(139): Connect 0x0eff2867 to 127.0.0.1:49750 with session timeout=90000ms, retries 30, retry interval 1000ms, 
keepAlive=60000ms 2023-07-17 11:15:09,112 DEBUG [RS:3;jenkins-hbase4:35719] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@38d7c8, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-17 11:15:09,112 DEBUG [RS:3;jenkins-hbase4:35719] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@4074317, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-17 11:15:09,121 DEBUG [RS:3;jenkins-hbase4:35719] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:3;jenkins-hbase4:35719 2023-07-17 11:15:09,121 INFO [RS:3;jenkins-hbase4:35719] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-17 11:15:09,121 INFO [RS:3;jenkins-hbase4:35719] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-17 11:15:09,121 DEBUG [RS:3;jenkins-hbase4:35719] regionserver.HRegionServer(1022): About to register with Master. 2023-07-17 11:15:09,122 INFO [RS:3;jenkins-hbase4:35719] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,38451,1689592503576 with isa=jenkins-hbase4.apache.org/172.31.14.131:35719, startcode=1689592509057 2023-07-17 11:15:09,123 DEBUG [RS:3;jenkins-hbase4:35719] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-17 11:15:09,127 INFO [RS-EventLoopGroup-1-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:36387, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.3 (auth:SIMPLE), service=RegionServerStatusService 2023-07-17 11:15:09,127 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=38451] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,35719,1689592509057 2023-07-17 11:15:09,127 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,38451,1689592503576] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
2023-07-17 11:15:09,128 DEBUG [RS:3;jenkins-hbase4:35719] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e 2023-07-17 11:15:09,128 DEBUG [RS:3;jenkins-hbase4:35719] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:41739 2023-07-17 11:15:09,128 DEBUG [RS:3;jenkins-hbase4:35719] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=34497 2023-07-17 11:15:09,134 DEBUG [Listener at localhost/45539-EventThread] zookeeper.ZKWatcher(600): regionserver:40489-0x10172fe1c5e0002, quorum=127.0.0.1:49750, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-17 11:15:09,135 DEBUG [Listener at localhost/45539-EventThread] zookeeper.ZKWatcher(600): master:38451-0x10172fe1c5e0000, quorum=127.0.0.1:49750, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-17 11:15:09,135 DEBUG [Listener at localhost/45539-EventThread] zookeeper.ZKWatcher(600): regionserver:37409-0x10172fe1c5e0001, quorum=127.0.0.1:49750, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-17 11:15:09,135 DEBUG [Listener at localhost/45539-EventThread] zookeeper.ZKWatcher(600): regionserver:39617-0x10172fe1c5e0003, quorum=127.0.0.1:49750, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-17 11:15:09,135 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,38451,1689592503576] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-17 11:15:09,136 DEBUG [RS:3;jenkins-hbase4:35719] zookeeper.ZKUtil(162): regionserver:35719-0x10172fe1c5e000b, quorum=127.0.0.1:49750, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,35719,1689592509057 2023-07-17 11:15:09,136 WARN [RS:3;jenkins-hbase4:35719] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
2023-07-17 11:15:09,136 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,35719,1689592509057] 2023-07-17 11:15:09,136 INFO [RS:3;jenkins-hbase4:35719] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-17 11:15:09,136 DEBUG [RS:3;jenkins-hbase4:35719] regionserver.HRegionServer(1948): logDir=hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/WALs/jenkins-hbase4.apache.org,35719,1689592509057 2023-07-17 11:15:09,136 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,38451,1689592503576] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-17 11:15:09,137 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:39617-0x10172fe1c5e0003, quorum=127.0.0.1:49750, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,37409,1689592505527 2023-07-17 11:15:09,137 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:40489-0x10172fe1c5e0002, quorum=127.0.0.1:49750, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,37409,1689592505527 2023-07-17 11:15:09,137 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:37409-0x10172fe1c5e0001, quorum=127.0.0.1:49750, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,37409,1689592505527 2023-07-17 11:15:09,145 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,38451,1689592503576] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 4 2023-07-17 11:15:09,145 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:40489-0x10172fe1c5e0002, quorum=127.0.0.1:49750, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,40489,1689592505619 2023-07-17 11:15:09,145 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:37409-0x10172fe1c5e0001, quorum=127.0.0.1:49750, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,40489,1689592505619 2023-07-17 11:15:09,145 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:39617-0x10172fe1c5e0003, quorum=127.0.0.1:49750, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,40489,1689592505619 2023-07-17 11:15:09,146 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:39617-0x10172fe1c5e0003, quorum=127.0.0.1:49750, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,39617,1689592505673 2023-07-17 11:15:09,146 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:37409-0x10172fe1c5e0001, quorum=127.0.0.1:49750, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,39617,1689592505673 2023-07-17 11:15:09,147 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:39617-0x10172fe1c5e0003, quorum=127.0.0.1:49750, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,35719,1689592509057 2023-07-17 11:15:09,147 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:37409-0x10172fe1c5e0001, quorum=127.0.0.1:49750, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,35719,1689592509057 2023-07-17 11:15:09,148 DEBUG 
[zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:40489-0x10172fe1c5e0002, quorum=127.0.0.1:49750, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,39617,1689592505673 2023-07-17 11:15:09,149 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:40489-0x10172fe1c5e0002, quorum=127.0.0.1:49750, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,35719,1689592509057 2023-07-17 11:15:09,153 DEBUG [RS:3;jenkins-hbase4:35719] zookeeper.ZKUtil(162): regionserver:35719-0x10172fe1c5e000b, quorum=127.0.0.1:49750, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,37409,1689592505527 2023-07-17 11:15:09,154 DEBUG [RS:3;jenkins-hbase4:35719] zookeeper.ZKUtil(162): regionserver:35719-0x10172fe1c5e000b, quorum=127.0.0.1:49750, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,40489,1689592505619 2023-07-17 11:15:09,154 DEBUG [RS:3;jenkins-hbase4:35719] zookeeper.ZKUtil(162): regionserver:35719-0x10172fe1c5e000b, quorum=127.0.0.1:49750, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,39617,1689592505673 2023-07-17 11:15:09,155 DEBUG [RS:3;jenkins-hbase4:35719] zookeeper.ZKUtil(162): regionserver:35719-0x10172fe1c5e000b, quorum=127.0.0.1:49750, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,35719,1689592509057 2023-07-17 11:15:09,156 DEBUG [RS:3;jenkins-hbase4:35719] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-17 11:15:09,156 INFO [RS:3;jenkins-hbase4:35719] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-17 11:15:09,163 INFO [RS:3;jenkins-hbase4:35719] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-17 11:15:09,165 INFO [RS:3;jenkins-hbase4:35719] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-17 11:15:09,165 INFO [RS:3;jenkins-hbase4:35719] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-17 11:15:09,165 INFO [RS:3;jenkins-hbase4:35719] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-17 11:15:09,167 INFO [RS:3;jenkins-hbase4:35719] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
2023-07-17 11:15:09,168 DEBUG [RS:3;jenkins-hbase4:35719] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-17 11:15:09,168 DEBUG [RS:3;jenkins-hbase4:35719] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-17 11:15:09,168 DEBUG [RS:3;jenkins-hbase4:35719] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-17 11:15:09,168 DEBUG [RS:3;jenkins-hbase4:35719] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-17 11:15:09,168 DEBUG [RS:3;jenkins-hbase4:35719] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-17 11:15:09,168 DEBUG [RS:3;jenkins-hbase4:35719] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-17 11:15:09,168 DEBUG [RS:3;jenkins-hbase4:35719] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-17 11:15:09,168 DEBUG [RS:3;jenkins-hbase4:35719] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-17 11:15:09,168 DEBUG [RS:3;jenkins-hbase4:35719] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-17 11:15:09,169 DEBUG [RS:3;jenkins-hbase4:35719] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-17 11:15:09,174 INFO [RS:3;jenkins-hbase4:35719] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-17 11:15:09,175 INFO [RS:3;jenkins-hbase4:35719] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-17 11:15:09,175 INFO [RS:3;jenkins-hbase4:35719] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-17 11:15:09,187 INFO [RS:3;jenkins-hbase4:35719] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-17 11:15:09,187 INFO [RS:3;jenkins-hbase4:35719] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,35719,1689592509057-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
2023-07-17 11:15:09,198 INFO [RS:3;jenkins-hbase4:35719] regionserver.Replication(203): jenkins-hbase4.apache.org,35719,1689592509057 started 2023-07-17 11:15:09,198 INFO [RS:3;jenkins-hbase4:35719] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,35719,1689592509057, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:35719, sessionid=0x10172fe1c5e000b 2023-07-17 11:15:09,198 DEBUG [RS:3;jenkins-hbase4:35719] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-17 11:15:09,198 DEBUG [RS:3;jenkins-hbase4:35719] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,35719,1689592509057 2023-07-17 11:15:09,198 DEBUG [RS:3;jenkins-hbase4:35719] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,35719,1689592509057' 2023-07-17 11:15:09,198 DEBUG [RS:3;jenkins-hbase4:35719] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-17 11:15:09,198 DEBUG [RS:3;jenkins-hbase4:35719] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-17 11:15:09,199 DEBUG [RS:3;jenkins-hbase4:35719] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-17 11:15:09,199 DEBUG [RS:3;jenkins-hbase4:35719] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-17 11:15:09,199 DEBUG [RS:3;jenkins-hbase4:35719] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,35719,1689592509057 2023-07-17 11:15:09,199 DEBUG [RS:3;jenkins-hbase4:35719] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,35719,1689592509057' 2023-07-17 11:15:09,199 DEBUG [RS:3;jenkins-hbase4:35719] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-17 11:15:09,200 DEBUG [RS:3;jenkins-hbase4:35719] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-17 11:15:09,200 DEBUG [RS:3;jenkins-hbase4:35719] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-17 11:15:09,200 INFO [RS:3;jenkins-hbase4:35719] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-17 11:15:09,200 INFO [RS:3;jenkins-hbase4:35719] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
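[annotation] The entries that follow show the test driving RSGroupAdminService on the master: it adds a "master" group, tries to move the active master's address into it (which fails with the expected ConstraintException during setup), then creates Group_testTableMoveTruncateAndDrop_465521657 and moves two region servers into it. As a rough illustration of the client side of those RPCs, here is a minimal sketch using the RSGroupAdminClient that appears in the stack traces below (an internal client used by these tests); the group name and host:port are placeholders, not values from this run.

```java
import java.util.Collections;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

public class RSGroupSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf)) {
      // RSGroupAdminClient wraps the RSGroupAdminService coprocessor endpoint
      // whose AddRSGroup / MoveServers / ListRSGroupInfos calls are logged above.
      RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);

      // Create a group, then move a region server into it by host:port.
      rsGroupAdmin.addRSGroup("Group_example"); // hypothetical group name
      rsGroupAdmin.moveServers(
          Collections.singleton(Address.fromParts("rs-host.example.org", 16020)), // placeholder server
          "Group_example");

      // Listing groups corresponds to the ListRSGroupInfos requests in the log.
      rsGroupAdmin.listRSGroups().forEach(info ->
          System.out.println(info.getName() + " -> " + info.getServers()));
    }
  }
}
```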
2023-07-17 11:15:09,204 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-17 11:15:09,208 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-17 11:15:09,208 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-17 11:15:09,210 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-17 11:15:09,213 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-17 11:15:09,215 DEBUG [hconnection-0x62be270e-metaLookup-shared--pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-17 11:15:09,219 INFO [RS-EventLoopGroup-4-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:37004, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-17 11:15:09,223 DEBUG [hconnection-0x62be270e-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-17 11:15:09,226 INFO [RS-EventLoopGroup-5-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:35758, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-17 11:15:09,229 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-17 11:15:09,229 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-17 11:15:09,238 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:38451] to rsgroup master 2023-07-17 11:15:09,239 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:38451 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-17 11:15:09,239 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] ipc.CallRunner(144): callId: 20 service: MasterService methodName: ExecMasterService size: 118 connection: 172.31.14.131:36004 deadline: 1689593709237, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:38451 is either offline or it does not exist. 2023-07-17 11:15:09,239 WARN [Listener at localhost/45539] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:38451 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:38451 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-17 11:15:09,241 INFO [Listener at localhost/45539] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-17 11:15:09,242 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-17 11:15:09,243 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-17 11:15:09,243 INFO [Listener at localhost/45539] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:35719, jenkins-hbase4.apache.org:37409, jenkins-hbase4.apache.org:39617, jenkins-hbase4.apache.org:40489], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-17 11:15:09,248 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-17 11:15:09,248 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-17 11:15:09,249 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-17 11:15:09,250 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-17 11:15:09,251 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup Group_testTableMoveTruncateAndDrop_465521657 2023-07-17 11:15:09,254 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testTableMoveTruncateAndDrop_465521657 2023-07-17 11:15:09,257 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-17 11:15:09,258 
DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-17 11:15:09,258 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-17 11:15:09,261 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-17 11:15:09,264 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-17 11:15:09,264 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-17 11:15:09,267 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:37409, jenkins-hbase4.apache.org:35719] to rsgroup Group_testTableMoveTruncateAndDrop_465521657 2023-07-17 11:15:09,270 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testTableMoveTruncateAndDrop_465521657 2023-07-17 11:15:09,271 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-17 11:15:09,271 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-17 11:15:09,272 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-17 11:15:09,275 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group default, current retry=0 2023-07-17 11:15:09,275 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,35719,1689592509057, jenkins-hbase4.apache.org,37409,1689592505527] are moved back to default 2023-07-17 11:15:09,275 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminServer(438): Move servers done: default => Group_testTableMoveTruncateAndDrop_465521657 2023-07-17 11:15:09,275 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-17 11:15:09,279 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-17 11:15:09,279 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-17 11:15:09,282 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=Group_testTableMoveTruncateAndDrop_465521657 2023-07-17 11:15:09,282 
INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-17 11:15:09,297 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-17 11:15:09,299 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] procedure2.ProcedureExecutor(1029): Stored pid=12, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-17 11:15:09,302 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop execute state=CREATE_TABLE_PRE_OPERATION 2023-07-17 11:15:09,303 INFO [RS:3;jenkins-hbase4:35719] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C35719%2C1689592509057, suffix=, logDir=hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/WALs/jenkins-hbase4.apache.org,35719,1689592509057, archiveDir=hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/oldWALs, maxLogs=32 2023-07-17 11:15:09,307 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "Group_testTableMoveTruncateAndDrop" procId is: 12 2023-07-17 11:15:09,307 DEBUG [PEWorker-3] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testTableMoveTruncateAndDrop_465521657 2023-07-17 11:15:09,308 DEBUG [PEWorker-3] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-17 11:15:09,309 DEBUG [PEWorker-3] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-17 11:15:09,309 DEBUG [PEWorker-3] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-17 11:15:09,314 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-17 11:15:09,319 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(1230): Checking to see if procedure is done pid=12 2023-07-17 11:15:09,321 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/.tmp/data/default/Group_testTableMoveTruncateAndDrop/034b6c36a538fbe7eaa2db45406b38cf 2023-07-17 11:15:09,321 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/.tmp/data/default/Group_testTableMoveTruncateAndDrop/9955a2a8b9047c05bc8a065e0532382d 2023-07-17 11:15:09,321 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING 
hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/.tmp/data/default/Group_testTableMoveTruncateAndDrop/7a6d1345ff4b94b9eca1daac256866c8 2023-07-17 11:15:09,322 DEBUG [HFileArchiver-6] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/.tmp/data/default/Group_testTableMoveTruncateAndDrop/2df67c90d80110e60e7f85f3c2b88fff 2023-07-17 11:15:09,322 DEBUG [HFileArchiver-4] backup.HFileArchiver(153): Directory hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/.tmp/data/default/Group_testTableMoveTruncateAndDrop/9955a2a8b9047c05bc8a065e0532382d empty. 2023-07-17 11:15:09,325 DEBUG [HFileArchiver-5] backup.HFileArchiver(153): Directory hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/.tmp/data/default/Group_testTableMoveTruncateAndDrop/7a6d1345ff4b94b9eca1daac256866c8 empty. 2023-07-17 11:15:09,325 DEBUG [HFileArchiver-7] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/.tmp/data/default/Group_testTableMoveTruncateAndDrop/c90225c53eac8fd8778ee6386583fc74 2023-07-17 11:15:09,326 DEBUG [HFileArchiver-4] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/.tmp/data/default/Group_testTableMoveTruncateAndDrop/9955a2a8b9047c05bc8a065e0532382d 2023-07-17 11:15:09,327 DEBUG [HFileArchiver-6] backup.HFileArchiver(153): Directory hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/.tmp/data/default/Group_testTableMoveTruncateAndDrop/2df67c90d80110e60e7f85f3c2b88fff empty. 2023-07-17 11:15:09,327 DEBUG [HFileArchiver-5] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/.tmp/data/default/Group_testTableMoveTruncateAndDrop/7a6d1345ff4b94b9eca1daac256866c8 2023-07-17 11:15:09,328 DEBUG [HFileArchiver-7] backup.HFileArchiver(153): Directory hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/.tmp/data/default/Group_testTableMoveTruncateAndDrop/c90225c53eac8fd8778ee6386583fc74 empty. 2023-07-17 11:15:09,328 DEBUG [HFileArchiver-3] backup.HFileArchiver(153): Directory hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/.tmp/data/default/Group_testTableMoveTruncateAndDrop/034b6c36a538fbe7eaa2db45406b38cf empty. 
2023-07-17 11:15:09,330 DEBUG [HFileArchiver-6] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/.tmp/data/default/Group_testTableMoveTruncateAndDrop/2df67c90d80110e60e7f85f3c2b88fff 2023-07-17 11:15:09,331 DEBUG [HFileArchiver-3] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/.tmp/data/default/Group_testTableMoveTruncateAndDrop/034b6c36a538fbe7eaa2db45406b38cf 2023-07-17 11:15:09,330 DEBUG [HFileArchiver-7] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/.tmp/data/default/Group_testTableMoveTruncateAndDrop/c90225c53eac8fd8778ee6386583fc74 2023-07-17 11:15:09,337 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived Group_testTableMoveTruncateAndDrop regions 2023-07-17 11:15:09,341 DEBUG [RS-EventLoopGroup-7-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:44359,DS-48b75b58-ef13-4942-ab72-3dac38afc190,DISK] 2023-07-17 11:15:09,348 DEBUG [RS-EventLoopGroup-7-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:33373,DS-8dff280f-8f2a-4cd4-a5d8-f08bc923e8c9,DISK] 2023-07-17 11:15:09,351 DEBUG [RS-EventLoopGroup-7-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:34065,DS-80f43b96-b809-42a0-a60d-6774be1cae92,DISK] 2023-07-17 11:15:09,355 INFO [RS:3;jenkins-hbase4:35719] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/WALs/jenkins-hbase4.apache.org,35719,1689592509057/jenkins-hbase4.apache.org%2C35719%2C1689592509057.1689592509306 2023-07-17 11:15:09,359 DEBUG [RS:3;jenkins-hbase4:35719] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:44359,DS-48b75b58-ef13-4942-ab72-3dac38afc190,DISK], DatanodeInfoWithStorage[127.0.0.1:33373,DS-8dff280f-8f2a-4cd4-a5d8-f08bc923e8c9,DISK], DatanodeInfoWithStorage[127.0.0.1:34065,DS-80f43b96-b809-42a0-a60d-6774be1cae92,DISK]] 2023-07-17 11:15:09,377 DEBUG [PEWorker-3] util.FSTableDescriptors(570): Wrote into hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/.tmp/data/default/Group_testTableMoveTruncateAndDrop/.tabledesc/.tableinfo.0000000001 2023-07-17 11:15:09,382 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(7675): creating {ENCODED => 9955a2a8b9047c05bc8a065e0532382d, NAME => 'Group_testTableMoveTruncateAndDrop,aaaaa,1689592509292.9955a2a8b9047c05bc8a065e0532382d.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/.tmp 2023-07-17 
11:15:09,382 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(7675): creating {ENCODED => 034b6c36a538fbe7eaa2db45406b38cf, NAME => 'Group_testTableMoveTruncateAndDrop,,1689592509292.034b6c36a538fbe7eaa2db45406b38cf.', STARTKEY => '', ENDKEY => 'aaaaa'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/.tmp 2023-07-17 11:15:09,384 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(7675): creating {ENCODED => 7a6d1345ff4b94b9eca1daac256866c8, NAME => 'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689592509292.7a6d1345ff4b94b9eca1daac256866c8.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/.tmp 2023-07-17 11:15:09,430 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(1230): Checking to see if procedure is done pid=12 2023-07-17 11:15:09,444 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,aaaaa,1689592509292.9955a2a8b9047c05bc8a065e0532382d.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-17 11:15:09,445 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1604): Closing 9955a2a8b9047c05bc8a065e0532382d, disabling compactions & flushes 2023-07-17 11:15:09,445 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,aaaaa,1689592509292.9955a2a8b9047c05bc8a065e0532382d. 2023-07-17 11:15:09,445 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689592509292.9955a2a8b9047c05bc8a065e0532382d. 2023-07-17 11:15:09,445 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689592509292.9955a2a8b9047c05bc8a065e0532382d. after waiting 0 ms 2023-07-17 11:15:09,446 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,aaaaa,1689592509292.9955a2a8b9047c05bc8a065e0532382d. 2023-07-17 11:15:09,446 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,aaaaa,1689592509292.9955a2a8b9047c05bc8a065e0532382d. 
2023-07-17 11:15:09,446 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1558): Region close journal for 9955a2a8b9047c05bc8a065e0532382d: 2023-07-17 11:15:09,447 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(7675): creating {ENCODED => 2df67c90d80110e60e7f85f3c2b88fff, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689592509292.2df67c90d80110e60e7f85f3c2b88fff.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/.tmp 2023-07-17 11:15:09,451 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,,1689592509292.034b6c36a538fbe7eaa2db45406b38cf.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-17 11:15:09,452 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1604): Closing 034b6c36a538fbe7eaa2db45406b38cf, disabling compactions & flushes 2023-07-17 11:15:09,452 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,,1689592509292.034b6c36a538fbe7eaa2db45406b38cf. 2023-07-17 11:15:09,452 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,,1689592509292.034b6c36a538fbe7eaa2db45406b38cf. 2023-07-17 11:15:09,452 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,,1689592509292.034b6c36a538fbe7eaa2db45406b38cf. after waiting 0 ms 2023-07-17 11:15:09,452 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,,1689592509292.034b6c36a538fbe7eaa2db45406b38cf. 2023-07-17 11:15:09,452 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,,1689592509292.034b6c36a538fbe7eaa2db45406b38cf. 
2023-07-17 11:15:09,452 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1558): Region close journal for 034b6c36a538fbe7eaa2db45406b38cf: 2023-07-17 11:15:09,453 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(7675): creating {ENCODED => c90225c53eac8fd8778ee6386583fc74, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1689592509292.c90225c53eac8fd8778ee6386583fc74.', STARTKEY => 'zzzzz', ENDKEY => ''}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/.tmp 2023-07-17 11:15:09,454 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689592509292.7a6d1345ff4b94b9eca1daac256866c8.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-17 11:15:09,456 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1604): Closing 7a6d1345ff4b94b9eca1daac256866c8, disabling compactions & flushes 2023-07-17 11:15:09,456 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689592509292.7a6d1345ff4b94b9eca1daac256866c8. 2023-07-17 11:15:09,456 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689592509292.7a6d1345ff4b94b9eca1daac256866c8. 2023-07-17 11:15:09,457 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689592509292.7a6d1345ff4b94b9eca1daac256866c8. after waiting 0 ms 2023-07-17 11:15:09,457 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689592509292.7a6d1345ff4b94b9eca1daac256866c8. 2023-07-17 11:15:09,457 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689592509292.7a6d1345ff4b94b9eca1daac256866c8. 
2023-07-17 11:15:09,457 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1558): Region close journal for 7a6d1345ff4b94b9eca1daac256866c8: 2023-07-17 11:15:09,479 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,zzzzz,1689592509292.c90225c53eac8fd8778ee6386583fc74.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-17 11:15:09,480 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1604): Closing c90225c53eac8fd8778ee6386583fc74, disabling compactions & flushes 2023-07-17 11:15:09,480 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,zzzzz,1689592509292.c90225c53eac8fd8778ee6386583fc74. 2023-07-17 11:15:09,480 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689592509292.c90225c53eac8fd8778ee6386583fc74. 2023-07-17 11:15:09,480 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689592509292.c90225c53eac8fd8778ee6386583fc74. after waiting 0 ms 2023-07-17 11:15:09,480 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,zzzzz,1689592509292.c90225c53eac8fd8778ee6386583fc74. 2023-07-17 11:15:09,480 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,zzzzz,1689592509292.c90225c53eac8fd8778ee6386583fc74. 2023-07-17 11:15:09,480 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1558): Region close journal for c90225c53eac8fd8778ee6386583fc74: 2023-07-17 11:15:09,481 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689592509292.2df67c90d80110e60e7f85f3c2b88fff.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-17 11:15:09,481 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1604): Closing 2df67c90d80110e60e7f85f3c2b88fff, disabling compactions & flushes 2023-07-17 11:15:09,481 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689592509292.2df67c90d80110e60e7f85f3c2b88fff. 2023-07-17 11:15:09,481 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689592509292.2df67c90d80110e60e7f85f3c2b88fff. 2023-07-17 11:15:09,481 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689592509292.2df67c90d80110e60e7f85f3c2b88fff. 
after waiting 0 ms 2023-07-17 11:15:09,481 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689592509292.2df67c90d80110e60e7f85f3c2b88fff. 2023-07-17 11:15:09,481 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689592509292.2df67c90d80110e60e7f85f3c2b88fff. 2023-07-17 11:15:09,481 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1558): Region close journal for 2df67c90d80110e60e7f85f3c2b88fff: 2023-07-17 11:15:09,487 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop execute state=CREATE_TABLE_ADD_TO_META 2023-07-17 11:15:09,488 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689592509292.9955a2a8b9047c05bc8a065e0532382d.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689592509488"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689592509488"}]},"ts":"1689592509488"} 2023-07-17 11:15:09,488 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,,1689592509292.034b6c36a538fbe7eaa2db45406b38cf.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689592509488"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689592509488"}]},"ts":"1689592509488"} 2023-07-17 11:15:09,489 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689592509292.7a6d1345ff4b94b9eca1daac256866c8.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689592509488"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689592509488"}]},"ts":"1689592509488"} 2023-07-17 11:15:09,489 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689592509292.c90225c53eac8fd8778ee6386583fc74.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689592509488"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689592509488"}]},"ts":"1689592509488"} 2023-07-17 11:15:09,489 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689592509292.2df67c90d80110e60e7f85f3c2b88fff.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689592509488"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689592509488"}]},"ts":"1689592509488"} 2023-07-17 11:15:09,535 INFO [PEWorker-3] hbase.MetaTableAccessor(1496): Added 5 regions to meta. 
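[annotation] The CreateTableProcedure above writes the FS layout, instantiates and closes the five initial regions (split points 'aaaaa', 'i\xBF\x14i\xBE', 'r\x1C\xC7r\x1B', 'zzzzz'), and adds them to hbase:meta. A client-side sketch of creating a comparably pre-split table with the single 'f' family follows; the table name is a placeholder and the split keys are simplified to ASCII, unlike the binary keys in this run.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.TableDescriptor;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
import org.apache.hadoop.hbase.util.Bytes;

public class CreatePreSplitTableSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Admin admin = conn.getAdmin()) {
      // Single family 'f' with VERSIONS => 1 and REGION_REPLICATION => 1,
      // matching the descriptor printed in the create-table log line.
      TableDescriptor desc = TableDescriptorBuilder
          .newBuilder(TableName.valueOf("Group_example_table")) // hypothetical table name
          .setRegionReplication(1)
          .setColumnFamily(ColumnFamilyDescriptorBuilder.newBuilder(Bytes.toBytes("f"))
              .setMaxVersions(1)
              .build())
          .build();

      // Four split keys produce five regions, mirroring the five
      // RegionOpenAndInit creations recorded above.
      byte[][] splitKeys = {
          Bytes.toBytes("aaaaa"), Bytes.toBytes("ggggg"),
          Bytes.toBytes("rrrrr"), Bytes.toBytes("zzzzz")
      };
      admin.createTable(desc, splitKeys);
    }
  }
}
```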
2023-07-17 11:15:09,537 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-17 11:15:09,537 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689592509537"}]},"ts":"1689592509537"} 2023-07-17 11:15:09,540 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=ENABLING in hbase:meta 2023-07-17 11:15:09,550 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-17 11:15:09,550 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-17 11:15:09,550 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-17 11:15:09,550 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-17 11:15:09,550 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=034b6c36a538fbe7eaa2db45406b38cf, ASSIGN}, {pid=14, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=9955a2a8b9047c05bc8a065e0532382d, ASSIGN}, {pid=15, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=7a6d1345ff4b94b9eca1daac256866c8, ASSIGN}, {pid=16, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=2df67c90d80110e60e7f85f3c2b88fff, ASSIGN}, {pid=17, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=c90225c53eac8fd8778ee6386583fc74, ASSIGN}] 2023-07-17 11:15:09,553 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=034b6c36a538fbe7eaa2db45406b38cf, ASSIGN 2023-07-17 11:15:09,553 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=14, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=9955a2a8b9047c05bc8a065e0532382d, ASSIGN 2023-07-17 11:15:09,554 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=15, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=7a6d1345ff4b94b9eca1daac256866c8, ASSIGN 2023-07-17 11:15:09,554 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=16, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=2df67c90d80110e60e7f85f3c2b88fff, ASSIGN 2023-07-17 11:15:09,555 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=13, ppid=12, 
state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=034b6c36a538fbe7eaa2db45406b38cf, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,40489,1689592505619; forceNewPlan=false, retain=false 2023-07-17 11:15:09,556 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=14, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=9955a2a8b9047c05bc8a065e0532382d, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,39617,1689592505673; forceNewPlan=false, retain=false 2023-07-17 11:15:09,556 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=15, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=7a6d1345ff4b94b9eca1daac256866c8, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,39617,1689592505673; forceNewPlan=false, retain=false 2023-07-17 11:15:09,556 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=16, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=2df67c90d80110e60e7f85f3c2b88fff, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,39617,1689592505673; forceNewPlan=false, retain=false 2023-07-17 11:15:09,557 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=17, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=c90225c53eac8fd8778ee6386583fc74, ASSIGN 2023-07-17 11:15:09,558 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=17, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=c90225c53eac8fd8778ee6386583fc74, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,40489,1689592505619; forceNewPlan=false, retain=false 2023-07-17 11:15:09,634 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(1230): Checking to see if procedure is done pid=12 2023-07-17 11:15:09,705 INFO [jenkins-hbase4:38451] balancer.BaseLoadBalancer(1545): Reassigned 5 regions. 5 retained the pre-restart assignment. 
2023-07-17 11:15:09,709 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=15 updating hbase:meta row=7a6d1345ff4b94b9eca1daac256866c8, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,39617,1689592505673 2023-07-17 11:15:09,709 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=14 updating hbase:meta row=9955a2a8b9047c05bc8a065e0532382d, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,39617,1689592505673 2023-07-17 11:15:09,709 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=17 updating hbase:meta row=c90225c53eac8fd8778ee6386583fc74, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,40489,1689592505619 2023-07-17 11:15:09,709 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=16 updating hbase:meta row=2df67c90d80110e60e7f85f3c2b88fff, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,39617,1689592505673 2023-07-17 11:15:09,709 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689592509292.9955a2a8b9047c05bc8a065e0532382d.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689592509709"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689592509709"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689592509709"}]},"ts":"1689592509709"} 2023-07-17 11:15:09,709 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=13 updating hbase:meta row=034b6c36a538fbe7eaa2db45406b38cf, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,40489,1689592505619 2023-07-17 11:15:09,709 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689592509292.c90225c53eac8fd8778ee6386583fc74.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689592509709"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689592509709"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689592509709"}]},"ts":"1689592509709"} 2023-07-17 11:15:09,709 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689592509292.2df67c90d80110e60e7f85f3c2b88fff.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689592509709"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689592509709"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689592509709"}]},"ts":"1689592509709"} 2023-07-17 11:15:09,709 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689592509292.7a6d1345ff4b94b9eca1daac256866c8.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689592509709"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689592509709"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689592509709"}]},"ts":"1689592509709"} 2023-07-17 11:15:09,709 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,,1689592509292.034b6c36a538fbe7eaa2db45406b38cf.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689592509709"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689592509709"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689592509709"}]},"ts":"1689592509709"} 2023-07-17 11:15:09,712 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=18, ppid=14, state=RUNNABLE; OpenRegionProcedure 
9955a2a8b9047c05bc8a065e0532382d, server=jenkins-hbase4.apache.org,39617,1689592505673}] 2023-07-17 11:15:09,713 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=19, ppid=17, state=RUNNABLE; OpenRegionProcedure c90225c53eac8fd8778ee6386583fc74, server=jenkins-hbase4.apache.org,40489,1689592505619}] 2023-07-17 11:15:09,716 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=20, ppid=15, state=RUNNABLE; OpenRegionProcedure 7a6d1345ff4b94b9eca1daac256866c8, server=jenkins-hbase4.apache.org,39617,1689592505673}] 2023-07-17 11:15:09,717 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=21, ppid=16, state=RUNNABLE; OpenRegionProcedure 2df67c90d80110e60e7f85f3c2b88fff, server=jenkins-hbase4.apache.org,39617,1689592505673}] 2023-07-17 11:15:09,717 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=22, ppid=13, state=RUNNABLE; OpenRegionProcedure 034b6c36a538fbe7eaa2db45406b38cf, server=jenkins-hbase4.apache.org,40489,1689592505619}] 2023-07-17 11:15:09,873 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689592509292.2df67c90d80110e60e7f85f3c2b88fff. 2023-07-17 11:15:09,873 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 2df67c90d80110e60e7f85f3c2b88fff, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689592509292.2df67c90d80110e60e7f85f3c2b88fff.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'} 2023-07-17 11:15:09,874 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,,1689592509292.034b6c36a538fbe7eaa2db45406b38cf. 
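The periodic "Checking to see if procedure is done pid=12" entries are the client-side future of the create call polling the master while the PEWorker threads run the OpenRegionProcedures (pids 18 through 22). A sketch of that client pattern with the asynchronous Admin variant, under the same API assumptions as the create sketch above; the two split keys shown are illustrative only:

    import java.util.concurrent.Future;
    import java.util.concurrent.TimeUnit;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.TableDescriptor;
    import org.apache.hadoop.hbase.util.Bytes;

    class AsyncCreateSketch {
      // Returns once the master reports the CreateTableProcedure (pid=12 here) as finished.
      static void createAndWait(Admin admin, TableDescriptor desc) throws Exception {
        byte[][] splitKeys = {
            Bytes.toBytes("aaaaa"), Bytes.toBytes("zzzzz")  // illustrative; the test's table uses the
                                                            // four split keys listed in the ASSIGN entries
        };
        Future<Void> created = admin.createTableAsync(desc, splitKeys);
        // The returned TableFuture is what keeps asking "Checking to see if procedure is done".
        created.get(60, TimeUnit.SECONDS);
      }
    }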
2023-07-17 11:15:09,874 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 2df67c90d80110e60e7f85f3c2b88fff 2023-07-17 11:15:09,874 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 034b6c36a538fbe7eaa2db45406b38cf, NAME => 'Group_testTableMoveTruncateAndDrop,,1689592509292.034b6c36a538fbe7eaa2db45406b38cf.', STARTKEY => '', ENDKEY => 'aaaaa'} 2023-07-17 11:15:09,874 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689592509292.2df67c90d80110e60e7f85f3c2b88fff.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-17 11:15:09,874 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 2df67c90d80110e60e7f85f3c2b88fff 2023-07-17 11:15:09,874 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 2df67c90d80110e60e7f85f3c2b88fff 2023-07-17 11:15:09,874 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 034b6c36a538fbe7eaa2db45406b38cf 2023-07-17 11:15:09,874 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,,1689592509292.034b6c36a538fbe7eaa2db45406b38cf.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-17 11:15:09,875 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 034b6c36a538fbe7eaa2db45406b38cf 2023-07-17 11:15:09,875 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 034b6c36a538fbe7eaa2db45406b38cf 2023-07-17 11:15:09,876 INFO [StoreOpener-2df67c90d80110e60e7f85f3c2b88fff-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 2df67c90d80110e60e7f85f3c2b88fff 2023-07-17 11:15:09,876 INFO [StoreOpener-034b6c36a538fbe7eaa2db45406b38cf-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 034b6c36a538fbe7eaa2db45406b38cf 2023-07-17 11:15:09,878 DEBUG [StoreOpener-2df67c90d80110e60e7f85f3c2b88fff-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/data/default/Group_testTableMoveTruncateAndDrop/2df67c90d80110e60e7f85f3c2b88fff/f 2023-07-17 11:15:09,878 DEBUG [StoreOpener-2df67c90d80110e60e7f85f3c2b88fff-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/data/default/Group_testTableMoveTruncateAndDrop/2df67c90d80110e60e7f85f3c2b88fff/f 2023-07-17 11:15:09,878 DEBUG [StoreOpener-034b6c36a538fbe7eaa2db45406b38cf-1] util.CommonFSUtils(522): Set storagePolicy=HOT 
for path=hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/data/default/Group_testTableMoveTruncateAndDrop/034b6c36a538fbe7eaa2db45406b38cf/f 2023-07-17 11:15:09,879 DEBUG [StoreOpener-034b6c36a538fbe7eaa2db45406b38cf-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/data/default/Group_testTableMoveTruncateAndDrop/034b6c36a538fbe7eaa2db45406b38cf/f 2023-07-17 11:15:09,879 INFO [StoreOpener-2df67c90d80110e60e7f85f3c2b88fff-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 2df67c90d80110e60e7f85f3c2b88fff columnFamilyName f 2023-07-17 11:15:09,879 INFO [StoreOpener-034b6c36a538fbe7eaa2db45406b38cf-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 034b6c36a538fbe7eaa2db45406b38cf columnFamilyName f 2023-07-17 11:15:09,880 INFO [StoreOpener-2df67c90d80110e60e7f85f3c2b88fff-1] regionserver.HStore(310): Store=2df67c90d80110e60e7f85f3c2b88fff/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-17 11:15:09,881 INFO [StoreOpener-034b6c36a538fbe7eaa2db45406b38cf-1] regionserver.HStore(310): Store=034b6c36a538fbe7eaa2db45406b38cf/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-17 11:15:09,883 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/data/default/Group_testTableMoveTruncateAndDrop/2df67c90d80110e60e7f85f3c2b88fff 2023-07-17 11:15:09,883 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/data/default/Group_testTableMoveTruncateAndDrop/034b6c36a538fbe7eaa2db45406b38cf 2023-07-17 11:15:09,884 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/data/default/Group_testTableMoveTruncateAndDrop/2df67c90d80110e60e7f85f3c2b88fff 
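The CompactionConfiguration(173) entries are printed once per store and echo the effective compaction settings (minFilesToCompact:3, maxFilesToCompact:10, ratio 1.2, and so on). A hedged sketch of the configuration keys those fields usually correspond to; the values below are the defaults being echoed, not settings this test changes:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    class CompactionConfigSketch {
      static Configuration defaults() {
        Configuration conf = HBaseConfiguration.create();
        // These correspond to fields echoed by CompactionConfiguration(173) above:
        conf.setInt("hbase.hstore.compaction.min", 3);        // minFilesToCompact:3
        conf.setInt("hbase.hstore.compaction.max", 10);       // maxFilesToCompact:10
        conf.setFloat("hbase.hstore.compaction.ratio", 1.2f); // ratio 1.200000
        return conf;
      }
    }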
2023-07-17 11:15:09,884 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/data/default/Group_testTableMoveTruncateAndDrop/034b6c36a538fbe7eaa2db45406b38cf 2023-07-17 11:15:09,889 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 2df67c90d80110e60e7f85f3c2b88fff 2023-07-17 11:15:09,889 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 034b6c36a538fbe7eaa2db45406b38cf 2023-07-17 11:15:09,894 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/data/default/Group_testTableMoveTruncateAndDrop/2df67c90d80110e60e7f85f3c2b88fff/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-17 11:15:09,896 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 2df67c90d80110e60e7f85f3c2b88fff; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11366512000, jitterRate=0.05858892202377319}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-17 11:15:09,896 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 2df67c90d80110e60e7f85f3c2b88fff: 2023-07-17 11:15:09,896 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/data/default/Group_testTableMoveTruncateAndDrop/034b6c36a538fbe7eaa2db45406b38cf/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-17 11:15:09,896 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 034b6c36a538fbe7eaa2db45406b38cf; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11431208160, jitterRate=0.06461422145366669}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-17 11:15:09,897 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 034b6c36a538fbe7eaa2db45406b38cf: 2023-07-17 11:15:09,897 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689592509292.2df67c90d80110e60e7f85f3c2b88fff., pid=21, masterSystemTime=1689592509867 2023-07-17 11:15:09,898 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,,1689592509292.034b6c36a538fbe7eaa2db45406b38cf., pid=22, masterSystemTime=1689592509869 2023-07-17 11:15:09,899 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689592509292.2df67c90d80110e60e7f85f3c2b88fff. 2023-07-17 11:15:09,900 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689592509292.2df67c90d80110e60e7f85f3c2b88fff. 
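Each "Post open deploy tasks" entry is followed by a regionState=OPEN update to hbase:meta, after which the layout is visible to any client. A short sketch using the public RegionLocator API; only the table name is taken from this test, the rest is generic:

    import org.apache.hadoop.hbase.HRegionLocation;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.RegionLocator;

    class ListRegionLocationsSketch {
      static void printLocations(Connection conn) throws java.io.IOException {
        TableName tn = TableName.valueOf("Group_testTableMoveTruncateAndDrop");
        try (RegionLocator locator = conn.getRegionLocator(tn)) {
          for (HRegionLocation loc : locator.getAllRegionLocations()) {
            // e.g. 2df67c90d80110e60e7f85f3c2b88fff -> jenkins-hbase4.apache.org,39617,1689592505673
            System.out.println(loc.getRegion().getEncodedName() + " -> " + loc.getServerName());
          }
        }
      }
    }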
2023-07-17 11:15:09,900 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,aaaaa,1689592509292.9955a2a8b9047c05bc8a065e0532382d. 2023-07-17 11:15:09,900 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 9955a2a8b9047c05bc8a065e0532382d, NAME => 'Group_testTableMoveTruncateAndDrop,aaaaa,1689592509292.9955a2a8b9047c05bc8a065e0532382d.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'} 2023-07-17 11:15:09,901 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 9955a2a8b9047c05bc8a065e0532382d 2023-07-17 11:15:09,901 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,aaaaa,1689592509292.9955a2a8b9047c05bc8a065e0532382d.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-17 11:15:09,901 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=16 updating hbase:meta row=2df67c90d80110e60e7f85f3c2b88fff, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,39617,1689592505673 2023-07-17 11:15:09,901 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 9955a2a8b9047c05bc8a065e0532382d 2023-07-17 11:15:09,901 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 9955a2a8b9047c05bc8a065e0532382d 2023-07-17 11:15:09,901 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,,1689592509292.034b6c36a538fbe7eaa2db45406b38cf. 2023-07-17 11:15:09,901 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689592509292.2df67c90d80110e60e7f85f3c2b88fff.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689592509901"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689592509901"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689592509901"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689592509901"}]},"ts":"1689592509901"} 2023-07-17 11:15:09,902 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,,1689592509292.034b6c36a538fbe7eaa2db45406b38cf. 2023-07-17 11:15:09,902 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,zzzzz,1689592509292.c90225c53eac8fd8778ee6386583fc74. 
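The "Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, ..." entries repeat for every store of the table and reflect the block-cache settings of column family f. A hedged sketch of the column-family builder flags these fields are assumed to map to; all values shown are the defaults the log is echoing:

    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptor;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.util.Bytes;

    class CacheConfigSketch {
      // Assumed mapping from the "Created cacheConfig" fields to ColumnFamilyDescriptorBuilder flags.
      static ColumnFamilyDescriptor family() {
        return ColumnFamilyDescriptorBuilder.newBuilder(Bytes.toBytes("f"))
            .setBlockCacheEnabled(true)      // cacheDataOnRead=true
            .setCacheDataOnWrite(false)      // cacheDataOnWrite=false
            .setCacheIndexesOnWrite(false)   // cacheIndexesOnWrite=false
            .setCacheBloomsOnWrite(false)    // cacheBloomsOnWrite=false
            .setEvictBlocksOnClose(false)    // cacheEvictOnClose=false
            .setPrefetchBlocksOnOpen(false)  // prefetchOnOpen=false
            .build();
      }
    }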
2023-07-17 11:15:09,902 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => c90225c53eac8fd8778ee6386583fc74, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1689592509292.c90225c53eac8fd8778ee6386583fc74.', STARTKEY => 'zzzzz', ENDKEY => ''} 2023-07-17 11:15:09,903 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop c90225c53eac8fd8778ee6386583fc74 2023-07-17 11:15:09,903 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,zzzzz,1689592509292.c90225c53eac8fd8778ee6386583fc74.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-17 11:15:09,903 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=13 updating hbase:meta row=034b6c36a538fbe7eaa2db45406b38cf, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,40489,1689592505619 2023-07-17 11:15:09,904 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for c90225c53eac8fd8778ee6386583fc74 2023-07-17 11:15:09,904 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for c90225c53eac8fd8778ee6386583fc74 2023-07-17 11:15:09,904 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,,1689592509292.034b6c36a538fbe7eaa2db45406b38cf.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689592509903"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689592509903"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689592509903"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689592509903"}]},"ts":"1689592509903"} 2023-07-17 11:15:09,907 INFO [StoreOpener-9955a2a8b9047c05bc8a065e0532382d-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 9955a2a8b9047c05bc8a065e0532382d 2023-07-17 11:15:09,913 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=21, resume processing ppid=16 2023-07-17 11:15:09,918 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=16, ppid=12, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=2df67c90d80110e60e7f85f3c2b88fff, ASSIGN in 363 msec 2023-07-17 11:15:09,919 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=21, ppid=16, state=SUCCESS; OpenRegionProcedure 2df67c90d80110e60e7f85f3c2b88fff, server=jenkins-hbase4.apache.org,39617,1689592505673 in 191 msec 2023-07-17 11:15:09,918 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=22, resume processing ppid=13 2023-07-17 11:15:09,920 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=22, ppid=13, state=SUCCESS; OpenRegionProcedure 034b6c36a538fbe7eaa2db45406b38cf, server=jenkins-hbase4.apache.org,40489,1689592505619 in 193 msec 2023-07-17 11:15:09,920 INFO [StoreOpener-c90225c53eac8fd8778ee6386583fc74-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, 
cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region c90225c53eac8fd8778ee6386583fc74 2023-07-17 11:15:09,921 DEBUG [StoreOpener-9955a2a8b9047c05bc8a065e0532382d-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/data/default/Group_testTableMoveTruncateAndDrop/9955a2a8b9047c05bc8a065e0532382d/f 2023-07-17 11:15:09,921 DEBUG [StoreOpener-9955a2a8b9047c05bc8a065e0532382d-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/data/default/Group_testTableMoveTruncateAndDrop/9955a2a8b9047c05bc8a065e0532382d/f 2023-07-17 11:15:09,922 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=13, ppid=12, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=034b6c36a538fbe7eaa2db45406b38cf, ASSIGN in 368 msec 2023-07-17 11:15:09,922 INFO [StoreOpener-9955a2a8b9047c05bc8a065e0532382d-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 9955a2a8b9047c05bc8a065e0532382d columnFamilyName f 2023-07-17 11:15:09,922 DEBUG [StoreOpener-c90225c53eac8fd8778ee6386583fc74-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/data/default/Group_testTableMoveTruncateAndDrop/c90225c53eac8fd8778ee6386583fc74/f 2023-07-17 11:15:09,923 DEBUG [StoreOpener-c90225c53eac8fd8778ee6386583fc74-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/data/default/Group_testTableMoveTruncateAndDrop/c90225c53eac8fd8778ee6386583fc74/f 2023-07-17 11:15:09,923 INFO [StoreOpener-9955a2a8b9047c05bc8a065e0532382d-1] regionserver.HStore(310): Store=9955a2a8b9047c05bc8a065e0532382d/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-17 11:15:09,924 INFO [StoreOpener-c90225c53eac8fd8778ee6386583fc74-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region c90225c53eac8fd8778ee6386583fc74 columnFamilyName f 2023-07-17 11:15:09,925 INFO [StoreOpener-c90225c53eac8fd8778ee6386583fc74-1] 
regionserver.HStore(310): Store=c90225c53eac8fd8778ee6386583fc74/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-17 11:15:09,925 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/data/default/Group_testTableMoveTruncateAndDrop/9955a2a8b9047c05bc8a065e0532382d 2023-07-17 11:15:09,927 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/data/default/Group_testTableMoveTruncateAndDrop/9955a2a8b9047c05bc8a065e0532382d 2023-07-17 11:15:09,927 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/data/default/Group_testTableMoveTruncateAndDrop/c90225c53eac8fd8778ee6386583fc74 2023-07-17 11:15:09,928 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/data/default/Group_testTableMoveTruncateAndDrop/c90225c53eac8fd8778ee6386583fc74 2023-07-17 11:15:09,935 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 9955a2a8b9047c05bc8a065e0532382d 2023-07-17 11:15:09,935 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(1230): Checking to see if procedure is done pid=12 2023-07-17 11:15:09,937 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for c90225c53eac8fd8778ee6386583fc74 2023-07-17 11:15:09,942 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/data/default/Group_testTableMoveTruncateAndDrop/9955a2a8b9047c05bc8a065e0532382d/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-17 11:15:09,942 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/data/default/Group_testTableMoveTruncateAndDrop/c90225c53eac8fd8778ee6386583fc74/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-17 11:15:09,943 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 9955a2a8b9047c05bc8a065e0532382d; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11705222080, jitterRate=0.09013375639915466}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-17 11:15:09,943 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened c90225c53eac8fd8778ee6386583fc74; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11760412000, jitterRate=0.09527371823787689}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-17 11:15:09,943 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 9955a2a8b9047c05bc8a065e0532382d: 
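The "Opened ...; next sequenceid=2" entries also print the effective split-policy chain (SteppingSplitPolicy wrapping IncreasingToUpperBoundRegionSplitPolicy and ConstantSizeRegionSplitPolicy, with a per-region jitter on desiredMaxFileSize). The policy is chosen by configuration; a sketch of the relevant key, assuming the 2.x default is what is in effect here:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    class SplitPolicySketch {
      static Configuration withDefaultSplitPolicy() {
        Configuration conf = HBaseConfiguration.create();
        // SteppingSplitPolicy is the 2.x default; set explicitly here only for illustration.
        conf.set("hbase.regionserver.region.split.policy",
            "org.apache.hadoop.hbase.regionserver.SteppingSplitPolicy");
        return conf;
      }
    }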
2023-07-17 11:15:09,943 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for c90225c53eac8fd8778ee6386583fc74: 2023-07-17 11:15:09,944 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,aaaaa,1689592509292.9955a2a8b9047c05bc8a065e0532382d., pid=18, masterSystemTime=1689592509867 2023-07-17 11:15:09,944 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,zzzzz,1689592509292.c90225c53eac8fd8778ee6386583fc74., pid=19, masterSystemTime=1689592509869 2023-07-17 11:15:09,947 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,zzzzz,1689592509292.c90225c53eac8fd8778ee6386583fc74. 2023-07-17 11:15:09,947 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,zzzzz,1689592509292.c90225c53eac8fd8778ee6386583fc74. 2023-07-17 11:15:09,947 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=17 updating hbase:meta row=c90225c53eac8fd8778ee6386583fc74, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,40489,1689592505619 2023-07-17 11:15:09,948 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689592509292.c90225c53eac8fd8778ee6386583fc74.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689592509947"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689592509947"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689592509947"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689592509947"}]},"ts":"1689592509947"} 2023-07-17 11:15:09,948 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,aaaaa,1689592509292.9955a2a8b9047c05bc8a065e0532382d. 2023-07-17 11:15:09,948 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,aaaaa,1689592509292.9955a2a8b9047c05bc8a065e0532382d. 2023-07-17 11:15:09,948 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689592509292.7a6d1345ff4b94b9eca1daac256866c8. 
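With the last OpenRegionProcedures finishing, the test blocks until every region of the table is reported assigned; that is what the HBaseTestingUtility(3430) "Waiting until all regions ... get assigned. Timeout = 60000ms" entries a little further down record. In test code this is normally a single utility call; a sketch assuming the stock test utility:

    import org.apache.hadoop.hbase.HBaseTestingUtility;
    import org.apache.hadoop.hbase.TableName;

    class WaitForAssignmentSketch {
      static void waitAssigned(HBaseTestingUtility util) throws java.io.IOException {
        // Blocks until hbase:meta and the AssignmentManager agree that every region of the
        // table is open somewhere (the run above reports a 60000 ms timeout).
        util.waitUntilAllRegionsAssigned(TableName.valueOf("Group_testTableMoveTruncateAndDrop"));
      }
    }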
2023-07-17 11:15:09,948 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 7a6d1345ff4b94b9eca1daac256866c8, NAME => 'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689592509292.7a6d1345ff4b94b9eca1daac256866c8.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'} 2023-07-17 11:15:09,949 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 7a6d1345ff4b94b9eca1daac256866c8 2023-07-17 11:15:09,949 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689592509292.7a6d1345ff4b94b9eca1daac256866c8.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-17 11:15:09,949 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 7a6d1345ff4b94b9eca1daac256866c8 2023-07-17 11:15:09,949 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 7a6d1345ff4b94b9eca1daac256866c8 2023-07-17 11:15:09,950 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=14 updating hbase:meta row=9955a2a8b9047c05bc8a065e0532382d, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,39617,1689592505673 2023-07-17 11:15:09,951 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689592509292.9955a2a8b9047c05bc8a065e0532382d.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689592509950"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689592509950"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689592509950"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689592509950"}]},"ts":"1689592509950"} 2023-07-17 11:15:09,953 INFO [StoreOpener-7a6d1345ff4b94b9eca1daac256866c8-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 7a6d1345ff4b94b9eca1daac256866c8 2023-07-17 11:15:09,956 DEBUG [StoreOpener-7a6d1345ff4b94b9eca1daac256866c8-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/data/default/Group_testTableMoveTruncateAndDrop/7a6d1345ff4b94b9eca1daac256866c8/f 2023-07-17 11:15:09,956 DEBUG [StoreOpener-7a6d1345ff4b94b9eca1daac256866c8-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/data/default/Group_testTableMoveTruncateAndDrop/7a6d1345ff4b94b9eca1daac256866c8/f 2023-07-17 11:15:09,957 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=19, resume processing ppid=17 2023-07-17 11:15:09,958 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=19, ppid=17, state=SUCCESS; OpenRegionProcedure c90225c53eac8fd8778ee6386583fc74, server=jenkins-hbase4.apache.org,40489,1689592505619 in 238 msec 2023-07-17 11:15:09,959 INFO [StoreOpener-7a6d1345ff4b94b9eca1daac256866c8-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, 
offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 7a6d1345ff4b94b9eca1daac256866c8 columnFamilyName f 2023-07-17 11:15:09,960 INFO [StoreOpener-7a6d1345ff4b94b9eca1daac256866c8-1] regionserver.HStore(310): Store=7a6d1345ff4b94b9eca1daac256866c8/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-17 11:15:09,961 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/data/default/Group_testTableMoveTruncateAndDrop/7a6d1345ff4b94b9eca1daac256866c8 2023-07-17 11:15:09,963 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=18, resume processing ppid=14 2023-07-17 11:15:09,963 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=18, ppid=14, state=SUCCESS; OpenRegionProcedure 9955a2a8b9047c05bc8a065e0532382d, server=jenkins-hbase4.apache.org,39617,1689592505673 in 242 msec 2023-07-17 11:15:09,963 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=17, ppid=12, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=c90225c53eac8fd8778ee6386583fc74, ASSIGN in 408 msec 2023-07-17 11:15:09,963 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/data/default/Group_testTableMoveTruncateAndDrop/7a6d1345ff4b94b9eca1daac256866c8 2023-07-17 11:15:09,965 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=14, ppid=12, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=9955a2a8b9047c05bc8a065e0532382d, ASSIGN in 413 msec 2023-07-17 11:15:09,969 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 7a6d1345ff4b94b9eca1daac256866c8 2023-07-17 11:15:09,973 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/data/default/Group_testTableMoveTruncateAndDrop/7a6d1345ff4b94b9eca1daac256866c8/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-17 11:15:09,974 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 7a6d1345ff4b94b9eca1daac256866c8; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10034863360, jitterRate=-0.06543052196502686}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-17 11:15:09,974 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 7a6d1345ff4b94b9eca1daac256866c8: 2023-07-17 11:15:09,975 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): 
Post open deploy tasks for Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689592509292.7a6d1345ff4b94b9eca1daac256866c8., pid=20, masterSystemTime=1689592509867 2023-07-17 11:15:09,977 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689592509292.7a6d1345ff4b94b9eca1daac256866c8. 2023-07-17 11:15:09,978 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689592509292.7a6d1345ff4b94b9eca1daac256866c8. 2023-07-17 11:15:09,978 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=15 updating hbase:meta row=7a6d1345ff4b94b9eca1daac256866c8, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,39617,1689592505673 2023-07-17 11:15:09,979 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689592509292.7a6d1345ff4b94b9eca1daac256866c8.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689592509978"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689592509978"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689592509978"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689592509978"}]},"ts":"1689592509978"} 2023-07-17 11:15:09,985 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=20, resume processing ppid=15 2023-07-17 11:15:09,985 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=20, ppid=15, state=SUCCESS; OpenRegionProcedure 7a6d1345ff4b94b9eca1daac256866c8, server=jenkins-hbase4.apache.org,39617,1689592505673 in 265 msec 2023-07-17 11:15:09,989 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=15, resume processing ppid=12 2023-07-17 11:15:09,990 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=15, ppid=12, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=7a6d1345ff4b94b9eca1daac256866c8, ASSIGN in 435 msec 2023-07-17 11:15:09,992 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-17 11:15:09,992 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689592509992"}]},"ts":"1689592509992"} 2023-07-17 11:15:09,994 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=ENABLED in hbase:meta 2023-07-17 11:15:09,998 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop execute state=CREATE_TABLE_POST_OPERATION 2023-07-17 11:15:10,001 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=12, state=SUCCESS; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop in 700 msec 2023-07-17 11:15:10,437 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(1230): Checking to see if procedure is done pid=12 2023-07-17 11:15:10,438 INFO [Listener at localhost/45539] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: 
default:Group_testTableMoveTruncateAndDrop, procId: 12 completed 2023-07-17 11:15:10,438 DEBUG [Listener at localhost/45539] hbase.HBaseTestingUtility(3430): Waiting until all regions of table Group_testTableMoveTruncateAndDrop get assigned. Timeout = 60000ms 2023-07-17 11:15:10,439 INFO [Listener at localhost/45539] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-17 11:15:10,449 INFO [Listener at localhost/45539] hbase.HBaseTestingUtility(3484): All regions for table Group_testTableMoveTruncateAndDrop assigned to meta. Checking AM states. 2023-07-17 11:15:10,450 INFO [Listener at localhost/45539] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-17 11:15:10,450 INFO [Listener at localhost/45539] hbase.HBaseTestingUtility(3504): All regions for table Group_testTableMoveTruncateAndDrop assigned. 2023-07-17 11:15:10,451 INFO [Listener at localhost/45539] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-17 11:15:10,457 DEBUG [Listener at localhost/45539] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-17 11:15:10,466 INFO [RS-EventLoopGroup-7-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:55806, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-17 11:15:10,470 DEBUG [Listener at localhost/45539] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-17 11:15:10,483 INFO [RS-EventLoopGroup-3-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:42764, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-17 11:15:10,485 DEBUG [Listener at localhost/45539] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-17 11:15:10,493 INFO [RS-EventLoopGroup-5-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:37246, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-17 11:15:10,496 DEBUG [Listener at localhost/45539] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-17 11:15:10,512 INFO [RS-EventLoopGroup-4-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:34266, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-17 11:15:10,530 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=Group_testTableMoveTruncateAndDrop 2023-07-17 11:15:10,530 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-17 11:15:10,531 INFO [Listener at localhost/45539] rsgroup.TestRSGroupsAdmin1(307): Moving table Group_testTableMoveTruncateAndDrop to Group_testTableMoveTruncateAndDrop_465521657 2023-07-17 11:15:10,539 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [Group_testTableMoveTruncateAndDrop] to rsgroup Group_testTableMoveTruncateAndDrop_465521657 2023-07-17 11:15:10,543 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: 
/hbase/rsgroup/Group_testTableMoveTruncateAndDrop_465521657 2023-07-17 11:15:10,544 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-17 11:15:10,544 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-17 11:15:10,545 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-17 11:15:10,549 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminServer(339): Moving region(s) for table Group_testTableMoveTruncateAndDrop to RSGroup Group_testTableMoveTruncateAndDrop_465521657 2023-07-17 11:15:10,549 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminServer(345): Moving region 034b6c36a538fbe7eaa2db45406b38cf to RSGroup Group_testTableMoveTruncateAndDrop_465521657 2023-07-17 11:15:10,550 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-17 11:15:10,550 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-17 11:15:10,550 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-17 11:15:10,550 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-17 11:15:10,550 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-17 11:15:10,552 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] procedure2.ProcedureExecutor(1029): Stored pid=23, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=034b6c36a538fbe7eaa2db45406b38cf, REOPEN/MOVE 2023-07-17 11:15:10,553 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=23, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=034b6c36a538fbe7eaa2db45406b38cf, REOPEN/MOVE 2023-07-17 11:15:10,553 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminServer(345): Moving region 9955a2a8b9047c05bc8a065e0532382d to RSGroup Group_testTableMoveTruncateAndDrop_465521657 2023-07-17 11:15:10,554 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-17 11:15:10,554 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-17 11:15:10,554 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-17 11:15:10,554 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-17 11:15:10,554 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-17 11:15:10,555 INFO [PEWorker-3] 
assignment.RegionStateStore(219): pid=23 updating hbase:meta row=034b6c36a538fbe7eaa2db45406b38cf, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,40489,1689592505619 2023-07-17 11:15:10,555 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,,1689592509292.034b6c36a538fbe7eaa2db45406b38cf.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689592510554"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689592510554"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689592510554"}]},"ts":"1689592510554"} 2023-07-17 11:15:10,556 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] procedure2.ProcedureExecutor(1029): Stored pid=24, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=9955a2a8b9047c05bc8a065e0532382d, REOPEN/MOVE 2023-07-17 11:15:10,556 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminServer(345): Moving region 7a6d1345ff4b94b9eca1daac256866c8 to RSGroup Group_testTableMoveTruncateAndDrop_465521657 2023-07-17 11:15:10,556 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=24, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=9955a2a8b9047c05bc8a065e0532382d, REOPEN/MOVE 2023-07-17 11:15:10,557 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-17 11:15:10,557 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-17 11:15:10,557 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-17 11:15:10,557 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-17 11:15:10,557 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-17 11:15:10,558 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=24 updating hbase:meta row=9955a2a8b9047c05bc8a065e0532382d, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,39617,1689592505673 2023-07-17 11:15:10,559 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689592509292.9955a2a8b9047c05bc8a065e0532382d.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689592510558"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689592510558"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689592510558"}]},"ts":"1689592510558"} 2023-07-17 11:15:10,559 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=26, ppid=23, state=RUNNABLE; CloseRegionProcedure 034b6c36a538fbe7eaa2db45406b38cf, server=jenkins-hbase4.apache.org,40489,1689592505619}] 2023-07-17 11:15:10,563 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] procedure2.ProcedureExecutor(1029): Stored pid=25, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=7a6d1345ff4b94b9eca1daac256866c8, REOPEN/MOVE 2023-07-17 11:15:10,564 INFO 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminServer(345): Moving region 2df67c90d80110e60e7f85f3c2b88fff to RSGroup Group_testTableMoveTruncateAndDrop_465521657 2023-07-17 11:15:10,564 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=25, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=7a6d1345ff4b94b9eca1daac256866c8, REOPEN/MOVE 2023-07-17 11:15:10,564 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-17 11:15:10,564 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-17 11:15:10,564 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-17 11:15:10,564 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-17 11:15:10,564 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-17 11:15:10,565 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=28, ppid=24, state=RUNNABLE; CloseRegionProcedure 9955a2a8b9047c05bc8a065e0532382d, server=jenkins-hbase4.apache.org,39617,1689592505673}] 2023-07-17 11:15:10,565 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=25 updating hbase:meta row=7a6d1345ff4b94b9eca1daac256866c8, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,39617,1689592505673 2023-07-17 11:15:10,565 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689592509292.7a6d1345ff4b94b9eca1daac256866c8.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689592510565"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689592510565"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689592510565"}]},"ts":"1689592510565"} 2023-07-17 11:15:10,568 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] procedure2.ProcedureExecutor(1029): Stored pid=27, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=2df67c90d80110e60e7f85f3c2b88fff, REOPEN/MOVE 2023-07-17 11:15:10,568 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminServer(345): Moving region c90225c53eac8fd8778ee6386583fc74 to RSGroup Group_testTableMoveTruncateAndDrop_465521657 2023-07-17 11:15:10,568 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-17 11:15:10,568 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-17 11:15:10,568 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-17 11:15:10,568 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-17 11:15:10,568 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] balancer.BaseLoadBalancer$Cluster(378): 
Number of tables=1, number of hosts=1, number of racks=1 2023-07-17 11:15:10,569 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=30, ppid=25, state=RUNNABLE; CloseRegionProcedure 7a6d1345ff4b94b9eca1daac256866c8, server=jenkins-hbase4.apache.org,39617,1689592505673}] 2023-07-17 11:15:10,569 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=27, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=2df67c90d80110e60e7f85f3c2b88fff, REOPEN/MOVE 2023-07-17 11:15:10,572 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] procedure2.ProcedureExecutor(1029): Stored pid=29, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=c90225c53eac8fd8778ee6386583fc74, REOPEN/MOVE 2023-07-17 11:15:10,573 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminServer(286): Moving 5 region(s) to group Group_testTableMoveTruncateAndDrop_465521657, current retry=0 2023-07-17 11:15:10,573 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=29, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=c90225c53eac8fd8778ee6386583fc74, REOPEN/MOVE 2023-07-17 11:15:10,574 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=27 updating hbase:meta row=2df67c90d80110e60e7f85f3c2b88fff, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,39617,1689592505673 2023-07-17 11:15:10,574 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689592509292.2df67c90d80110e60e7f85f3c2b88fff.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689592510574"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689592510574"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689592510574"}]},"ts":"1689592510574"} 2023-07-17 11:15:10,577 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=29 updating hbase:meta row=c90225c53eac8fd8778ee6386583fc74, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,40489,1689592505619 2023-07-17 11:15:10,577 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689592509292.c90225c53eac8fd8778ee6386583fc74.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689592510577"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689592510577"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689592510577"}]},"ts":"1689592510577"} 2023-07-17 11:15:10,578 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=31, ppid=27, state=RUNNABLE; CloseRegionProcedure 2df67c90d80110e60e7f85f3c2b88fff, server=jenkins-hbase4.apache.org,39617,1689592505673}] 2023-07-17 11:15:10,583 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=32, ppid=29, state=RUNNABLE; CloseRegionProcedure c90225c53eac8fd8778ee6386583fc74, server=jenkins-hbase4.apache.org,40489,1689592505619}] 2023-07-17 11:15:10,731 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 7a6d1345ff4b94b9eca1daac256866c8 2023-07-17 11:15:10,731 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 034b6c36a538fbe7eaa2db45406b38cf 2023-07-17 
11:15:10,733 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 7a6d1345ff4b94b9eca1daac256866c8, disabling compactions & flushes 2023-07-17 11:15:10,733 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 034b6c36a538fbe7eaa2db45406b38cf, disabling compactions & flushes 2023-07-17 11:15:10,733 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689592509292.7a6d1345ff4b94b9eca1daac256866c8. 2023-07-17 11:15:10,733 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,,1689592509292.034b6c36a538fbe7eaa2db45406b38cf. 2023-07-17 11:15:10,734 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689592509292.7a6d1345ff4b94b9eca1daac256866c8. 2023-07-17 11:15:10,734 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,,1689592509292.034b6c36a538fbe7eaa2db45406b38cf. 2023-07-17 11:15:10,734 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689592509292.7a6d1345ff4b94b9eca1daac256866c8. after waiting 0 ms 2023-07-17 11:15:10,734 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,,1689592509292.034b6c36a538fbe7eaa2db45406b38cf. after waiting 0 ms 2023-07-17 11:15:10,734 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689592509292.7a6d1345ff4b94b9eca1daac256866c8. 2023-07-17 11:15:10,734 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,,1689592509292.034b6c36a538fbe7eaa2db45406b38cf. 2023-07-17 11:15:10,755 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/data/default/Group_testTableMoveTruncateAndDrop/7a6d1345ff4b94b9eca1daac256866c8/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-17 11:15:10,756 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/data/default/Group_testTableMoveTruncateAndDrop/034b6c36a538fbe7eaa2db45406b38cf/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-17 11:15:10,757 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,,1689592509292.034b6c36a538fbe7eaa2db45406b38cf. 
2023-07-17 11:15:10,757 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 034b6c36a538fbe7eaa2db45406b38cf: 2023-07-17 11:15:10,757 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 034b6c36a538fbe7eaa2db45406b38cf move to jenkins-hbase4.apache.org,37409,1689592505527 record at close sequenceid=2 2023-07-17 11:15:10,758 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689592509292.7a6d1345ff4b94b9eca1daac256866c8. 2023-07-17 11:15:10,758 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 7a6d1345ff4b94b9eca1daac256866c8: 2023-07-17 11:15:10,758 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 7a6d1345ff4b94b9eca1daac256866c8 move to jenkins-hbase4.apache.org,35719,1689592509057 record at close sequenceid=2 2023-07-17 11:15:10,760 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 034b6c36a538fbe7eaa2db45406b38cf 2023-07-17 11:15:10,760 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close c90225c53eac8fd8778ee6386583fc74 2023-07-17 11:15:10,761 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing c90225c53eac8fd8778ee6386583fc74, disabling compactions & flushes 2023-07-17 11:15:10,762 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,zzzzz,1689592509292.c90225c53eac8fd8778ee6386583fc74. 2023-07-17 11:15:10,762 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689592509292.c90225c53eac8fd8778ee6386583fc74. 2023-07-17 11:15:10,762 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689592509292.c90225c53eac8fd8778ee6386583fc74. after waiting 0 ms 2023-07-17 11:15:10,762 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,zzzzz,1689592509292.c90225c53eac8fd8778ee6386583fc74. 
2023-07-17 11:15:10,762 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=23 updating hbase:meta row=034b6c36a538fbe7eaa2db45406b38cf, regionState=CLOSED 2023-07-17 11:15:10,762 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,,1689592509292.034b6c36a538fbe7eaa2db45406b38cf.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689592510762"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689592510762"}]},"ts":"1689592510762"} 2023-07-17 11:15:10,762 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 7a6d1345ff4b94b9eca1daac256866c8 2023-07-17 11:15:10,762 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 2df67c90d80110e60e7f85f3c2b88fff 2023-07-17 11:15:10,767 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 2df67c90d80110e60e7f85f3c2b88fff, disabling compactions & flushes 2023-07-17 11:15:10,767 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689592509292.2df67c90d80110e60e7f85f3c2b88fff. 2023-07-17 11:15:10,767 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689592509292.2df67c90d80110e60e7f85f3c2b88fff. 2023-07-17 11:15:10,767 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689592509292.2df67c90d80110e60e7f85f3c2b88fff. after waiting 0 ms 2023-07-17 11:15:10,767 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689592509292.2df67c90d80110e60e7f85f3c2b88fff. 
2023-07-17 11:15:10,768 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=25 updating hbase:meta row=7a6d1345ff4b94b9eca1daac256866c8, regionState=CLOSED 2023-07-17 11:15:10,768 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689592509292.7a6d1345ff4b94b9eca1daac256866c8.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689592510768"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689592510768"}]},"ts":"1689592510768"} 2023-07-17 11:15:10,779 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=26, resume processing ppid=23 2023-07-17 11:15:10,779 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=30, resume processing ppid=25 2023-07-17 11:15:10,779 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=26, ppid=23, state=SUCCESS; CloseRegionProcedure 034b6c36a538fbe7eaa2db45406b38cf, server=jenkins-hbase4.apache.org,40489,1689592505619 in 213 msec 2023-07-17 11:15:10,779 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=30, ppid=25, state=SUCCESS; CloseRegionProcedure 7a6d1345ff4b94b9eca1daac256866c8, server=jenkins-hbase4.apache.org,39617,1689592505673 in 206 msec 2023-07-17 11:15:10,781 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=23, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=034b6c36a538fbe7eaa2db45406b38cf, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,37409,1689592505527; forceNewPlan=false, retain=false 2023-07-17 11:15:10,791 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/data/default/Group_testTableMoveTruncateAndDrop/2df67c90d80110e60e7f85f3c2b88fff/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-17 11:15:10,791 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=25, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=7a6d1345ff4b94b9eca1daac256866c8, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,35719,1689592509057; forceNewPlan=false, retain=false 2023-07-17 11:15:10,792 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/data/default/Group_testTableMoveTruncateAndDrop/c90225c53eac8fd8778ee6386583fc74/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-17 11:15:10,793 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689592509292.2df67c90d80110e60e7f85f3c2b88fff. 
2023-07-17 11:15:10,793 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 2df67c90d80110e60e7f85f3c2b88fff: 2023-07-17 11:15:10,793 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 2df67c90d80110e60e7f85f3c2b88fff move to jenkins-hbase4.apache.org,37409,1689592505527 record at close sequenceid=2 2023-07-17 11:15:10,793 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,zzzzz,1689592509292.c90225c53eac8fd8778ee6386583fc74. 2023-07-17 11:15:10,793 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for c90225c53eac8fd8778ee6386583fc74: 2023-07-17 11:15:10,793 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding c90225c53eac8fd8778ee6386583fc74 move to jenkins-hbase4.apache.org,35719,1689592509057 record at close sequenceid=2 2023-07-17 11:15:10,796 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 2df67c90d80110e60e7f85f3c2b88fff 2023-07-17 11:15:10,796 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 9955a2a8b9047c05bc8a065e0532382d 2023-07-17 11:15:10,797 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 9955a2a8b9047c05bc8a065e0532382d, disabling compactions & flushes 2023-07-17 11:15:10,797 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,aaaaa,1689592509292.9955a2a8b9047c05bc8a065e0532382d. 2023-07-17 11:15:10,797 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689592509292.9955a2a8b9047c05bc8a065e0532382d. 2023-07-17 11:15:10,797 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689592509292.9955a2a8b9047c05bc8a065e0532382d. after waiting 0 ms 2023-07-17 11:15:10,797 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,aaaaa,1689592509292.9955a2a8b9047c05bc8a065e0532382d. 
2023-07-17 11:15:10,800 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=27 updating hbase:meta row=2df67c90d80110e60e7f85f3c2b88fff, regionState=CLOSED 2023-07-17 11:15:10,800 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689592509292.2df67c90d80110e60e7f85f3c2b88fff.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689592510799"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689592510799"}]},"ts":"1689592510799"} 2023-07-17 11:15:10,802 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed c90225c53eac8fd8778ee6386583fc74 2023-07-17 11:15:10,802 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=29 updating hbase:meta row=c90225c53eac8fd8778ee6386583fc74, regionState=CLOSED 2023-07-17 11:15:10,802 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689592509292.c90225c53eac8fd8778ee6386583fc74.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689592510802"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689592510802"}]},"ts":"1689592510802"} 2023-07-17 11:15:10,814 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=31, resume processing ppid=27 2023-07-17 11:15:10,814 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=31, ppid=27, state=SUCCESS; CloseRegionProcedure 2df67c90d80110e60e7f85f3c2b88fff, server=jenkins-hbase4.apache.org,39617,1689592505673 in 227 msec 2023-07-17 11:15:10,815 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=32, resume processing ppid=29 2023-07-17 11:15:10,815 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=32, ppid=29, state=SUCCESS; CloseRegionProcedure c90225c53eac8fd8778ee6386583fc74, server=jenkins-hbase4.apache.org,40489,1689592505619 in 224 msec 2023-07-17 11:15:10,816 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=27, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=2df67c90d80110e60e7f85f3c2b88fff, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,37409,1689592505527; forceNewPlan=false, retain=false 2023-07-17 11:15:10,817 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=29, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=c90225c53eac8fd8778ee6386583fc74, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,35719,1689592509057; forceNewPlan=false, retain=false 2023-07-17 11:15:10,820 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/data/default/Group_testTableMoveTruncateAndDrop/9955a2a8b9047c05bc8a065e0532382d/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-17 11:15:10,821 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,aaaaa,1689592509292.9955a2a8b9047c05bc8a065e0532382d. 
2023-07-17 11:15:10,821 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 9955a2a8b9047c05bc8a065e0532382d: 2023-07-17 11:15:10,821 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 9955a2a8b9047c05bc8a065e0532382d move to jenkins-hbase4.apache.org,35719,1689592509057 record at close sequenceid=2 2023-07-17 11:15:10,826 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 9955a2a8b9047c05bc8a065e0532382d 2023-07-17 11:15:10,827 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=24 updating hbase:meta row=9955a2a8b9047c05bc8a065e0532382d, regionState=CLOSED 2023-07-17 11:15:10,827 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689592509292.9955a2a8b9047c05bc8a065e0532382d.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689592510827"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689592510827"}]},"ts":"1689592510827"} 2023-07-17 11:15:10,836 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=28, resume processing ppid=24 2023-07-17 11:15:10,836 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=28, ppid=24, state=SUCCESS; CloseRegionProcedure 9955a2a8b9047c05bc8a065e0532382d, server=jenkins-hbase4.apache.org,39617,1689592505673 in 267 msec 2023-07-17 11:15:10,837 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=24, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=9955a2a8b9047c05bc8a065e0532382d, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,35719,1689592509057; forceNewPlan=false, retain=false 2023-07-17 11:15:10,931 INFO [jenkins-hbase4:38451] balancer.BaseLoadBalancer(1545): Reassigned 5 regions. 5 retained the pre-restart assignment. 
2023-07-17 11:15:10,932 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=27 updating hbase:meta row=2df67c90d80110e60e7f85f3c2b88fff, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,37409,1689592505527 2023-07-17 11:15:10,932 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=23 updating hbase:meta row=034b6c36a538fbe7eaa2db45406b38cf, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,37409,1689592505527 2023-07-17 11:15:10,932 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=29 updating hbase:meta row=c90225c53eac8fd8778ee6386583fc74, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,35719,1689592509057 2023-07-17 11:15:10,932 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689592509292.2df67c90d80110e60e7f85f3c2b88fff.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689592510932"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689592510932"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689592510932"}]},"ts":"1689592510932"} 2023-07-17 11:15:10,932 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689592509292.c90225c53eac8fd8778ee6386583fc74.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689592510932"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689592510932"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689592510932"}]},"ts":"1689592510932"} 2023-07-17 11:15:10,932 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,,1689592509292.034b6c36a538fbe7eaa2db45406b38cf.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689592510932"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689592510932"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689592510932"}]},"ts":"1689592510932"} 2023-07-17 11:15:10,933 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=24 updating hbase:meta row=9955a2a8b9047c05bc8a065e0532382d, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,35719,1689592509057 2023-07-17 11:15:10,932 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=25 updating hbase:meta row=7a6d1345ff4b94b9eca1daac256866c8, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,35719,1689592509057 2023-07-17 11:15:10,933 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689592509292.9955a2a8b9047c05bc8a065e0532382d.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689592510932"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689592510932"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689592510932"}]},"ts":"1689592510932"} 2023-07-17 11:15:10,933 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689592509292.7a6d1345ff4b94b9eca1daac256866c8.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689592510932"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689592510932"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689592510932"}]},"ts":"1689592510932"} 2023-07-17 11:15:10,935 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=33, ppid=27, state=RUNNABLE; OpenRegionProcedure 
2df67c90d80110e60e7f85f3c2b88fff, server=jenkins-hbase4.apache.org,37409,1689592505527}] 2023-07-17 11:15:10,937 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=34, ppid=29, state=RUNNABLE; OpenRegionProcedure c90225c53eac8fd8778ee6386583fc74, server=jenkins-hbase4.apache.org,35719,1689592509057}] 2023-07-17 11:15:10,940 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=35, ppid=23, state=RUNNABLE; OpenRegionProcedure 034b6c36a538fbe7eaa2db45406b38cf, server=jenkins-hbase4.apache.org,37409,1689592505527}] 2023-07-17 11:15:10,942 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=36, ppid=24, state=RUNNABLE; OpenRegionProcedure 9955a2a8b9047c05bc8a065e0532382d, server=jenkins-hbase4.apache.org,35719,1689592509057}] 2023-07-17 11:15:10,943 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=37, ppid=25, state=RUNNABLE; OpenRegionProcedure 7a6d1345ff4b94b9eca1daac256866c8, server=jenkins-hbase4.apache.org,35719,1689592509057}] 2023-07-17 11:15:11,089 DEBUG [RSProcedureDispatcher-pool-1] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,37409,1689592505527 2023-07-17 11:15:11,089 DEBUG [RSProcedureDispatcher-pool-1] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-17 11:15:11,092 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,35719,1689592509057 2023-07-17 11:15:11,093 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-17 11:15:11,093 INFO [RS-EventLoopGroup-3-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:42776, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-17 11:15:11,094 INFO [RS-EventLoopGroup-7-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:55820, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-17 11:15:11,103 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689592509292.7a6d1345ff4b94b9eca1daac256866c8. 
2023-07-17 11:15:11,103 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 7a6d1345ff4b94b9eca1daac256866c8, NAME => 'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689592509292.7a6d1345ff4b94b9eca1daac256866c8.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'} 2023-07-17 11:15:11,103 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 7a6d1345ff4b94b9eca1daac256866c8 2023-07-17 11:15:11,104 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689592509292.7a6d1345ff4b94b9eca1daac256866c8.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-17 11:15:11,104 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 7a6d1345ff4b94b9eca1daac256866c8 2023-07-17 11:15:11,104 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 7a6d1345ff4b94b9eca1daac256866c8 2023-07-17 11:15:11,107 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,,1689592509292.034b6c36a538fbe7eaa2db45406b38cf. 2023-07-17 11:15:11,107 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 034b6c36a538fbe7eaa2db45406b38cf, NAME => 'Group_testTableMoveTruncateAndDrop,,1689592509292.034b6c36a538fbe7eaa2db45406b38cf.', STARTKEY => '', ENDKEY => 'aaaaa'} 2023-07-17 11:15:11,107 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 034b6c36a538fbe7eaa2db45406b38cf 2023-07-17 11:15:11,107 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,,1689592509292.034b6c36a538fbe7eaa2db45406b38cf.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-17 11:15:11,107 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 034b6c36a538fbe7eaa2db45406b38cf 2023-07-17 11:15:11,107 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 034b6c36a538fbe7eaa2db45406b38cf 2023-07-17 11:15:11,111 INFO [StoreOpener-7a6d1345ff4b94b9eca1daac256866c8-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 7a6d1345ff4b94b9eca1daac256866c8 2023-07-17 11:15:11,114 INFO [StoreOpener-034b6c36a538fbe7eaa2db45406b38cf-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 034b6c36a538fbe7eaa2db45406b38cf 2023-07-17 11:15:11,115 DEBUG [StoreOpener-7a6d1345ff4b94b9eca1daac256866c8-1] util.CommonFSUtils(522): Set storagePolicy=HOT for 
path=hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/data/default/Group_testTableMoveTruncateAndDrop/7a6d1345ff4b94b9eca1daac256866c8/f 2023-07-17 11:15:11,115 DEBUG [StoreOpener-7a6d1345ff4b94b9eca1daac256866c8-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/data/default/Group_testTableMoveTruncateAndDrop/7a6d1345ff4b94b9eca1daac256866c8/f 2023-07-17 11:15:11,115 INFO [StoreOpener-7a6d1345ff4b94b9eca1daac256866c8-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 7a6d1345ff4b94b9eca1daac256866c8 columnFamilyName f 2023-07-17 11:15:11,116 DEBUG [StoreOpener-034b6c36a538fbe7eaa2db45406b38cf-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/data/default/Group_testTableMoveTruncateAndDrop/034b6c36a538fbe7eaa2db45406b38cf/f 2023-07-17 11:15:11,116 DEBUG [StoreOpener-034b6c36a538fbe7eaa2db45406b38cf-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/data/default/Group_testTableMoveTruncateAndDrop/034b6c36a538fbe7eaa2db45406b38cf/f 2023-07-17 11:15:11,116 INFO [StoreOpener-7a6d1345ff4b94b9eca1daac256866c8-1] regionserver.HStore(310): Store=7a6d1345ff4b94b9eca1daac256866c8/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-17 11:15:11,119 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/data/default/Group_testTableMoveTruncateAndDrop/7a6d1345ff4b94b9eca1daac256866c8 2023-07-17 11:15:11,119 INFO [StoreOpener-034b6c36a538fbe7eaa2db45406b38cf-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 034b6c36a538fbe7eaa2db45406b38cf columnFamilyName f 2023-07-17 11:15:11,120 INFO [StoreOpener-034b6c36a538fbe7eaa2db45406b38cf-1] regionserver.HStore(310): Store=034b6c36a538fbe7eaa2db45406b38cf/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-17 11:15:11,121 
DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/data/default/Group_testTableMoveTruncateAndDrop/7a6d1345ff4b94b9eca1daac256866c8 2023-07-17 11:15:11,123 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/data/default/Group_testTableMoveTruncateAndDrop/034b6c36a538fbe7eaa2db45406b38cf 2023-07-17 11:15:11,125 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/data/default/Group_testTableMoveTruncateAndDrop/034b6c36a538fbe7eaa2db45406b38cf 2023-07-17 11:15:11,128 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 7a6d1345ff4b94b9eca1daac256866c8 2023-07-17 11:15:11,130 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 7a6d1345ff4b94b9eca1daac256866c8; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10920887360, jitterRate=0.017086893320083618}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-17 11:15:11,130 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 7a6d1345ff4b94b9eca1daac256866c8: 2023-07-17 11:15:11,131 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 034b6c36a538fbe7eaa2db45406b38cf 2023-07-17 11:15:11,132 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689592509292.7a6d1345ff4b94b9eca1daac256866c8., pid=37, masterSystemTime=1689592511092 2023-07-17 11:15:11,134 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 034b6c36a538fbe7eaa2db45406b38cf; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11102006720, jitterRate=0.03395494818687439}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-17 11:15:11,134 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 034b6c36a538fbe7eaa2db45406b38cf: 2023-07-17 11:15:11,139 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689592509292.7a6d1345ff4b94b9eca1daac256866c8. 2023-07-17 11:15:11,139 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,,1689592509292.034b6c36a538fbe7eaa2db45406b38cf., pid=35, masterSystemTime=1689592511088 2023-07-17 11:15:11,139 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689592509292.7a6d1345ff4b94b9eca1daac256866c8. 2023-07-17 11:15:11,140 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,zzzzz,1689592509292.c90225c53eac8fd8778ee6386583fc74. 
2023-07-17 11:15:11,140 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => c90225c53eac8fd8778ee6386583fc74, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1689592509292.c90225c53eac8fd8778ee6386583fc74.', STARTKEY => 'zzzzz', ENDKEY => ''} 2023-07-17 11:15:11,142 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop c90225c53eac8fd8778ee6386583fc74 2023-07-17 11:15:11,142 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=25 updating hbase:meta row=7a6d1345ff4b94b9eca1daac256866c8, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,35719,1689592509057 2023-07-17 11:15:11,142 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,zzzzz,1689592509292.c90225c53eac8fd8778ee6386583fc74.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-17 11:15:11,142 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for c90225c53eac8fd8778ee6386583fc74 2023-07-17 11:15:11,142 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689592509292.7a6d1345ff4b94b9eca1daac256866c8.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689592511142"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689592511142"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689592511142"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689592511142"}]},"ts":"1689592511142"} 2023-07-17 11:15:11,143 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for c90225c53eac8fd8778ee6386583fc74 2023-07-17 11:15:11,145 INFO [StoreOpener-c90225c53eac8fd8778ee6386583fc74-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region c90225c53eac8fd8778ee6386583fc74 2023-07-17 11:15:11,149 DEBUG [StoreOpener-c90225c53eac8fd8778ee6386583fc74-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/data/default/Group_testTableMoveTruncateAndDrop/c90225c53eac8fd8778ee6386583fc74/f 2023-07-17 11:15:11,149 DEBUG [StoreOpener-c90225c53eac8fd8778ee6386583fc74-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/data/default/Group_testTableMoveTruncateAndDrop/c90225c53eac8fd8778ee6386583fc74/f 2023-07-17 11:15:11,149 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=23 updating hbase:meta row=034b6c36a538fbe7eaa2db45406b38cf, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,37409,1689592505527 2023-07-17 11:15:11,150 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put 
{"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,,1689592509292.034b6c36a538fbe7eaa2db45406b38cf.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689592511149"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689592511149"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689592511149"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689592511149"}]},"ts":"1689592511149"} 2023-07-17 11:15:11,150 INFO [StoreOpener-c90225c53eac8fd8778ee6386583fc74-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region c90225c53eac8fd8778ee6386583fc74 columnFamilyName f 2023-07-17 11:15:11,152 INFO [StoreOpener-c90225c53eac8fd8778ee6386583fc74-1] regionserver.HStore(310): Store=c90225c53eac8fd8778ee6386583fc74/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-17 11:15:11,152 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,,1689592509292.034b6c36a538fbe7eaa2db45406b38cf. 2023-07-17 11:15:11,155 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=37, resume processing ppid=25 2023-07-17 11:15:11,155 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=37, ppid=25, state=SUCCESS; OpenRegionProcedure 7a6d1345ff4b94b9eca1daac256866c8, server=jenkins-hbase4.apache.org,35719,1689592509057 in 205 msec 2023-07-17 11:15:11,161 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,,1689592509292.034b6c36a538fbe7eaa2db45406b38cf. 2023-07-17 11:15:11,161 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689592509292.2df67c90d80110e60e7f85f3c2b88fff. 
2023-07-17 11:15:11,161 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 2df67c90d80110e60e7f85f3c2b88fff, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689592509292.2df67c90d80110e60e7f85f3c2b88fff.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'} 2023-07-17 11:15:11,162 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 2df67c90d80110e60e7f85f3c2b88fff 2023-07-17 11:15:11,162 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/data/default/Group_testTableMoveTruncateAndDrop/c90225c53eac8fd8778ee6386583fc74 2023-07-17 11:15:11,162 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689592509292.2df67c90d80110e60e7f85f3c2b88fff.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-17 11:15:11,162 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 2df67c90d80110e60e7f85f3c2b88fff 2023-07-17 11:15:11,162 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 2df67c90d80110e60e7f85f3c2b88fff 2023-07-17 11:15:11,164 INFO [StoreOpener-2df67c90d80110e60e7f85f3c2b88fff-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 2df67c90d80110e60e7f85f3c2b88fff 2023-07-17 11:15:11,165 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/data/default/Group_testTableMoveTruncateAndDrop/c90225c53eac8fd8778ee6386583fc74 2023-07-17 11:15:11,167 DEBUG [StoreOpener-2df67c90d80110e60e7f85f3c2b88fff-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/data/default/Group_testTableMoveTruncateAndDrop/2df67c90d80110e60e7f85f3c2b88fff/f 2023-07-17 11:15:11,167 DEBUG [StoreOpener-2df67c90d80110e60e7f85f3c2b88fff-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/data/default/Group_testTableMoveTruncateAndDrop/2df67c90d80110e60e7f85f3c2b88fff/f 2023-07-17 11:15:11,167 INFO [StoreOpener-2df67c90d80110e60e7f85f3c2b88fff-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 
2df67c90d80110e60e7f85f3c2b88fff columnFamilyName f 2023-07-17 11:15:11,169 INFO [StoreOpener-2df67c90d80110e60e7f85f3c2b88fff-1] regionserver.HStore(310): Store=2df67c90d80110e60e7f85f3c2b88fff/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-17 11:15:11,169 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=25, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=7a6d1345ff4b94b9eca1daac256866c8, REOPEN/MOVE in 598 msec 2023-07-17 11:15:11,174 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=35, resume processing ppid=23 2023-07-17 11:15:11,175 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=35, ppid=23, state=SUCCESS; OpenRegionProcedure 034b6c36a538fbe7eaa2db45406b38cf, server=jenkins-hbase4.apache.org,37409,1689592505527 in 214 msec 2023-07-17 11:15:11,171 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/data/default/Group_testTableMoveTruncateAndDrop/2df67c90d80110e60e7f85f3c2b88fff 2023-07-17 11:15:11,175 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for c90225c53eac8fd8778ee6386583fc74 2023-07-17 11:15:11,177 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=23, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=034b6c36a538fbe7eaa2db45406b38cf, REOPEN/MOVE in 623 msec 2023-07-17 11:15:11,177 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/data/default/Group_testTableMoveTruncateAndDrop/2df67c90d80110e60e7f85f3c2b88fff 2023-07-17 11:15:11,177 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened c90225c53eac8fd8778ee6386583fc74; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11418982560, jitterRate=0.06347562372684479}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-17 11:15:11,177 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for c90225c53eac8fd8778ee6386583fc74: 2023-07-17 11:15:11,178 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,zzzzz,1689592509292.c90225c53eac8fd8778ee6386583fc74., pid=34, masterSystemTime=1689592511092 2023-07-17 11:15:11,181 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,zzzzz,1689592509292.c90225c53eac8fd8778ee6386583fc74. 2023-07-17 11:15:11,181 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,zzzzz,1689592509292.c90225c53eac8fd8778ee6386583fc74. 2023-07-17 11:15:11,181 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,aaaaa,1689592509292.9955a2a8b9047c05bc8a065e0532382d. 
2023-07-17 11:15:11,182 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 9955a2a8b9047c05bc8a065e0532382d, NAME => 'Group_testTableMoveTruncateAndDrop,aaaaa,1689592509292.9955a2a8b9047c05bc8a065e0532382d.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'} 2023-07-17 11:15:11,182 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 9955a2a8b9047c05bc8a065e0532382d 2023-07-17 11:15:11,182 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 2df67c90d80110e60e7f85f3c2b88fff 2023-07-17 11:15:11,182 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,aaaaa,1689592509292.9955a2a8b9047c05bc8a065e0532382d.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-17 11:15:11,182 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=29 updating hbase:meta row=c90225c53eac8fd8778ee6386583fc74, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,35719,1689592509057 2023-07-17 11:15:11,182 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 9955a2a8b9047c05bc8a065e0532382d 2023-07-17 11:15:11,182 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689592509292.c90225c53eac8fd8778ee6386583fc74.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689592511182"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689592511182"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689592511182"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689592511182"}]},"ts":"1689592511182"} 2023-07-17 11:15:11,183 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 9955a2a8b9047c05bc8a065e0532382d 2023-07-17 11:15:11,183 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 2df67c90d80110e60e7f85f3c2b88fff; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9453392960, jitterRate=-0.11958417296409607}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-17 11:15:11,185 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 2df67c90d80110e60e7f85f3c2b88fff: 2023-07-17 11:15:11,186 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689592509292.2df67c90d80110e60e7f85f3c2b88fff., pid=33, masterSystemTime=1689592511088 2023-07-17 11:15:11,187 INFO [StoreOpener-9955a2a8b9047c05bc8a065e0532382d-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 9955a2a8b9047c05bc8a065e0532382d 2023-07-17 11:15:11,189 DEBUG [StoreOpener-9955a2a8b9047c05bc8a065e0532382d-1] util.CommonFSUtils(522): Set storagePolicy=HOT for 
path=hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/data/default/Group_testTableMoveTruncateAndDrop/9955a2a8b9047c05bc8a065e0532382d/f 2023-07-17 11:15:11,189 DEBUG [StoreOpener-9955a2a8b9047c05bc8a065e0532382d-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/data/default/Group_testTableMoveTruncateAndDrop/9955a2a8b9047c05bc8a065e0532382d/f 2023-07-17 11:15:11,190 INFO [StoreOpener-9955a2a8b9047c05bc8a065e0532382d-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 9955a2a8b9047c05bc8a065e0532382d columnFamilyName f 2023-07-17 11:15:11,190 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689592509292.2df67c90d80110e60e7f85f3c2b88fff. 2023-07-17 11:15:11,190 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689592509292.2df67c90d80110e60e7f85f3c2b88fff. 2023-07-17 11:15:11,191 INFO [StoreOpener-9955a2a8b9047c05bc8a065e0532382d-1] regionserver.HStore(310): Store=9955a2a8b9047c05bc8a065e0532382d/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-17 11:15:11,191 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=27 updating hbase:meta row=2df67c90d80110e60e7f85f3c2b88fff, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,37409,1689592505527 2023-07-17 11:15:11,192 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689592509292.2df67c90d80110e60e7f85f3c2b88fff.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689592511191"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689592511191"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689592511191"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689592511191"}]},"ts":"1689592511191"} 2023-07-17 11:15:11,194 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=34, resume processing ppid=29 2023-07-17 11:15:11,194 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=34, ppid=29, state=SUCCESS; OpenRegionProcedure c90225c53eac8fd8778ee6386583fc74, server=jenkins-hbase4.apache.org,35719,1689592509057 in 250 msec 2023-07-17 11:15:11,195 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/data/default/Group_testTableMoveTruncateAndDrop/9955a2a8b9047c05bc8a065e0532382d 2023-07-17 11:15:11,201 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): 
Finished pid=29, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=c90225c53eac8fd8778ee6386583fc74, REOPEN/MOVE in 626 msec 2023-07-17 11:15:11,202 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/data/default/Group_testTableMoveTruncateAndDrop/9955a2a8b9047c05bc8a065e0532382d 2023-07-17 11:15:11,204 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=33, resume processing ppid=27 2023-07-17 11:15:11,205 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=33, ppid=27, state=SUCCESS; OpenRegionProcedure 2df67c90d80110e60e7f85f3c2b88fff, server=jenkins-hbase4.apache.org,37409,1689592505527 in 265 msec 2023-07-17 11:15:11,208 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=27, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=2df67c90d80110e60e7f85f3c2b88fff, REOPEN/MOVE in 641 msec 2023-07-17 11:15:11,209 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 9955a2a8b9047c05bc8a065e0532382d 2023-07-17 11:15:11,211 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 9955a2a8b9047c05bc8a065e0532382d; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11224987040, jitterRate=0.04540838301181793}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-17 11:15:11,211 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 9955a2a8b9047c05bc8a065e0532382d: 2023-07-17 11:15:11,212 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,aaaaa,1689592509292.9955a2a8b9047c05bc8a065e0532382d., pid=36, masterSystemTime=1689592511092 2023-07-17 11:15:11,214 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,aaaaa,1689592509292.9955a2a8b9047c05bc8a065e0532382d. 2023-07-17 11:15:11,215 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,aaaaa,1689592509292.9955a2a8b9047c05bc8a065e0532382d. 
2023-07-17 11:15:11,215 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=24 updating hbase:meta row=9955a2a8b9047c05bc8a065e0532382d, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,35719,1689592509057 2023-07-17 11:15:11,215 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689592509292.9955a2a8b9047c05bc8a065e0532382d.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689592511215"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689592511215"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689592511215"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689592511215"}]},"ts":"1689592511215"} 2023-07-17 11:15:11,221 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=36, resume processing ppid=24 2023-07-17 11:15:11,221 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=36, ppid=24, state=SUCCESS; OpenRegionProcedure 9955a2a8b9047c05bc8a065e0532382d, server=jenkins-hbase4.apache.org,35719,1689592509057 in 275 msec 2023-07-17 11:15:11,223 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=24, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=9955a2a8b9047c05bc8a065e0532382d, REOPEN/MOVE in 667 msec 2023-07-17 11:15:11,574 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] procedure.ProcedureSyncWait(216): waitFor pid=23 2023-07-17 11:15:11,574 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminServer(369): All regions from table(s) [Group_testTableMoveTruncateAndDrop] moved to target group Group_testTableMoveTruncateAndDrop_465521657. 
2023-07-17 11:15:11,575 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-17 11:15:11,583 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-17 11:15:11,583 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-17 11:15:11,588 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=Group_testTableMoveTruncateAndDrop 2023-07-17 11:15:11,588 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-17 11:15:11,589 INFO [Listener at localhost/45539] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-17 11:15:11,599 INFO [Listener at localhost/45539] client.HBaseAdmin$15(890): Started disable of Group_testTableMoveTruncateAndDrop 2023-07-17 11:15:11,605 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable Group_testTableMoveTruncateAndDrop 2023-07-17 11:15:11,613 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] procedure2.ProcedureExecutor(1029): Stored pid=38, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-17 11:15:11,619 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689592511619"}]},"ts":"1689592511619"} 2023-07-17 11:15:11,621 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=DISABLING in hbase:meta 2023-07-17 11:15:11,623 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(1230): Checking to see if procedure is done pid=38 2023-07-17 11:15:11,623 INFO [PEWorker-4] procedure.DisableTableProcedure(293): Set Group_testTableMoveTruncateAndDrop to state=DISABLING 2023-07-17 11:15:11,625 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=39, ppid=38, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=034b6c36a538fbe7eaa2db45406b38cf, UNASSIGN}, {pid=40, ppid=38, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=9955a2a8b9047c05bc8a065e0532382d, UNASSIGN}, {pid=41, ppid=38, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=7a6d1345ff4b94b9eca1daac256866c8, UNASSIGN}, {pid=42, ppid=38, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=2df67c90d80110e60e7f85f3c2b88fff, UNASSIGN}, {pid=43, ppid=38, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure 
table=Group_testTableMoveTruncateAndDrop, region=c90225c53eac8fd8778ee6386583fc74, UNASSIGN}] 2023-07-17 11:15:11,627 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=41, ppid=38, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=7a6d1345ff4b94b9eca1daac256866c8, UNASSIGN 2023-07-17 11:15:11,628 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=43, ppid=38, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=c90225c53eac8fd8778ee6386583fc74, UNASSIGN 2023-07-17 11:15:11,628 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=40, ppid=38, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=9955a2a8b9047c05bc8a065e0532382d, UNASSIGN 2023-07-17 11:15:11,628 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=39, ppid=38, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=034b6c36a538fbe7eaa2db45406b38cf, UNASSIGN 2023-07-17 11:15:11,628 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=42, ppid=38, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=2df67c90d80110e60e7f85f3c2b88fff, UNASSIGN 2023-07-17 11:15:11,629 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=41 updating hbase:meta row=7a6d1345ff4b94b9eca1daac256866c8, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,35719,1689592509057 2023-07-17 11:15:11,629 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689592509292.7a6d1345ff4b94b9eca1daac256866c8.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689592511629"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689592511629"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689592511629"}]},"ts":"1689592511629"} 2023-07-17 11:15:11,631 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=40 updating hbase:meta row=9955a2a8b9047c05bc8a065e0532382d, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,35719,1689592509057 2023-07-17 11:15:11,631 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=39 updating hbase:meta row=034b6c36a538fbe7eaa2db45406b38cf, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,37409,1689592505527 2023-07-17 11:15:11,631 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689592509292.9955a2a8b9047c05bc8a065e0532382d.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689592511631"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689592511631"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689592511631"}]},"ts":"1689592511631"} 2023-07-17 11:15:11,631 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,,1689592509292.034b6c36a538fbe7eaa2db45406b38cf.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689592511631"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689592511631"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689592511631"}]},"ts":"1689592511631"} 2023-07-17 11:15:11,632 INFO 
[PEWorker-4] assignment.RegionStateStore(219): pid=43 updating hbase:meta row=c90225c53eac8fd8778ee6386583fc74, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,35719,1689592509057 2023-07-17 11:15:11,632 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=42 updating hbase:meta row=2df67c90d80110e60e7f85f3c2b88fff, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,37409,1689592505527 2023-07-17 11:15:11,632 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689592509292.c90225c53eac8fd8778ee6386583fc74.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689592511631"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689592511631"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689592511631"}]},"ts":"1689592511631"} 2023-07-17 11:15:11,632 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689592509292.2df67c90d80110e60e7f85f3c2b88fff.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689592511632"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689592511632"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689592511632"}]},"ts":"1689592511632"} 2023-07-17 11:15:11,634 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=44, ppid=41, state=RUNNABLE; CloseRegionProcedure 7a6d1345ff4b94b9eca1daac256866c8, server=jenkins-hbase4.apache.org,35719,1689592509057}] 2023-07-17 11:15:11,637 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=45, ppid=40, state=RUNNABLE; CloseRegionProcedure 9955a2a8b9047c05bc8a065e0532382d, server=jenkins-hbase4.apache.org,35719,1689592509057}] 2023-07-17 11:15:11,638 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=46, ppid=39, state=RUNNABLE; CloseRegionProcedure 034b6c36a538fbe7eaa2db45406b38cf, server=jenkins-hbase4.apache.org,37409,1689592505527}] 2023-07-17 11:15:11,640 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=47, ppid=43, state=RUNNABLE; CloseRegionProcedure c90225c53eac8fd8778ee6386583fc74, server=jenkins-hbase4.apache.org,35719,1689592509057}] 2023-07-17 11:15:11,641 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=48, ppid=42, state=RUNNABLE; CloseRegionProcedure 2df67c90d80110e60e7f85f3c2b88fff, server=jenkins-hbase4.apache.org,37409,1689592505527}] 2023-07-17 11:15:11,724 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(1230): Checking to see if procedure is done pid=38 2023-07-17 11:15:11,787 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 9955a2a8b9047c05bc8a065e0532382d 2023-07-17 11:15:11,788 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 9955a2a8b9047c05bc8a065e0532382d, disabling compactions & flushes 2023-07-17 11:15:11,788 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,aaaaa,1689592509292.9955a2a8b9047c05bc8a065e0532382d. 2023-07-17 11:15:11,788 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689592509292.9955a2a8b9047c05bc8a065e0532382d. 
2023-07-17 11:15:11,788 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689592509292.9955a2a8b9047c05bc8a065e0532382d. after waiting 0 ms 2023-07-17 11:15:11,788 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,aaaaa,1689592509292.9955a2a8b9047c05bc8a065e0532382d. 2023-07-17 11:15:11,795 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 034b6c36a538fbe7eaa2db45406b38cf 2023-07-17 11:15:11,797 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 034b6c36a538fbe7eaa2db45406b38cf, disabling compactions & flushes 2023-07-17 11:15:11,797 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,,1689592509292.034b6c36a538fbe7eaa2db45406b38cf. 2023-07-17 11:15:11,799 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,,1689592509292.034b6c36a538fbe7eaa2db45406b38cf. 2023-07-17 11:15:11,799 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,,1689592509292.034b6c36a538fbe7eaa2db45406b38cf. after waiting 0 ms 2023-07-17 11:15:11,799 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,,1689592509292.034b6c36a538fbe7eaa2db45406b38cf. 2023-07-17 11:15:11,807 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/data/default/Group_testTableMoveTruncateAndDrop/9955a2a8b9047c05bc8a065e0532382d/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-17 11:15:11,808 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,aaaaa,1689592509292.9955a2a8b9047c05bc8a065e0532382d. 2023-07-17 11:15:11,808 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 9955a2a8b9047c05bc8a065e0532382d: 2023-07-17 11:15:11,811 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 9955a2a8b9047c05bc8a065e0532382d 2023-07-17 11:15:11,811 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 7a6d1345ff4b94b9eca1daac256866c8 2023-07-17 11:15:11,813 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 7a6d1345ff4b94b9eca1daac256866c8, disabling compactions & flushes 2023-07-17 11:15:11,813 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689592509292.7a6d1345ff4b94b9eca1daac256866c8. 2023-07-17 11:15:11,813 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689592509292.7a6d1345ff4b94b9eca1daac256866c8. 
2023-07-17 11:15:11,813 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689592509292.7a6d1345ff4b94b9eca1daac256866c8. after waiting 0 ms 2023-07-17 11:15:11,813 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689592509292.7a6d1345ff4b94b9eca1daac256866c8. 2023-07-17 11:15:11,815 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=40 updating hbase:meta row=9955a2a8b9047c05bc8a065e0532382d, regionState=CLOSED 2023-07-17 11:15:11,815 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689592509292.9955a2a8b9047c05bc8a065e0532382d.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689592511815"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689592511815"}]},"ts":"1689592511815"} 2023-07-17 11:15:11,815 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/data/default/Group_testTableMoveTruncateAndDrop/034b6c36a538fbe7eaa2db45406b38cf/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-17 11:15:11,816 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,,1689592509292.034b6c36a538fbe7eaa2db45406b38cf. 2023-07-17 11:15:11,817 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 034b6c36a538fbe7eaa2db45406b38cf: 2023-07-17 11:15:11,820 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 034b6c36a538fbe7eaa2db45406b38cf 2023-07-17 11:15:11,820 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 2df67c90d80110e60e7f85f3c2b88fff 2023-07-17 11:15:11,821 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 2df67c90d80110e60e7f85f3c2b88fff, disabling compactions & flushes 2023-07-17 11:15:11,821 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689592509292.2df67c90d80110e60e7f85f3c2b88fff. 2023-07-17 11:15:11,821 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689592509292.2df67c90d80110e60e7f85f3c2b88fff. 2023-07-17 11:15:11,821 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689592509292.2df67c90d80110e60e7f85f3c2b88fff. after waiting 0 ms 2023-07-17 11:15:11,821 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=39 updating hbase:meta row=034b6c36a538fbe7eaa2db45406b38cf, regionState=CLOSED 2023-07-17 11:15:11,821 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689592509292.2df67c90d80110e60e7f85f3c2b88fff. 
2023-07-17 11:15:11,822 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,,1689592509292.034b6c36a538fbe7eaa2db45406b38cf.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689592511821"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689592511821"}]},"ts":"1689592511821"} 2023-07-17 11:15:11,823 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/data/default/Group_testTableMoveTruncateAndDrop/7a6d1345ff4b94b9eca1daac256866c8/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-17 11:15:11,826 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689592509292.7a6d1345ff4b94b9eca1daac256866c8. 2023-07-17 11:15:11,826 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 7a6d1345ff4b94b9eca1daac256866c8: 2023-07-17 11:15:11,828 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=45, resume processing ppid=40 2023-07-17 11:15:11,828 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=45, ppid=40, state=SUCCESS; CloseRegionProcedure 9955a2a8b9047c05bc8a065e0532382d, server=jenkins-hbase4.apache.org,35719,1689592509057 in 183 msec 2023-07-17 11:15:11,829 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 7a6d1345ff4b94b9eca1daac256866c8 2023-07-17 11:15:11,829 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close c90225c53eac8fd8778ee6386583fc74 2023-07-17 11:15:11,830 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing c90225c53eac8fd8778ee6386583fc74, disabling compactions & flushes 2023-07-17 11:15:11,830 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,zzzzz,1689592509292.c90225c53eac8fd8778ee6386583fc74. 2023-07-17 11:15:11,831 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689592509292.c90225c53eac8fd8778ee6386583fc74. 2023-07-17 11:15:11,831 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689592509292.c90225c53eac8fd8778ee6386583fc74. after waiting 0 ms 2023-07-17 11:15:11,831 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,zzzzz,1689592509292.c90225c53eac8fd8778ee6386583fc74. 
2023-07-17 11:15:11,834 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=40, ppid=38, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=9955a2a8b9047c05bc8a065e0532382d, UNASSIGN in 203 msec 2023-07-17 11:15:11,834 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=41 updating hbase:meta row=7a6d1345ff4b94b9eca1daac256866c8, regionState=CLOSED 2023-07-17 11:15:11,834 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689592509292.7a6d1345ff4b94b9eca1daac256866c8.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689592511834"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689592511834"}]},"ts":"1689592511834"} 2023-07-17 11:15:11,836 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/data/default/Group_testTableMoveTruncateAndDrop/2df67c90d80110e60e7f85f3c2b88fff/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-17 11:15:11,836 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=46, resume processing ppid=39 2023-07-17 11:15:11,836 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=46, ppid=39, state=SUCCESS; CloseRegionProcedure 034b6c36a538fbe7eaa2db45406b38cf, server=jenkins-hbase4.apache.org,37409,1689592505527 in 188 msec 2023-07-17 11:15:11,837 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689592509292.2df67c90d80110e60e7f85f3c2b88fff. 2023-07-17 11:15:11,837 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 2df67c90d80110e60e7f85f3c2b88fff: 2023-07-17 11:15:11,839 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=39, ppid=38, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=034b6c36a538fbe7eaa2db45406b38cf, UNASSIGN in 211 msec 2023-07-17 11:15:11,839 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 2df67c90d80110e60e7f85f3c2b88fff 2023-07-17 11:15:11,840 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/data/default/Group_testTableMoveTruncateAndDrop/c90225c53eac8fd8778ee6386583fc74/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-17 11:15:11,841 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,zzzzz,1689592509292.c90225c53eac8fd8778ee6386583fc74. 
2023-07-17 11:15:11,841 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for c90225c53eac8fd8778ee6386583fc74: 2023-07-17 11:15:11,843 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=42 updating hbase:meta row=2df67c90d80110e60e7f85f3c2b88fff, regionState=CLOSED 2023-07-17 11:15:11,843 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=44, resume processing ppid=41 2023-07-17 11:15:11,843 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689592509292.2df67c90d80110e60e7f85f3c2b88fff.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689592511843"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689592511843"}]},"ts":"1689592511843"} 2023-07-17 11:15:11,843 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=44, ppid=41, state=SUCCESS; CloseRegionProcedure 7a6d1345ff4b94b9eca1daac256866c8, server=jenkins-hbase4.apache.org,35719,1689592509057 in 203 msec 2023-07-17 11:15:11,843 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed c90225c53eac8fd8778ee6386583fc74 2023-07-17 11:15:11,844 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=43 updating hbase:meta row=c90225c53eac8fd8778ee6386583fc74, regionState=CLOSED 2023-07-17 11:15:11,844 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689592509292.c90225c53eac8fd8778ee6386583fc74.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689592511844"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689592511844"}]},"ts":"1689592511844"} 2023-07-17 11:15:11,845 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=41, ppid=38, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=7a6d1345ff4b94b9eca1daac256866c8, UNASSIGN in 218 msec 2023-07-17 11:15:11,848 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=48, resume processing ppid=42 2023-07-17 11:15:11,848 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=48, ppid=42, state=SUCCESS; CloseRegionProcedure 2df67c90d80110e60e7f85f3c2b88fff, server=jenkins-hbase4.apache.org,37409,1689592505527 in 204 msec 2023-07-17 11:15:11,850 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=47, resume processing ppid=43 2023-07-17 11:15:11,850 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=47, ppid=43, state=SUCCESS; CloseRegionProcedure c90225c53eac8fd8778ee6386583fc74, server=jenkins-hbase4.apache.org,35719,1689592509057 in 207 msec 2023-07-17 11:15:11,851 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=42, ppid=38, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=2df67c90d80110e60e7f85f3c2b88fff, UNASSIGN in 223 msec 2023-07-17 11:15:11,853 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=43, resume processing ppid=38 2023-07-17 11:15:11,853 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=43, ppid=38, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=c90225c53eac8fd8778ee6386583fc74, UNASSIGN in 225 msec 2023-07-17 11:15:11,854 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put 
{"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689592511854"}]},"ts":"1689592511854"} 2023-07-17 11:15:11,855 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=DISABLED in hbase:meta 2023-07-17 11:15:11,857 INFO [PEWorker-2] procedure.DisableTableProcedure(305): Set Group_testTableMoveTruncateAndDrop to state=DISABLED 2023-07-17 11:15:11,860 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=38, state=SUCCESS; DisableTableProcedure table=Group_testTableMoveTruncateAndDrop in 252 msec 2023-07-17 11:15:11,926 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(1230): Checking to see if procedure is done pid=38 2023-07-17 11:15:11,926 INFO [Listener at localhost/45539] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:Group_testTableMoveTruncateAndDrop, procId: 38 completed 2023-07-17 11:15:11,928 INFO [Listener at localhost/45539] client.HBaseAdmin$13(770): Started truncating Group_testTableMoveTruncateAndDrop 2023-07-17 11:15:11,934 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.HMaster$6(2260): Client=jenkins//172.31.14.131 truncate Group_testTableMoveTruncateAndDrop 2023-07-17 11:15:11,942 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] procedure2.ProcedureExecutor(1029): Stored pid=49, state=RUNNABLE:TRUNCATE_TABLE_PRE_OPERATION; TruncateTableProcedure (table=Group_testTableMoveTruncateAndDrop preserveSplits=true) 2023-07-17 11:15:11,945 DEBUG [PEWorker-1] procedure.TruncateTableProcedure(87): waiting for 'Group_testTableMoveTruncateAndDrop' regions in transition 2023-07-17 11:15:11,947 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(1230): Checking to see if procedure is done pid=49 2023-07-17 11:15:11,958 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/.tmp/data/default/Group_testTableMoveTruncateAndDrop/9955a2a8b9047c05bc8a065e0532382d 2023-07-17 11:15:11,958 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/.tmp/data/default/Group_testTableMoveTruncateAndDrop/2df67c90d80110e60e7f85f3c2b88fff 2023-07-17 11:15:11,958 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/.tmp/data/default/Group_testTableMoveTruncateAndDrop/c90225c53eac8fd8778ee6386583fc74 2023-07-17 11:15:11,958 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/.tmp/data/default/Group_testTableMoveTruncateAndDrop/7a6d1345ff4b94b9eca1daac256866c8 2023-07-17 11:15:11,958 DEBUG [HFileArchiver-8] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/.tmp/data/default/Group_testTableMoveTruncateAndDrop/034b6c36a538fbe7eaa2db45406b38cf 2023-07-17 11:15:11,963 DEBUG [HFileArchiver-4] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/.tmp/data/default/Group_testTableMoveTruncateAndDrop/2df67c90d80110e60e7f85f3c2b88fff/f, FileablePath, 
hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/.tmp/data/default/Group_testTableMoveTruncateAndDrop/2df67c90d80110e60e7f85f3c2b88fff/recovered.edits] 2023-07-17 11:15:11,963 DEBUG [HFileArchiver-5] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/.tmp/data/default/Group_testTableMoveTruncateAndDrop/c90225c53eac8fd8778ee6386583fc74/f, FileablePath, hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/.tmp/data/default/Group_testTableMoveTruncateAndDrop/c90225c53eac8fd8778ee6386583fc74/recovered.edits] 2023-07-17 11:15:11,963 DEBUG [HFileArchiver-8] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/.tmp/data/default/Group_testTableMoveTruncateAndDrop/034b6c36a538fbe7eaa2db45406b38cf/f, FileablePath, hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/.tmp/data/default/Group_testTableMoveTruncateAndDrop/034b6c36a538fbe7eaa2db45406b38cf/recovered.edits] 2023-07-17 11:15:11,963 DEBUG [HFileArchiver-1] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/.tmp/data/default/Group_testTableMoveTruncateAndDrop/9955a2a8b9047c05bc8a065e0532382d/f, FileablePath, hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/.tmp/data/default/Group_testTableMoveTruncateAndDrop/9955a2a8b9047c05bc8a065e0532382d/recovered.edits] 2023-07-17 11:15:11,963 DEBUG [HFileArchiver-2] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/.tmp/data/default/Group_testTableMoveTruncateAndDrop/7a6d1345ff4b94b9eca1daac256866c8/f, FileablePath, hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/.tmp/data/default/Group_testTableMoveTruncateAndDrop/7a6d1345ff4b94b9eca1daac256866c8/recovered.edits] 2023-07-17 11:15:11,981 DEBUG [HFileArchiver-1] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/.tmp/data/default/Group_testTableMoveTruncateAndDrop/9955a2a8b9047c05bc8a065e0532382d/recovered.edits/7.seqid to hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/archive/data/default/Group_testTableMoveTruncateAndDrop/9955a2a8b9047c05bc8a065e0532382d/recovered.edits/7.seqid 2023-07-17 11:15:11,981 DEBUG [HFileArchiver-4] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/.tmp/data/default/Group_testTableMoveTruncateAndDrop/2df67c90d80110e60e7f85f3c2b88fff/recovered.edits/7.seqid to hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/archive/data/default/Group_testTableMoveTruncateAndDrop/2df67c90d80110e60e7f85f3c2b88fff/recovered.edits/7.seqid 2023-07-17 11:15:11,982 DEBUG [HFileArchiver-2] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/.tmp/data/default/Group_testTableMoveTruncateAndDrop/7a6d1345ff4b94b9eca1daac256866c8/recovered.edits/7.seqid to 
hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/archive/data/default/Group_testTableMoveTruncateAndDrop/7a6d1345ff4b94b9eca1daac256866c8/recovered.edits/7.seqid 2023-07-17 11:15:11,982 DEBUG [HFileArchiver-1] backup.HFileArchiver(596): Deleted hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/.tmp/data/default/Group_testTableMoveTruncateAndDrop/9955a2a8b9047c05bc8a065e0532382d 2023-07-17 11:15:11,982 DEBUG [HFileArchiver-5] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/.tmp/data/default/Group_testTableMoveTruncateAndDrop/c90225c53eac8fd8778ee6386583fc74/recovered.edits/7.seqid to hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/archive/data/default/Group_testTableMoveTruncateAndDrop/c90225c53eac8fd8778ee6386583fc74/recovered.edits/7.seqid 2023-07-17 11:15:11,983 DEBUG [HFileArchiver-4] backup.HFileArchiver(596): Deleted hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/.tmp/data/default/Group_testTableMoveTruncateAndDrop/2df67c90d80110e60e7f85f3c2b88fff 2023-07-17 11:15:11,983 DEBUG [HFileArchiver-5] backup.HFileArchiver(596): Deleted hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/.tmp/data/default/Group_testTableMoveTruncateAndDrop/c90225c53eac8fd8778ee6386583fc74 2023-07-17 11:15:11,985 DEBUG [HFileArchiver-2] backup.HFileArchiver(596): Deleted hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/.tmp/data/default/Group_testTableMoveTruncateAndDrop/7a6d1345ff4b94b9eca1daac256866c8 2023-07-17 11:15:11,986 DEBUG [HFileArchiver-8] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/.tmp/data/default/Group_testTableMoveTruncateAndDrop/034b6c36a538fbe7eaa2db45406b38cf/recovered.edits/7.seqid to hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/archive/data/default/Group_testTableMoveTruncateAndDrop/034b6c36a538fbe7eaa2db45406b38cf/recovered.edits/7.seqid 2023-07-17 11:15:11,987 DEBUG [HFileArchiver-8] backup.HFileArchiver(596): Deleted hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/.tmp/data/default/Group_testTableMoveTruncateAndDrop/034b6c36a538fbe7eaa2db45406b38cf 2023-07-17 11:15:11,987 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(328): Archived Group_testTableMoveTruncateAndDrop regions 2023-07-17 11:15:12,018 WARN [PEWorker-1] procedure.DeleteTableProcedure(384): Deleting some vestigial 5 rows of Group_testTableMoveTruncateAndDrop from hbase:meta 2023-07-17 11:15:12,022 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(421): Removing 'Group_testTableMoveTruncateAndDrop' descriptor. 2023-07-17 11:15:12,023 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(411): Removing 'Group_testTableMoveTruncateAndDrop' from region states. 
2023-07-17 11:15:12,023 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,,1689592509292.034b6c36a538fbe7eaa2db45406b38cf.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689592512023"}]},"ts":"9223372036854775807"} 2023-07-17 11:15:12,024 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689592509292.9955a2a8b9047c05bc8a065e0532382d.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689592512023"}]},"ts":"9223372036854775807"} 2023-07-17 11:15:12,024 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689592509292.7a6d1345ff4b94b9eca1daac256866c8.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689592512023"}]},"ts":"9223372036854775807"} 2023-07-17 11:15:12,024 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689592509292.2df67c90d80110e60e7f85f3c2b88fff.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689592512023"}]},"ts":"9223372036854775807"} 2023-07-17 11:15:12,024 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689592509292.c90225c53eac8fd8778ee6386583fc74.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689592512023"}]},"ts":"9223372036854775807"} 2023-07-17 11:15:12,027 INFO [PEWorker-1] hbase.MetaTableAccessor(1788): Deleted 5 regions from META 2023-07-17 11:15:12,027 DEBUG [PEWorker-1] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => 034b6c36a538fbe7eaa2db45406b38cf, NAME => 'Group_testTableMoveTruncateAndDrop,,1689592509292.034b6c36a538fbe7eaa2db45406b38cf.', STARTKEY => '', ENDKEY => 'aaaaa'}, {ENCODED => 9955a2a8b9047c05bc8a065e0532382d, NAME => 'Group_testTableMoveTruncateAndDrop,aaaaa,1689592509292.9955a2a8b9047c05bc8a065e0532382d.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'}, {ENCODED => 7a6d1345ff4b94b9eca1daac256866c8, NAME => 'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689592509292.7a6d1345ff4b94b9eca1daac256866c8.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'}, {ENCODED => 2df67c90d80110e60e7f85f3c2b88fff, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689592509292.2df67c90d80110e60e7f85f3c2b88fff.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'}, {ENCODED => c90225c53eac8fd8778ee6386583fc74, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1689592509292.c90225c53eac8fd8778ee6386583fc74.', STARTKEY => 'zzzzz', ENDKEY => ''}] 2023-07-17 11:15:12,028 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(415): Marking 'Group_testTableMoveTruncateAndDrop' as deleted. 
2023-07-17 11:15:12,028 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689592512028"}]},"ts":"9223372036854775807"} 2023-07-17 11:15:12,031 INFO [PEWorker-1] hbase.MetaTableAccessor(1658): Deleted table Group_testTableMoveTruncateAndDrop state from META 2023-07-17 11:15:12,039 DEBUG [HFileArchiver-6] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/.tmp/data/default/Group_testTableMoveTruncateAndDrop/e4b5584cf07074ba1e940bd2ffe8188c 2023-07-17 11:15:12,039 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/.tmp/data/default/Group_testTableMoveTruncateAndDrop/e7cdbf5f7db60ca1f8bd006676abb4f7 2023-07-17 11:15:12,039 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/.tmp/data/default/Group_testTableMoveTruncateAndDrop/d4faed819126368bedc0b694e57bf52f 2023-07-17 11:15:12,039 DEBUG [HFileArchiver-7] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/.tmp/data/default/Group_testTableMoveTruncateAndDrop/877b35c308fd8fdc33f99eb3e52a4eb5 2023-07-17 11:15:12,039 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/.tmp/data/default/Group_testTableMoveTruncateAndDrop/f3e7fd65b508e0a1f4e57bcbe5c4303e 2023-07-17 11:15:12,040 DEBUG [HFileArchiver-6] backup.HFileArchiver(153): Directory hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/.tmp/data/default/Group_testTableMoveTruncateAndDrop/e4b5584cf07074ba1e940bd2ffe8188c empty. 2023-07-17 11:15:12,040 DEBUG [HFileArchiver-4] backup.HFileArchiver(153): Directory hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/.tmp/data/default/Group_testTableMoveTruncateAndDrop/d4faed819126368bedc0b694e57bf52f empty. 2023-07-17 11:15:12,041 DEBUG [HFileArchiver-3] backup.HFileArchiver(153): Directory hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/.tmp/data/default/Group_testTableMoveTruncateAndDrop/f3e7fd65b508e0a1f4e57bcbe5c4303e empty. 2023-07-17 11:15:12,041 DEBUG [HFileArchiver-1] backup.HFileArchiver(153): Directory hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/.tmp/data/default/Group_testTableMoveTruncateAndDrop/e7cdbf5f7db60ca1f8bd006676abb4f7 empty. 2023-07-17 11:15:12,041 DEBUG [HFileArchiver-7] backup.HFileArchiver(153): Directory hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/.tmp/data/default/Group_testTableMoveTruncateAndDrop/877b35c308fd8fdc33f99eb3e52a4eb5 empty. 
2023-07-17 11:15:12,041 DEBUG [HFileArchiver-4] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/.tmp/data/default/Group_testTableMoveTruncateAndDrop/d4faed819126368bedc0b694e57bf52f 2023-07-17 11:15:12,041 DEBUG [HFileArchiver-3] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/.tmp/data/default/Group_testTableMoveTruncateAndDrop/f3e7fd65b508e0a1f4e57bcbe5c4303e 2023-07-17 11:15:12,041 DEBUG [HFileArchiver-1] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/.tmp/data/default/Group_testTableMoveTruncateAndDrop/e7cdbf5f7db60ca1f8bd006676abb4f7 2023-07-17 11:15:12,042 DEBUG [HFileArchiver-7] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/.tmp/data/default/Group_testTableMoveTruncateAndDrop/877b35c308fd8fdc33f99eb3e52a4eb5 2023-07-17 11:15:12,042 DEBUG [HFileArchiver-6] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/.tmp/data/default/Group_testTableMoveTruncateAndDrop/e4b5584cf07074ba1e940bd2ffe8188c 2023-07-17 11:15:12,042 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(328): Archived Group_testTableMoveTruncateAndDrop regions 2023-07-17 11:15:12,048 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(1230): Checking to see if procedure is done pid=49 2023-07-17 11:15:12,083 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/.tmp/data/default/Group_testTableMoveTruncateAndDrop/.tabledesc/.tableinfo.0000000001 2023-07-17 11:15:12,086 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(7675): creating {ENCODED => e4b5584cf07074ba1e940bd2ffe8188c, NAME => 'Group_testTableMoveTruncateAndDrop,,1689592511990.e4b5584cf07074ba1e940bd2ffe8188c.', STARTKEY => '', ENDKEY => 'aaaaa'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/.tmp 2023-07-17 11:15:12,086 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(7675): creating {ENCODED => f3e7fd65b508e0a1f4e57bcbe5c4303e, NAME => 'Group_testTableMoveTruncateAndDrop,aaaaa,1689592511990.f3e7fd65b508e0a1f4e57bcbe5c4303e.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/.tmp 2023-07-17 11:15:12,086 INFO 
[RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(7675): creating {ENCODED => 877b35c308fd8fdc33f99eb3e52a4eb5, NAME => 'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689592511990.877b35c308fd8fdc33f99eb3e52a4eb5.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/.tmp 2023-07-17 11:15:12,164 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,,1689592511990.e4b5584cf07074ba1e940bd2ffe8188c.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-17 11:15:12,164 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1604): Closing e4b5584cf07074ba1e940bd2ffe8188c, disabling compactions & flushes 2023-07-17 11:15:12,164 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,,1689592511990.e4b5584cf07074ba1e940bd2ffe8188c. 2023-07-17 11:15:12,164 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,,1689592511990.e4b5584cf07074ba1e940bd2ffe8188c. 2023-07-17 11:15:12,164 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,,1689592511990.e4b5584cf07074ba1e940bd2ffe8188c. after waiting 0 ms 2023-07-17 11:15:12,164 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,,1689592511990.e4b5584cf07074ba1e940bd2ffe8188c. 2023-07-17 11:15:12,164 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,,1689592511990.e4b5584cf07074ba1e940bd2ffe8188c. 
2023-07-17 11:15:12,164 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1558): Region close journal for e4b5584cf07074ba1e940bd2ffe8188c: 2023-07-17 11:15:12,165 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(7675): creating {ENCODED => e7cdbf5f7db60ca1f8bd006676abb4f7, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689592511990.e7cdbf5f7db60ca1f8bd006676abb4f7.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/.tmp 2023-07-17 11:15:12,169 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689592511990.877b35c308fd8fdc33f99eb3e52a4eb5.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-17 11:15:12,169 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1604): Closing 877b35c308fd8fdc33f99eb3e52a4eb5, disabling compactions & flushes 2023-07-17 11:15:12,169 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689592511990.877b35c308fd8fdc33f99eb3e52a4eb5. 2023-07-17 11:15:12,169 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689592511990.877b35c308fd8fdc33f99eb3e52a4eb5. 2023-07-17 11:15:12,169 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689592511990.877b35c308fd8fdc33f99eb3e52a4eb5. after waiting 0 ms 2023-07-17 11:15:12,169 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689592511990.877b35c308fd8fdc33f99eb3e52a4eb5. 2023-07-17 11:15:12,169 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689592511990.877b35c308fd8fdc33f99eb3e52a4eb5. 
2023-07-17 11:15:12,169 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1558): Region close journal for 877b35c308fd8fdc33f99eb3e52a4eb5: 2023-07-17 11:15:12,170 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(7675): creating {ENCODED => d4faed819126368bedc0b694e57bf52f, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1689592511990.d4faed819126368bedc0b694e57bf52f.', STARTKEY => 'zzzzz', ENDKEY => ''}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/.tmp 2023-07-17 11:15:12,177 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,aaaaa,1689592511990.f3e7fd65b508e0a1f4e57bcbe5c4303e.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-17 11:15:12,177 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1604): Closing f3e7fd65b508e0a1f4e57bcbe5c4303e, disabling compactions & flushes 2023-07-17 11:15:12,177 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,aaaaa,1689592511990.f3e7fd65b508e0a1f4e57bcbe5c4303e. 2023-07-17 11:15:12,177 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689592511990.f3e7fd65b508e0a1f4e57bcbe5c4303e. 2023-07-17 11:15:12,177 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689592511990.f3e7fd65b508e0a1f4e57bcbe5c4303e. after waiting 0 ms 2023-07-17 11:15:12,177 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,aaaaa,1689592511990.f3e7fd65b508e0a1f4e57bcbe5c4303e. 2023-07-17 11:15:12,177 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,aaaaa,1689592511990.f3e7fd65b508e0a1f4e57bcbe5c4303e. 
2023-07-17 11:15:12,177 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1558): Region close journal for f3e7fd65b508e0a1f4e57bcbe5c4303e: 2023-07-17 11:15:12,218 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,zzzzz,1689592511990.d4faed819126368bedc0b694e57bf52f.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-17 11:15:12,218 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1604): Closing d4faed819126368bedc0b694e57bf52f, disabling compactions & flushes 2023-07-17 11:15:12,218 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,zzzzz,1689592511990.d4faed819126368bedc0b694e57bf52f. 2023-07-17 11:15:12,218 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689592511990.d4faed819126368bedc0b694e57bf52f. 2023-07-17 11:15:12,218 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689592511990.d4faed819126368bedc0b694e57bf52f. after waiting 0 ms 2023-07-17 11:15:12,218 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,zzzzz,1689592511990.d4faed819126368bedc0b694e57bf52f. 2023-07-17 11:15:12,218 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,zzzzz,1689592511990.d4faed819126368bedc0b694e57bf52f. 2023-07-17 11:15:12,218 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1558): Region close journal for d4faed819126368bedc0b694e57bf52f: 2023-07-17 11:15:12,219 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689592511990.e7cdbf5f7db60ca1f8bd006676abb4f7.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-17 11:15:12,219 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1604): Closing e7cdbf5f7db60ca1f8bd006676abb4f7, disabling compactions & flushes 2023-07-17 11:15:12,219 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689592511990.e7cdbf5f7db60ca1f8bd006676abb4f7. 2023-07-17 11:15:12,219 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689592511990.e7cdbf5f7db60ca1f8bd006676abb4f7. 2023-07-17 11:15:12,219 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689592511990.e7cdbf5f7db60ca1f8bd006676abb4f7. 
after waiting 0 ms 2023-07-17 11:15:12,219 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689592511990.e7cdbf5f7db60ca1f8bd006676abb4f7. 2023-07-17 11:15:12,219 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689592511990.e7cdbf5f7db60ca1f8bd006676abb4f7. 2023-07-17 11:15:12,219 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1558): Region close journal for e7cdbf5f7db60ca1f8bd006676abb4f7: 2023-07-17 11:15:12,224 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,,1689592511990.e4b5584cf07074ba1e940bd2ffe8188c.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689592512224"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689592512224"}]},"ts":"1689592512224"} 2023-07-17 11:15:12,225 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689592511990.877b35c308fd8fdc33f99eb3e52a4eb5.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689592512224"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689592512224"}]},"ts":"1689592512224"} 2023-07-17 11:15:12,225 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689592511990.f3e7fd65b508e0a1f4e57bcbe5c4303e.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689592512224"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689592512224"}]},"ts":"1689592512224"} 2023-07-17 11:15:12,225 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689592511990.d4faed819126368bedc0b694e57bf52f.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689592512224"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689592512224"}]},"ts":"1689592512224"} 2023-07-17 11:15:12,225 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689592511990.e7cdbf5f7db60ca1f8bd006676abb4f7.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689592512224"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689592512224"}]},"ts":"1689592512224"} 2023-07-17 11:15:12,229 INFO [PEWorker-1] hbase.MetaTableAccessor(1496): Added 5 regions to meta. 
2023-07-17 11:15:12,230 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689592512230"}]},"ts":"1689592512230"} 2023-07-17 11:15:12,233 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=ENABLING in hbase:meta 2023-07-17 11:15:12,238 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-17 11:15:12,238 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-17 11:15:12,238 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-17 11:15:12,238 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-17 11:15:12,242 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=50, ppid=49, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=e4b5584cf07074ba1e940bd2ffe8188c, ASSIGN}, {pid=51, ppid=49, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=f3e7fd65b508e0a1f4e57bcbe5c4303e, ASSIGN}, {pid=52, ppid=49, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=877b35c308fd8fdc33f99eb3e52a4eb5, ASSIGN}, {pid=53, ppid=49, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=e7cdbf5f7db60ca1f8bd006676abb4f7, ASSIGN}, {pid=54, ppid=49, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=d4faed819126368bedc0b694e57bf52f, ASSIGN}] 2023-07-17 11:15:12,245 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=50, ppid=49, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=e4b5584cf07074ba1e940bd2ffe8188c, ASSIGN 2023-07-17 11:15:12,247 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=51, ppid=49, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=f3e7fd65b508e0a1f4e57bcbe5c4303e, ASSIGN 2023-07-17 11:15:12,247 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=50, ppid=49, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=e4b5584cf07074ba1e940bd2ffe8188c, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,35719,1689592509057; forceNewPlan=false, retain=false 2023-07-17 11:15:12,247 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=52, ppid=49, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=877b35c308fd8fdc33f99eb3e52a4eb5, ASSIGN 2023-07-17 11:15:12,247 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=53, ppid=49, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, 
region=e7cdbf5f7db60ca1f8bd006676abb4f7, ASSIGN 2023-07-17 11:15:12,248 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=54, ppid=49, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=d4faed819126368bedc0b694e57bf52f, ASSIGN 2023-07-17 11:15:12,248 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=51, ppid=49, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=f3e7fd65b508e0a1f4e57bcbe5c4303e, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,35719,1689592509057; forceNewPlan=false, retain=false 2023-07-17 11:15:12,249 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=53, ppid=49, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=e7cdbf5f7db60ca1f8bd006676abb4f7, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,35719,1689592509057; forceNewPlan=false, retain=false 2023-07-17 11:15:12,249 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=52, ppid=49, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=877b35c308fd8fdc33f99eb3e52a4eb5, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,37409,1689592505527; forceNewPlan=false, retain=false 2023-07-17 11:15:12,250 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(1230): Checking to see if procedure is done pid=49 2023-07-17 11:15:12,252 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=54, ppid=49, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=d4faed819126368bedc0b694e57bf52f, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,37409,1689592505527; forceNewPlan=false, retain=false 2023-07-17 11:15:12,397 INFO [jenkins-hbase4:38451] balancer.BaseLoadBalancer(1545): Reassigned 5 regions. 5 retained the pre-restart assignment. 
2023-07-17 11:15:12,401 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=51 updating hbase:meta row=f3e7fd65b508e0a1f4e57bcbe5c4303e, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,35719,1689592509057 2023-07-17 11:15:12,401 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=52 updating hbase:meta row=877b35c308fd8fdc33f99eb3e52a4eb5, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,37409,1689592505527 2023-07-17 11:15:12,401 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=50 updating hbase:meta row=e4b5584cf07074ba1e940bd2ffe8188c, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,35719,1689592509057 2023-07-17 11:15:12,401 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=53 updating hbase:meta row=e7cdbf5f7db60ca1f8bd006676abb4f7, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,35719,1689592509057 2023-07-17 11:15:12,401 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689592511990.877b35c308fd8fdc33f99eb3e52a4eb5.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689592512401"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689592512401"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689592512401"}]},"ts":"1689592512401"} 2023-07-17 11:15:12,401 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689592511990.f3e7fd65b508e0a1f4e57bcbe5c4303e.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689592512401"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689592512401"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689592512401"}]},"ts":"1689592512401"} 2023-07-17 11:15:12,401 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=54 updating hbase:meta row=d4faed819126368bedc0b694e57bf52f, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,37409,1689592505527 2023-07-17 11:15:12,401 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689592511990.e7cdbf5f7db60ca1f8bd006676abb4f7.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689592512401"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689592512401"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689592512401"}]},"ts":"1689592512401"} 2023-07-17 11:15:12,401 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689592511990.d4faed819126368bedc0b694e57bf52f.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689592512401"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689592512401"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689592512401"}]},"ts":"1689592512401"} 2023-07-17 11:15:12,401 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,,1689592511990.e4b5584cf07074ba1e940bd2ffe8188c.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689592512401"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689592512401"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689592512401"}]},"ts":"1689592512401"} 2023-07-17 11:15:12,404 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=55, ppid=52, state=RUNNABLE; OpenRegionProcedure 
877b35c308fd8fdc33f99eb3e52a4eb5, server=jenkins-hbase4.apache.org,37409,1689592505527}] 2023-07-17 11:15:12,405 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=56, ppid=51, state=RUNNABLE; OpenRegionProcedure f3e7fd65b508e0a1f4e57bcbe5c4303e, server=jenkins-hbase4.apache.org,35719,1689592509057}] 2023-07-17 11:15:12,411 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=57, ppid=53, state=RUNNABLE; OpenRegionProcedure e7cdbf5f7db60ca1f8bd006676abb4f7, server=jenkins-hbase4.apache.org,35719,1689592509057}] 2023-07-17 11:15:12,413 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=58, ppid=54, state=RUNNABLE; OpenRegionProcedure d4faed819126368bedc0b694e57bf52f, server=jenkins-hbase4.apache.org,37409,1689592505527}] 2023-07-17 11:15:12,418 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=59, ppid=50, state=RUNNABLE; OpenRegionProcedure e4b5584cf07074ba1e940bd2ffe8188c, server=jenkins-hbase4.apache.org,35719,1689592509057}] 2023-07-17 11:15:12,552 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(1230): Checking to see if procedure is done pid=49 2023-07-17 11:15:12,561 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689592511990.877b35c308fd8fdc33f99eb3e52a4eb5. 2023-07-17 11:15:12,562 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 877b35c308fd8fdc33f99eb3e52a4eb5, NAME => 'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689592511990.877b35c308fd8fdc33f99eb3e52a4eb5.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'} 2023-07-17 11:15:12,562 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 877b35c308fd8fdc33f99eb3e52a4eb5 2023-07-17 11:15:12,562 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689592511990.e7cdbf5f7db60ca1f8bd006676abb4f7. 
2023-07-17 11:15:12,562 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689592511990.877b35c308fd8fdc33f99eb3e52a4eb5.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-17 11:15:12,562 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 877b35c308fd8fdc33f99eb3e52a4eb5 2023-07-17 11:15:12,562 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 877b35c308fd8fdc33f99eb3e52a4eb5 2023-07-17 11:15:12,562 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => e7cdbf5f7db60ca1f8bd006676abb4f7, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689592511990.e7cdbf5f7db60ca1f8bd006676abb4f7.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'} 2023-07-17 11:15:12,563 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop e7cdbf5f7db60ca1f8bd006676abb4f7 2023-07-17 11:15:12,563 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689592511990.e7cdbf5f7db60ca1f8bd006676abb4f7.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-17 11:15:12,563 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for e7cdbf5f7db60ca1f8bd006676abb4f7 2023-07-17 11:15:12,563 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for e7cdbf5f7db60ca1f8bd006676abb4f7 2023-07-17 11:15:12,564 INFO [StoreOpener-877b35c308fd8fdc33f99eb3e52a4eb5-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 877b35c308fd8fdc33f99eb3e52a4eb5 2023-07-17 11:15:12,564 INFO [StoreOpener-e7cdbf5f7db60ca1f8bd006676abb4f7-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region e7cdbf5f7db60ca1f8bd006676abb4f7 2023-07-17 11:15:12,566 DEBUG [StoreOpener-877b35c308fd8fdc33f99eb3e52a4eb5-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/data/default/Group_testTableMoveTruncateAndDrop/877b35c308fd8fdc33f99eb3e52a4eb5/f 2023-07-17 11:15:12,566 DEBUG [StoreOpener-877b35c308fd8fdc33f99eb3e52a4eb5-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/data/default/Group_testTableMoveTruncateAndDrop/877b35c308fd8fdc33f99eb3e52a4eb5/f 2023-07-17 11:15:12,566 INFO [StoreOpener-877b35c308fd8fdc33f99eb3e52a4eb5-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 
2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 877b35c308fd8fdc33f99eb3e52a4eb5 columnFamilyName f 2023-07-17 11:15:12,567 DEBUG [StoreOpener-e7cdbf5f7db60ca1f8bd006676abb4f7-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/data/default/Group_testTableMoveTruncateAndDrop/e7cdbf5f7db60ca1f8bd006676abb4f7/f 2023-07-17 11:15:12,567 DEBUG [StoreOpener-e7cdbf5f7db60ca1f8bd006676abb4f7-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/data/default/Group_testTableMoveTruncateAndDrop/e7cdbf5f7db60ca1f8bd006676abb4f7/f 2023-07-17 11:15:12,567 INFO [StoreOpener-877b35c308fd8fdc33f99eb3e52a4eb5-1] regionserver.HStore(310): Store=877b35c308fd8fdc33f99eb3e52a4eb5/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-17 11:15:12,567 INFO [StoreOpener-e7cdbf5f7db60ca1f8bd006676abb4f7-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region e7cdbf5f7db60ca1f8bd006676abb4f7 columnFamilyName f 2023-07-17 11:15:12,568 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/data/default/Group_testTableMoveTruncateAndDrop/877b35c308fd8fdc33f99eb3e52a4eb5 2023-07-17 11:15:12,568 INFO [StoreOpener-e7cdbf5f7db60ca1f8bd006676abb4f7-1] regionserver.HStore(310): Store=e7cdbf5f7db60ca1f8bd006676abb4f7/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-17 11:15:12,569 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/data/default/Group_testTableMoveTruncateAndDrop/877b35c308fd8fdc33f99eb3e52a4eb5 2023-07-17 11:15:12,575 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/data/default/Group_testTableMoveTruncateAndDrop/e7cdbf5f7db60ca1f8bd006676abb4f7 2023-07-17 11:15:12,575 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under 
hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/data/default/Group_testTableMoveTruncateAndDrop/e7cdbf5f7db60ca1f8bd006676abb4f7 2023-07-17 11:15:12,576 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 877b35c308fd8fdc33f99eb3e52a4eb5 2023-07-17 11:15:12,580 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for e7cdbf5f7db60ca1f8bd006676abb4f7 2023-07-17 11:15:12,580 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/data/default/Group_testTableMoveTruncateAndDrop/877b35c308fd8fdc33f99eb3e52a4eb5/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-17 11:15:12,581 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 877b35c308fd8fdc33f99eb3e52a4eb5; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9614122880, jitterRate=-0.10461503267288208}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-17 11:15:12,581 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 877b35c308fd8fdc33f99eb3e52a4eb5: 2023-07-17 11:15:12,583 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689592511990.877b35c308fd8fdc33f99eb3e52a4eb5., pid=55, masterSystemTime=1689592512556 2023-07-17 11:15:12,583 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/data/default/Group_testTableMoveTruncateAndDrop/e7cdbf5f7db60ca1f8bd006676abb4f7/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-17 11:15:12,584 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened e7cdbf5f7db60ca1f8bd006676abb4f7; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11333004640, jitterRate=0.05546830594539642}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-17 11:15:12,584 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for e7cdbf5f7db60ca1f8bd006676abb4f7: 2023-07-17 11:15:12,585 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689592511990.e7cdbf5f7db60ca1f8bd006676abb4f7., pid=57, masterSystemTime=1689592512557 2023-07-17 11:15:12,590 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689592511990.877b35c308fd8fdc33f99eb3e52a4eb5. 2023-07-17 11:15:12,590 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689592511990.877b35c308fd8fdc33f99eb3e52a4eb5. 2023-07-17 11:15:12,591 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,zzzzz,1689592511990.d4faed819126368bedc0b694e57bf52f. 
2023-07-17 11:15:12,591 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => d4faed819126368bedc0b694e57bf52f, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1689592511990.d4faed819126368bedc0b694e57bf52f.', STARTKEY => 'zzzzz', ENDKEY => ''} 2023-07-17 11:15:12,591 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop d4faed819126368bedc0b694e57bf52f 2023-07-17 11:15:12,591 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689592511990.e7cdbf5f7db60ca1f8bd006676abb4f7. 2023-07-17 11:15:12,591 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=52 updating hbase:meta row=877b35c308fd8fdc33f99eb3e52a4eb5, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,37409,1689592505527 2023-07-17 11:15:12,591 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689592511990.e7cdbf5f7db60ca1f8bd006676abb4f7. 2023-07-17 11:15:12,592 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,aaaaa,1689592511990.f3e7fd65b508e0a1f4e57bcbe5c4303e. 2023-07-17 11:15:12,592 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689592511990.877b35c308fd8fdc33f99eb3e52a4eb5.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689592512591"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689592512591"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689592512591"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689592512591"}]},"ts":"1689592512591"} 2023-07-17 11:15:12,592 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => f3e7fd65b508e0a1f4e57bcbe5c4303e, NAME => 'Group_testTableMoveTruncateAndDrop,aaaaa,1689592511990.f3e7fd65b508e0a1f4e57bcbe5c4303e.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'} 2023-07-17 11:15:12,592 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop f3e7fd65b508e0a1f4e57bcbe5c4303e 2023-07-17 11:15:12,592 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,aaaaa,1689592511990.f3e7fd65b508e0a1f4e57bcbe5c4303e.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-17 11:15:12,592 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for f3e7fd65b508e0a1f4e57bcbe5c4303e 2023-07-17 11:15:12,592 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for f3e7fd65b508e0a1f4e57bcbe5c4303e 2023-07-17 11:15:12,593 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=53 updating hbase:meta row=e7cdbf5f7db60ca1f8bd006676abb4f7, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,35719,1689592509057 2023-07-17 11:15:12,593 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put 
{"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689592511990.e7cdbf5f7db60ca1f8bd006676abb4f7.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689592512592"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689592512592"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689592512592"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689592512592"}]},"ts":"1689592512592"} 2023-07-17 11:15:12,591 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,zzzzz,1689592511990.d4faed819126368bedc0b694e57bf52f.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-17 11:15:12,595 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for d4faed819126368bedc0b694e57bf52f 2023-07-17 11:15:12,595 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for d4faed819126368bedc0b694e57bf52f 2023-07-17 11:15:12,597 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=55, resume processing ppid=52 2023-07-17 11:15:12,597 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=55, ppid=52, state=SUCCESS; OpenRegionProcedure 877b35c308fd8fdc33f99eb3e52a4eb5, server=jenkins-hbase4.apache.org,37409,1689592505527 in 190 msec 2023-07-17 11:15:12,599 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=57, resume processing ppid=53 2023-07-17 11:15:12,599 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=57, ppid=53, state=SUCCESS; OpenRegionProcedure e7cdbf5f7db60ca1f8bd006676abb4f7, server=jenkins-hbase4.apache.org,35719,1689592509057 in 184 msec 2023-07-17 11:15:12,600 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=52, ppid=49, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=877b35c308fd8fdc33f99eb3e52a4eb5, ASSIGN in 355 msec 2023-07-17 11:15:12,601 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=53, ppid=49, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=e7cdbf5f7db60ca1f8bd006676abb4f7, ASSIGN in 357 msec 2023-07-17 11:15:12,602 INFO [StoreOpener-f3e7fd65b508e0a1f4e57bcbe5c4303e-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region f3e7fd65b508e0a1f4e57bcbe5c4303e 2023-07-17 11:15:12,606 DEBUG [StoreOpener-f3e7fd65b508e0a1f4e57bcbe5c4303e-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/data/default/Group_testTableMoveTruncateAndDrop/f3e7fd65b508e0a1f4e57bcbe5c4303e/f 2023-07-17 11:15:12,606 DEBUG [StoreOpener-f3e7fd65b508e0a1f4e57bcbe5c4303e-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/data/default/Group_testTableMoveTruncateAndDrop/f3e7fd65b508e0a1f4e57bcbe5c4303e/f 2023-07-17 11:15:12,607 INFO [StoreOpener-d4faed819126368bedc0b694e57bf52f-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, 
cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region d4faed819126368bedc0b694e57bf52f 2023-07-17 11:15:12,607 INFO [StoreOpener-f3e7fd65b508e0a1f4e57bcbe5c4303e-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region f3e7fd65b508e0a1f4e57bcbe5c4303e columnFamilyName f 2023-07-17 11:15:12,608 INFO [StoreOpener-f3e7fd65b508e0a1f4e57bcbe5c4303e-1] regionserver.HStore(310): Store=f3e7fd65b508e0a1f4e57bcbe5c4303e/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-17 11:15:12,609 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/data/default/Group_testTableMoveTruncateAndDrop/f3e7fd65b508e0a1f4e57bcbe5c4303e 2023-07-17 11:15:12,609 DEBUG [StoreOpener-d4faed819126368bedc0b694e57bf52f-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/data/default/Group_testTableMoveTruncateAndDrop/d4faed819126368bedc0b694e57bf52f/f 2023-07-17 11:15:12,609 DEBUG [StoreOpener-d4faed819126368bedc0b694e57bf52f-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/data/default/Group_testTableMoveTruncateAndDrop/d4faed819126368bedc0b694e57bf52f/f 2023-07-17 11:15:12,609 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/data/default/Group_testTableMoveTruncateAndDrop/f3e7fd65b508e0a1f4e57bcbe5c4303e 2023-07-17 11:15:12,610 INFO [StoreOpener-d4faed819126368bedc0b694e57bf52f-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region d4faed819126368bedc0b694e57bf52f columnFamilyName f 2023-07-17 11:15:12,610 INFO [StoreOpener-d4faed819126368bedc0b694e57bf52f-1] regionserver.HStore(310): Store=d4faed819126368bedc0b694e57bf52f/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-17 11:15:12,611 
DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/data/default/Group_testTableMoveTruncateAndDrop/d4faed819126368bedc0b694e57bf52f 2023-07-17 11:15:12,612 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/data/default/Group_testTableMoveTruncateAndDrop/d4faed819126368bedc0b694e57bf52f 2023-07-17 11:15:12,614 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for f3e7fd65b508e0a1f4e57bcbe5c4303e 2023-07-17 11:15:12,617 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for d4faed819126368bedc0b694e57bf52f 2023-07-17 11:15:12,622 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/data/default/Group_testTableMoveTruncateAndDrop/f3e7fd65b508e0a1f4e57bcbe5c4303e/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-17 11:15:12,623 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/data/default/Group_testTableMoveTruncateAndDrop/d4faed819126368bedc0b694e57bf52f/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-17 11:15:12,623 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened f3e7fd65b508e0a1f4e57bcbe5c4303e; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10336960160, jitterRate=-0.037295565009117126}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-17 11:15:12,623 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for f3e7fd65b508e0a1f4e57bcbe5c4303e: 2023-07-17 11:15:12,624 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened d4faed819126368bedc0b694e57bf52f; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11774635200, jitterRate=0.09659835696220398}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-17 11:15:12,624 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for d4faed819126368bedc0b694e57bf52f: 2023-07-17 11:15:12,624 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,aaaaa,1689592511990.f3e7fd65b508e0a1f4e57bcbe5c4303e., pid=56, masterSystemTime=1689592512557 2023-07-17 11:15:12,625 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,zzzzz,1689592511990.d4faed819126368bedc0b694e57bf52f., pid=58, masterSystemTime=1689592512556 2023-07-17 11:15:12,628 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,aaaaa,1689592511990.f3e7fd65b508e0a1f4e57bcbe5c4303e. 
2023-07-17 11:15:12,628 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,aaaaa,1689592511990.f3e7fd65b508e0a1f4e57bcbe5c4303e. 2023-07-17 11:15:12,628 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,,1689592511990.e4b5584cf07074ba1e940bd2ffe8188c. 2023-07-17 11:15:12,628 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => e4b5584cf07074ba1e940bd2ffe8188c, NAME => 'Group_testTableMoveTruncateAndDrop,,1689592511990.e4b5584cf07074ba1e940bd2ffe8188c.', STARTKEY => '', ENDKEY => 'aaaaa'} 2023-07-17 11:15:12,629 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop e4b5584cf07074ba1e940bd2ffe8188c 2023-07-17 11:15:12,629 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,,1689592511990.e4b5584cf07074ba1e940bd2ffe8188c.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-17 11:15:12,629 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for e4b5584cf07074ba1e940bd2ffe8188c 2023-07-17 11:15:12,629 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for e4b5584cf07074ba1e940bd2ffe8188c 2023-07-17 11:15:12,630 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=51 updating hbase:meta row=f3e7fd65b508e0a1f4e57bcbe5c4303e, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,35719,1689592509057 2023-07-17 11:15:12,630 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689592511990.f3e7fd65b508e0a1f4e57bcbe5c4303e.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689592512629"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689592512629"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689592512629"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689592512629"}]},"ts":"1689592512629"} 2023-07-17 11:15:12,633 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,zzzzz,1689592511990.d4faed819126368bedc0b694e57bf52f. 2023-07-17 11:15:12,633 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,zzzzz,1689592511990.d4faed819126368bedc0b694e57bf52f. 
2023-07-17 11:15:12,634 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=54 updating hbase:meta row=d4faed819126368bedc0b694e57bf52f, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,37409,1689592505527 2023-07-17 11:15:12,634 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689592511990.d4faed819126368bedc0b694e57bf52f.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689592512634"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689592512634"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689592512634"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689592512634"}]},"ts":"1689592512634"} 2023-07-17 11:15:12,647 INFO [StoreOpener-e4b5584cf07074ba1e940bd2ffe8188c-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region e4b5584cf07074ba1e940bd2ffe8188c 2023-07-17 11:15:12,647 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=56, resume processing ppid=51 2023-07-17 11:15:12,647 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=56, ppid=51, state=SUCCESS; OpenRegionProcedure f3e7fd65b508e0a1f4e57bcbe5c4303e, server=jenkins-hbase4.apache.org,35719,1689592509057 in 238 msec 2023-07-17 11:15:12,648 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=58, resume processing ppid=54 2023-07-17 11:15:12,648 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=58, ppid=54, state=SUCCESS; OpenRegionProcedure d4faed819126368bedc0b694e57bf52f, server=jenkins-hbase4.apache.org,37409,1689592505527 in 232 msec 2023-07-17 11:15:12,649 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=51, ppid=49, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=f3e7fd65b508e0a1f4e57bcbe5c4303e, ASSIGN in 408 msec 2023-07-17 11:15:12,649 DEBUG [StoreOpener-e4b5584cf07074ba1e940bd2ffe8188c-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/data/default/Group_testTableMoveTruncateAndDrop/e4b5584cf07074ba1e940bd2ffe8188c/f 2023-07-17 11:15:12,649 DEBUG [StoreOpener-e4b5584cf07074ba1e940bd2ffe8188c-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/data/default/Group_testTableMoveTruncateAndDrop/e4b5584cf07074ba1e940bd2ffe8188c/f 2023-07-17 11:15:12,650 INFO [StoreOpener-e4b5584cf07074ba1e940bd2ffe8188c-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region e4b5584cf07074ba1e940bd2ffe8188c columnFamilyName f 2023-07-17 11:15:12,650 INFO 
[PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=54, ppid=49, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=d4faed819126368bedc0b694e57bf52f, ASSIGN in 406 msec 2023-07-17 11:15:12,651 INFO [StoreOpener-e4b5584cf07074ba1e940bd2ffe8188c-1] regionserver.HStore(310): Store=e4b5584cf07074ba1e940bd2ffe8188c/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-17 11:15:12,652 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/data/default/Group_testTableMoveTruncateAndDrop/e4b5584cf07074ba1e940bd2ffe8188c 2023-07-17 11:15:12,652 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/data/default/Group_testTableMoveTruncateAndDrop/e4b5584cf07074ba1e940bd2ffe8188c 2023-07-17 11:15:12,656 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for e4b5584cf07074ba1e940bd2ffe8188c 2023-07-17 11:15:12,659 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/data/default/Group_testTableMoveTruncateAndDrop/e4b5584cf07074ba1e940bd2ffe8188c/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-17 11:15:12,660 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened e4b5584cf07074ba1e940bd2ffe8188c; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11207593920, jitterRate=0.043788522481918335}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-17 11:15:12,660 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for e4b5584cf07074ba1e940bd2ffe8188c: 2023-07-17 11:15:12,662 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,,1689592511990.e4b5584cf07074ba1e940bd2ffe8188c., pid=59, masterSystemTime=1689592512557 2023-07-17 11:15:12,664 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,,1689592511990.e4b5584cf07074ba1e940bd2ffe8188c. 2023-07-17 11:15:12,664 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,,1689592511990.e4b5584cf07074ba1e940bd2ffe8188c. 
2023-07-17 11:15:12,665 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=50 updating hbase:meta row=e4b5584cf07074ba1e940bd2ffe8188c, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,35719,1689592509057 2023-07-17 11:15:12,665 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,,1689592511990.e4b5584cf07074ba1e940bd2ffe8188c.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689592512665"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689592512665"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689592512665"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689592512665"}]},"ts":"1689592512665"} 2023-07-17 11:15:12,670 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=59, resume processing ppid=50 2023-07-17 11:15:12,670 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=59, ppid=50, state=SUCCESS; OpenRegionProcedure e4b5584cf07074ba1e940bd2ffe8188c, server=jenkins-hbase4.apache.org,35719,1689592509057 in 249 msec 2023-07-17 11:15:12,675 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=50, resume processing ppid=49 2023-07-17 11:15:12,675 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=50, ppid=49, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=e4b5584cf07074ba1e940bd2ffe8188c, ASSIGN in 432 msec 2023-07-17 11:15:12,675 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689592512675"}]},"ts":"1689592512675"} 2023-07-17 11:15:12,677 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=ENABLED in hbase:meta 2023-07-17 11:15:12,680 DEBUG [PEWorker-3] procedure.TruncateTableProcedure(145): truncate 'Group_testTableMoveTruncateAndDrop' completed 2023-07-17 11:15:12,682 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=49, state=SUCCESS; TruncateTableProcedure (table=Group_testTableMoveTruncateAndDrop preserveSplits=true) in 744 msec 2023-07-17 11:15:13,054 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(1230): Checking to see if procedure is done pid=49 2023-07-17 11:15:13,054 INFO [Listener at localhost/45539] client.HBaseAdmin$TableFuture(3541): Operation: TRUNCATE, Table Name: default:Group_testTableMoveTruncateAndDrop, procId: 49 completed 2023-07-17 11:15:13,056 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=Group_testTableMoveTruncateAndDrop_465521657 2023-07-17 11:15:13,056 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-17 11:15:13,057 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=Group_testTableMoveTruncateAndDrop_465521657 2023-07-17 11:15:13,058 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) 
(remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-17 11:15:13,059 INFO [Listener at localhost/45539] client.HBaseAdmin$15(890): Started disable of Group_testTableMoveTruncateAndDrop 2023-07-17 11:15:13,059 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable Group_testTableMoveTruncateAndDrop 2023-07-17 11:15:13,062 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] procedure2.ProcedureExecutor(1029): Stored pid=60, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-17 11:15:13,072 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(1230): Checking to see if procedure is done pid=60 2023-07-17 11:15:13,072 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689592513072"}]},"ts":"1689592513072"} 2023-07-17 11:15:13,074 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=DISABLING in hbase:meta 2023-07-17 11:15:13,076 INFO [PEWorker-4] procedure.DisableTableProcedure(293): Set Group_testTableMoveTruncateAndDrop to state=DISABLING 2023-07-17 11:15:13,077 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=61, ppid=60, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=e4b5584cf07074ba1e940bd2ffe8188c, UNASSIGN}, {pid=62, ppid=60, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=f3e7fd65b508e0a1f4e57bcbe5c4303e, UNASSIGN}, {pid=63, ppid=60, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=877b35c308fd8fdc33f99eb3e52a4eb5, UNASSIGN}, {pid=64, ppid=60, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=e7cdbf5f7db60ca1f8bd006676abb4f7, UNASSIGN}, {pid=65, ppid=60, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=d4faed819126368bedc0b694e57bf52f, UNASSIGN}] 2023-07-17 11:15:13,087 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=64, ppid=60, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=e7cdbf5f7db60ca1f8bd006676abb4f7, UNASSIGN 2023-07-17 11:15:13,087 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=63, ppid=60, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=877b35c308fd8fdc33f99eb3e52a4eb5, UNASSIGN 2023-07-17 11:15:13,087 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=65, ppid=60, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=d4faed819126368bedc0b694e57bf52f, UNASSIGN 2023-07-17 11:15:13,088 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=62, ppid=60, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=f3e7fd65b508e0a1f4e57bcbe5c4303e, UNASSIGN 2023-07-17 
11:15:13,088 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=61, ppid=60, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=e4b5584cf07074ba1e940bd2ffe8188c, UNASSIGN 2023-07-17 11:15:13,092 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=64 updating hbase:meta row=e7cdbf5f7db60ca1f8bd006676abb4f7, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,35719,1689592509057 2023-07-17 11:15:13,092 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689592511990.e7cdbf5f7db60ca1f8bd006676abb4f7.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689592513092"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689592513092"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689592513092"}]},"ts":"1689592513092"} 2023-07-17 11:15:13,092 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=61 updating hbase:meta row=e4b5584cf07074ba1e940bd2ffe8188c, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,35719,1689592509057 2023-07-17 11:15:13,093 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,,1689592511990.e4b5584cf07074ba1e940bd2ffe8188c.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689592513092"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689592513092"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689592513092"}]},"ts":"1689592513092"} 2023-07-17 11:15:13,093 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=62 updating hbase:meta row=f3e7fd65b508e0a1f4e57bcbe5c4303e, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,35719,1689592509057 2023-07-17 11:15:13,093 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689592511990.f3e7fd65b508e0a1f4e57bcbe5c4303e.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689592513093"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689592513093"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689592513093"}]},"ts":"1689592513093"} 2023-07-17 11:15:13,094 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=65 updating hbase:meta row=d4faed819126368bedc0b694e57bf52f, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,37409,1689592505527 2023-07-17 11:15:13,094 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689592511990.d4faed819126368bedc0b694e57bf52f.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689592513094"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689592513094"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689592513094"}]},"ts":"1689592513094"} 2023-07-17 11:15:13,093 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=63 updating hbase:meta row=877b35c308fd8fdc33f99eb3e52a4eb5, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,37409,1689592505527 2023-07-17 11:15:13,095 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689592511990.877b35c308fd8fdc33f99eb3e52a4eb5.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689592513092"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689592513092"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689592513092"}]},"ts":"1689592513092"} 2023-07-17 11:15:13,097 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=66, ppid=64, state=RUNNABLE; CloseRegionProcedure e7cdbf5f7db60ca1f8bd006676abb4f7, server=jenkins-hbase4.apache.org,35719,1689592509057}] 2023-07-17 11:15:13,099 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=67, ppid=61, state=RUNNABLE; CloseRegionProcedure e4b5584cf07074ba1e940bd2ffe8188c, server=jenkins-hbase4.apache.org,35719,1689592509057}] 2023-07-17 11:15:13,104 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=68, ppid=62, state=RUNNABLE; CloseRegionProcedure f3e7fd65b508e0a1f4e57bcbe5c4303e, server=jenkins-hbase4.apache.org,35719,1689592509057}] 2023-07-17 11:15:13,105 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=69, ppid=65, state=RUNNABLE; CloseRegionProcedure d4faed819126368bedc0b694e57bf52f, server=jenkins-hbase4.apache.org,37409,1689592505527}] 2023-07-17 11:15:13,106 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=70, ppid=63, state=RUNNABLE; CloseRegionProcedure 877b35c308fd8fdc33f99eb3e52a4eb5, server=jenkins-hbase4.apache.org,37409,1689592505527}] 2023-07-17 11:15:13,173 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(1230): Checking to see if procedure is done pid=60 2023-07-17 11:15:13,259 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close e4b5584cf07074ba1e940bd2ffe8188c 2023-07-17 11:15:13,260 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing e4b5584cf07074ba1e940bd2ffe8188c, disabling compactions & flushes 2023-07-17 11:15:13,261 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,,1689592511990.e4b5584cf07074ba1e940bd2ffe8188c. 2023-07-17 11:15:13,261 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,,1689592511990.e4b5584cf07074ba1e940bd2ffe8188c. 2023-07-17 11:15:13,261 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,,1689592511990.e4b5584cf07074ba1e940bd2ffe8188c. after waiting 0 ms 2023-07-17 11:15:13,261 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,,1689592511990.e4b5584cf07074ba1e940bd2ffe8188c. 
2023-07-17 11:15:13,263 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close d4faed819126368bedc0b694e57bf52f 2023-07-17 11:15:13,264 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing d4faed819126368bedc0b694e57bf52f, disabling compactions & flushes 2023-07-17 11:15:13,264 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,zzzzz,1689592511990.d4faed819126368bedc0b694e57bf52f. 2023-07-17 11:15:13,265 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689592511990.d4faed819126368bedc0b694e57bf52f. 2023-07-17 11:15:13,265 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689592511990.d4faed819126368bedc0b694e57bf52f. after waiting 0 ms 2023-07-17 11:15:13,265 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,zzzzz,1689592511990.d4faed819126368bedc0b694e57bf52f. 2023-07-17 11:15:13,275 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/data/default/Group_testTableMoveTruncateAndDrop/d4faed819126368bedc0b694e57bf52f/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-17 11:15:13,275 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/data/default/Group_testTableMoveTruncateAndDrop/e4b5584cf07074ba1e940bd2ffe8188c/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-17 11:15:13,276 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,zzzzz,1689592511990.d4faed819126368bedc0b694e57bf52f. 2023-07-17 11:15:13,276 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,,1689592511990.e4b5584cf07074ba1e940bd2ffe8188c. 2023-07-17 11:15:13,276 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for d4faed819126368bedc0b694e57bf52f: 2023-07-17 11:15:13,276 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for e4b5584cf07074ba1e940bd2ffe8188c: 2023-07-17 11:15:13,279 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed e4b5584cf07074ba1e940bd2ffe8188c 2023-07-17 11:15:13,279 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close e7cdbf5f7db60ca1f8bd006676abb4f7 2023-07-17 11:15:13,280 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing e7cdbf5f7db60ca1f8bd006676abb4f7, disabling compactions & flushes 2023-07-17 11:15:13,280 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689592511990.e7cdbf5f7db60ca1f8bd006676abb4f7. 
2023-07-17 11:15:13,280 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689592511990.e7cdbf5f7db60ca1f8bd006676abb4f7. 2023-07-17 11:15:13,280 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689592511990.e7cdbf5f7db60ca1f8bd006676abb4f7. after waiting 0 ms 2023-07-17 11:15:13,280 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689592511990.e7cdbf5f7db60ca1f8bd006676abb4f7. 2023-07-17 11:15:13,281 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=61 updating hbase:meta row=e4b5584cf07074ba1e940bd2ffe8188c, regionState=CLOSED 2023-07-17 11:15:13,281 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,,1689592511990.e4b5584cf07074ba1e940bd2ffe8188c.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689592513281"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689592513281"}]},"ts":"1689592513281"} 2023-07-17 11:15:13,281 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed d4faed819126368bedc0b694e57bf52f 2023-07-17 11:15:13,281 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 877b35c308fd8fdc33f99eb3e52a4eb5 2023-07-17 11:15:13,282 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 877b35c308fd8fdc33f99eb3e52a4eb5, disabling compactions & flushes 2023-07-17 11:15:13,282 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689592511990.877b35c308fd8fdc33f99eb3e52a4eb5. 2023-07-17 11:15:13,282 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689592511990.877b35c308fd8fdc33f99eb3e52a4eb5. 2023-07-17 11:15:13,282 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689592511990.877b35c308fd8fdc33f99eb3e52a4eb5. after waiting 0 ms 2023-07-17 11:15:13,282 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689592511990.877b35c308fd8fdc33f99eb3e52a4eb5. 
2023-07-17 11:15:13,286 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=65 updating hbase:meta row=d4faed819126368bedc0b694e57bf52f, regionState=CLOSED 2023-07-17 11:15:13,287 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689592511990.d4faed819126368bedc0b694e57bf52f.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689592513286"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689592513286"}]},"ts":"1689592513286"} 2023-07-17 11:15:13,296 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/data/default/Group_testTableMoveTruncateAndDrop/e7cdbf5f7db60ca1f8bd006676abb4f7/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-17 11:15:13,297 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689592511990.e7cdbf5f7db60ca1f8bd006676abb4f7. 2023-07-17 11:15:13,297 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for e7cdbf5f7db60ca1f8bd006676abb4f7: 2023-07-17 11:15:13,297 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/data/default/Group_testTableMoveTruncateAndDrop/877b35c308fd8fdc33f99eb3e52a4eb5/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-17 11:15:13,298 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689592511990.877b35c308fd8fdc33f99eb3e52a4eb5. 2023-07-17 11:15:13,298 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 877b35c308fd8fdc33f99eb3e52a4eb5: 2023-07-17 11:15:13,299 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=67, resume processing ppid=61 2023-07-17 11:15:13,299 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=67, ppid=61, state=SUCCESS; CloseRegionProcedure e4b5584cf07074ba1e940bd2ffe8188c, server=jenkins-hbase4.apache.org,35719,1689592509057 in 188 msec 2023-07-17 11:15:13,299 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed e7cdbf5f7db60ca1f8bd006676abb4f7 2023-07-17 11:15:13,299 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close f3e7fd65b508e0a1f4e57bcbe5c4303e 2023-07-17 11:15:13,300 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing f3e7fd65b508e0a1f4e57bcbe5c4303e, disabling compactions & flushes 2023-07-17 11:15:13,301 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,aaaaa,1689592511990.f3e7fd65b508e0a1f4e57bcbe5c4303e. 2023-07-17 11:15:13,301 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689592511990.f3e7fd65b508e0a1f4e57bcbe5c4303e. 2023-07-17 11:15:13,301 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689592511990.f3e7fd65b508e0a1f4e57bcbe5c4303e. 
after waiting 0 ms 2023-07-17 11:15:13,301 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,aaaaa,1689592511990.f3e7fd65b508e0a1f4e57bcbe5c4303e. 2023-07-17 11:15:13,302 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=64 updating hbase:meta row=e7cdbf5f7db60ca1f8bd006676abb4f7, regionState=CLOSED 2023-07-17 11:15:13,302 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689592511990.e7cdbf5f7db60ca1f8bd006676abb4f7.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689592513301"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689592513301"}]},"ts":"1689592513301"} 2023-07-17 11:15:13,303 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 877b35c308fd8fdc33f99eb3e52a4eb5 2023-07-17 11:15:13,303 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=69, resume processing ppid=65 2023-07-17 11:15:13,303 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=69, ppid=65, state=SUCCESS; CloseRegionProcedure d4faed819126368bedc0b694e57bf52f, server=jenkins-hbase4.apache.org,37409,1689592505527 in 188 msec 2023-07-17 11:15:13,304 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=61, ppid=60, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=e4b5584cf07074ba1e940bd2ffe8188c, UNASSIGN in 222 msec 2023-07-17 11:15:13,304 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=63 updating hbase:meta row=877b35c308fd8fdc33f99eb3e52a4eb5, regionState=CLOSED 2023-07-17 11:15:13,304 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689592511990.877b35c308fd8fdc33f99eb3e52a4eb5.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689592513304"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689592513304"}]},"ts":"1689592513304"} 2023-07-17 11:15:13,308 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=70, resume processing ppid=63 2023-07-17 11:15:13,308 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=70, ppid=63, state=SUCCESS; CloseRegionProcedure 877b35c308fd8fdc33f99eb3e52a4eb5, server=jenkins-hbase4.apache.org,37409,1689592505527 in 201 msec 2023-07-17 11:15:13,309 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=65, ppid=60, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=d4faed819126368bedc0b694e57bf52f, UNASSIGN in 226 msec 2023-07-17 11:15:13,309 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=66, resume processing ppid=64 2023-07-17 11:15:13,310 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=66, ppid=64, state=SUCCESS; CloseRegionProcedure e7cdbf5f7db60ca1f8bd006676abb4f7, server=jenkins-hbase4.apache.org,35719,1689592509057 in 207 msec 2023-07-17 11:15:13,311 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=63, ppid=60, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=877b35c308fd8fdc33f99eb3e52a4eb5, UNASSIGN in 231 msec 2023-07-17 11:15:13,315 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote 
file=hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/data/default/Group_testTableMoveTruncateAndDrop/f3e7fd65b508e0a1f4e57bcbe5c4303e/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-17 11:15:13,315 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=64, ppid=60, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=e7cdbf5f7db60ca1f8bd006676abb4f7, UNASSIGN in 233 msec 2023-07-17 11:15:13,316 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,aaaaa,1689592511990.f3e7fd65b508e0a1f4e57bcbe5c4303e. 2023-07-17 11:15:13,316 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for f3e7fd65b508e0a1f4e57bcbe5c4303e: 2023-07-17 11:15:13,319 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed f3e7fd65b508e0a1f4e57bcbe5c4303e 2023-07-17 11:15:13,319 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=62 updating hbase:meta row=f3e7fd65b508e0a1f4e57bcbe5c4303e, regionState=CLOSED 2023-07-17 11:15:13,319 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689592511990.f3e7fd65b508e0a1f4e57bcbe5c4303e.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689592513319"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689592513319"}]},"ts":"1689592513319"} 2023-07-17 11:15:13,325 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=68, resume processing ppid=62 2023-07-17 11:15:13,325 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=68, ppid=62, state=SUCCESS; CloseRegionProcedure f3e7fd65b508e0a1f4e57bcbe5c4303e, server=jenkins-hbase4.apache.org,35719,1689592509057 in 217 msec 2023-07-17 11:15:13,328 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=62, resume processing ppid=60 2023-07-17 11:15:13,328 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=62, ppid=60, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=f3e7fd65b508e0a1f4e57bcbe5c4303e, UNASSIGN in 249 msec 2023-07-17 11:15:13,329 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689592513329"}]},"ts":"1689592513329"} 2023-07-17 11:15:13,331 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=DISABLED in hbase:meta 2023-07-17 11:15:13,333 INFO [PEWorker-3] procedure.DisableTableProcedure(305): Set Group_testTableMoveTruncateAndDrop to state=DISABLED 2023-07-17 11:15:13,336 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=60, state=SUCCESS; DisableTableProcedure table=Group_testTableMoveTruncateAndDrop in 275 msec 2023-07-17 11:15:13,375 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(1230): Checking to see if procedure is done pid=60 2023-07-17 11:15:13,376 INFO [Listener at localhost/45539] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:Group_testTableMoveTruncateAndDrop, procId: 60 completed 2023-07-17 11:15:13,384 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.HMaster$5(2228): Client=jenkins//172.31.14.131 delete 
Group_testTableMoveTruncateAndDrop 2023-07-17 11:15:13,394 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] procedure2.ProcedureExecutor(1029): Stored pid=71, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-17 11:15:13,397 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=71, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-17 11:15:13,399 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=71, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-17 11:15:13,399 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'Group_testTableMoveTruncateAndDrop' from rsgroup 'Group_testTableMoveTruncateAndDrop_465521657' 2023-07-17 11:15:13,406 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testTableMoveTruncateAndDrop_465521657 2023-07-17 11:15:13,408 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-17 11:15:13,409 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-hbase.properties,hadoop-metrics2.properties 2023-07-17 11:15:13,410 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/.tmp/data/default/Group_testTableMoveTruncateAndDrop/e4b5584cf07074ba1e940bd2ffe8188c 2023-07-17 11:15:13,410 DEBUG [HFileArchiver-8] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/.tmp/data/default/Group_testTableMoveTruncateAndDrop/877b35c308fd8fdc33f99eb3e52a4eb5 2023-07-17 11:15:13,410 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/.tmp/data/default/Group_testTableMoveTruncateAndDrop/e7cdbf5f7db60ca1f8bd006676abb4f7 2023-07-17 11:15:13,410 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/.tmp/data/default/Group_testTableMoveTruncateAndDrop/f3e7fd65b508e0a1f4e57bcbe5c4303e 2023-07-17 11:15:13,410 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/.tmp/data/default/Group_testTableMoveTruncateAndDrop/d4faed819126368bedc0b694e57bf52f 2023-07-17 11:15:13,411 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-17 11:15:13,411 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-17 11:15:13,423 DEBUG [HFileArchiver-2] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/.tmp/data/default/Group_testTableMoveTruncateAndDrop/f3e7fd65b508e0a1f4e57bcbe5c4303e/f, FileablePath, 
hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/.tmp/data/default/Group_testTableMoveTruncateAndDrop/f3e7fd65b508e0a1f4e57bcbe5c4303e/recovered.edits] 2023-07-17 11:15:13,424 DEBUG [HFileArchiver-5] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/.tmp/data/default/Group_testTableMoveTruncateAndDrop/e4b5584cf07074ba1e940bd2ffe8188c/f, FileablePath, hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/.tmp/data/default/Group_testTableMoveTruncateAndDrop/e4b5584cf07074ba1e940bd2ffe8188c/recovered.edits] 2023-07-17 11:15:13,427 DEBUG [HFileArchiver-4] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/.tmp/data/default/Group_testTableMoveTruncateAndDrop/e7cdbf5f7db60ca1f8bd006676abb4f7/f, FileablePath, hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/.tmp/data/default/Group_testTableMoveTruncateAndDrop/e7cdbf5f7db60ca1f8bd006676abb4f7/recovered.edits] 2023-07-17 11:15:13,428 DEBUG [HFileArchiver-3] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/.tmp/data/default/Group_testTableMoveTruncateAndDrop/d4faed819126368bedc0b694e57bf52f/f, FileablePath, hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/.tmp/data/default/Group_testTableMoveTruncateAndDrop/d4faed819126368bedc0b694e57bf52f/recovered.edits] 2023-07-17 11:15:13,429 DEBUG [HFileArchiver-8] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/.tmp/data/default/Group_testTableMoveTruncateAndDrop/877b35c308fd8fdc33f99eb3e52a4eb5/f, FileablePath, hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/.tmp/data/default/Group_testTableMoveTruncateAndDrop/877b35c308fd8fdc33f99eb3e52a4eb5/recovered.edits] 2023-07-17 11:15:13,438 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(1230): Checking to see if procedure is done pid=71 2023-07-17 11:15:13,455 DEBUG [HFileArchiver-5] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/.tmp/data/default/Group_testTableMoveTruncateAndDrop/e4b5584cf07074ba1e940bd2ffe8188c/recovered.edits/4.seqid to hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/archive/data/default/Group_testTableMoveTruncateAndDrop/e4b5584cf07074ba1e940bd2ffe8188c/recovered.edits/4.seqid 2023-07-17 11:15:13,455 DEBUG [HFileArchiver-2] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/.tmp/data/default/Group_testTableMoveTruncateAndDrop/f3e7fd65b508e0a1f4e57bcbe5c4303e/recovered.edits/4.seqid to hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/archive/data/default/Group_testTableMoveTruncateAndDrop/f3e7fd65b508e0a1f4e57bcbe5c4303e/recovered.edits/4.seqid 2023-07-17 11:15:13,456 DEBUG [HFileArchiver-5] backup.HFileArchiver(596): Deleted hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/.tmp/data/default/Group_testTableMoveTruncateAndDrop/e4b5584cf07074ba1e940bd2ffe8188c 2023-07-17 11:15:13,456 DEBUG [HFileArchiver-3] 
backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/.tmp/data/default/Group_testTableMoveTruncateAndDrop/d4faed819126368bedc0b694e57bf52f/recovered.edits/4.seqid to hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/archive/data/default/Group_testTableMoveTruncateAndDrop/d4faed819126368bedc0b694e57bf52f/recovered.edits/4.seqid 2023-07-17 11:15:13,456 DEBUG [HFileArchiver-2] backup.HFileArchiver(596): Deleted hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/.tmp/data/default/Group_testTableMoveTruncateAndDrop/f3e7fd65b508e0a1f4e57bcbe5c4303e 2023-07-17 11:15:13,460 DEBUG [HFileArchiver-4] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/.tmp/data/default/Group_testTableMoveTruncateAndDrop/e7cdbf5f7db60ca1f8bd006676abb4f7/recovered.edits/4.seqid to hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/archive/data/default/Group_testTableMoveTruncateAndDrop/e7cdbf5f7db60ca1f8bd006676abb4f7/recovered.edits/4.seqid 2023-07-17 11:15:13,461 DEBUG [HFileArchiver-3] backup.HFileArchiver(596): Deleted hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/.tmp/data/default/Group_testTableMoveTruncateAndDrop/d4faed819126368bedc0b694e57bf52f 2023-07-17 11:15:13,461 DEBUG [HFileArchiver-4] backup.HFileArchiver(596): Deleted hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/.tmp/data/default/Group_testTableMoveTruncateAndDrop/e7cdbf5f7db60ca1f8bd006676abb4f7 2023-07-17 11:15:13,462 DEBUG [HFileArchiver-8] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/.tmp/data/default/Group_testTableMoveTruncateAndDrop/877b35c308fd8fdc33f99eb3e52a4eb5/recovered.edits/4.seqid to hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/archive/data/default/Group_testTableMoveTruncateAndDrop/877b35c308fd8fdc33f99eb3e52a4eb5/recovered.edits/4.seqid 2023-07-17 11:15:13,462 DEBUG [HFileArchiver-8] backup.HFileArchiver(596): Deleted hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/.tmp/data/default/Group_testTableMoveTruncateAndDrop/877b35c308fd8fdc33f99eb3e52a4eb5 2023-07-17 11:15:13,463 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived Group_testTableMoveTruncateAndDrop regions 2023-07-17 11:15:13,466 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=71, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-17 11:15:13,473 WARN [PEWorker-2] procedure.DeleteTableProcedure(384): Deleting some vestigial 5 rows of Group_testTableMoveTruncateAndDrop from hbase:meta 2023-07-17 11:15:13,476 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(421): Removing 'Group_testTableMoveTruncateAndDrop' descriptor. 2023-07-17 11:15:13,478 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=71, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-17 11:15:13,478 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(411): Removing 'Group_testTableMoveTruncateAndDrop' from region states. 
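The drop in progress here (pid=71, DeleteTableProcedure) is the same pattern on the client side: one Admin call, with the master archiving the region directories via HFileArchiver, removing the region rows and table state from hbase:meta, and dropping the descriptor. A minimal sketch under the same assumptions as the disable example above:

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;

    public class DeleteTableSketch {
      public static void main(String[] args) throws Exception {
        TableName table = TableName.valueOf("Group_testTableMoveTruncateAndDrop");
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             Admin admin = conn.getAdmin()) {
          if (!admin.isTableDisabled(table)) {
            admin.disableTable(table);        // deleteTable is only accepted for a disabled table
          }
          // Runs the DeleteTableProcedure: archive the region directories, delete the
          // region rows and table state from hbase:meta, remove the table descriptor.
          admin.deleteTable(table);
          System.out.println("exists after drop: " + admin.tableExists(table));
        }
      }
    }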
2023-07-17 11:15:13,478 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,,1689592511990.e4b5584cf07074ba1e940bd2ffe8188c.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689592513478"}]},"ts":"9223372036854775807"} 2023-07-17 11:15:13,478 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689592511990.f3e7fd65b508e0a1f4e57bcbe5c4303e.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689592513478"}]},"ts":"9223372036854775807"} 2023-07-17 11:15:13,478 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689592511990.877b35c308fd8fdc33f99eb3e52a4eb5.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689592513478"}]},"ts":"9223372036854775807"} 2023-07-17 11:15:13,478 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689592511990.e7cdbf5f7db60ca1f8bd006676abb4f7.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689592513478"}]},"ts":"9223372036854775807"} 2023-07-17 11:15:13,478 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689592511990.d4faed819126368bedc0b694e57bf52f.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689592513478"}]},"ts":"9223372036854775807"} 2023-07-17 11:15:13,481 INFO [PEWorker-2] hbase.MetaTableAccessor(1788): Deleted 5 regions from META 2023-07-17 11:15:13,481 DEBUG [PEWorker-2] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => e4b5584cf07074ba1e940bd2ffe8188c, NAME => 'Group_testTableMoveTruncateAndDrop,,1689592511990.e4b5584cf07074ba1e940bd2ffe8188c.', STARTKEY => '', ENDKEY => 'aaaaa'}, {ENCODED => f3e7fd65b508e0a1f4e57bcbe5c4303e, NAME => 'Group_testTableMoveTruncateAndDrop,aaaaa,1689592511990.f3e7fd65b508e0a1f4e57bcbe5c4303e.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'}, {ENCODED => 877b35c308fd8fdc33f99eb3e52a4eb5, NAME => 'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689592511990.877b35c308fd8fdc33f99eb3e52a4eb5.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'}, {ENCODED => e7cdbf5f7db60ca1f8bd006676abb4f7, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689592511990.e7cdbf5f7db60ca1f8bd006676abb4f7.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'}, {ENCODED => d4faed819126368bedc0b694e57bf52f, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1689592511990.d4faed819126368bedc0b694e57bf52f.', STARTKEY => 'zzzzz', ENDKEY => ''}] 2023-07-17 11:15:13,481 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(415): Marking 'Group_testTableMoveTruncateAndDrop' as deleted. 
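After "Deleted 5 regions from META" the five region rows listed above are gone, and a prefix scan of hbase:meta is enough to confirm it. A hypothetical check (not part of the test; the helper name is made up for illustration) using only the standard client API:

    import java.io.IOException;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.Result;
    import org.apache.hadoop.hbase.client.ResultScanner;
    import org.apache.hadoop.hbase.client.Scan;
    import org.apache.hadoop.hbase.client.Table;
    import org.apache.hadoop.hbase.util.Bytes;

    final class MetaRowCheck {
      /** Counts hbase:meta region rows for the given table; 0 is expected after the drop. */
      static int remainingRegionRows(Connection conn, String tableName) throws IOException {
        // Region row keys in hbase:meta look like "<table>,<startKey>,<regionId>.<encodedName>.",
        // so a prefix scan on "<table>," matches the region rows but not the table state row.
        Scan scan = new Scan().setRowPrefixFilter(Bytes.toBytes(tableName + ","));
        try (Table meta = conn.getTable(TableName.META_TABLE_NAME);
             ResultScanner scanner = meta.getScanner(scan)) {
          int rows = 0;
          for (Result ignored : scanner) {
            rows++;
          }
          return rows;
        }
      }
    }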
2023-07-17 11:15:13,481 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689592513481"}]},"ts":"9223372036854775807"} 2023-07-17 11:15:13,484 INFO [PEWorker-2] hbase.MetaTableAccessor(1658): Deleted table Group_testTableMoveTruncateAndDrop state from META 2023-07-17 11:15:13,487 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(130): Finished pid=71, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-17 11:15:13,490 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=71, state=SUCCESS; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop in 101 msec 2023-07-17 11:15:13,519 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:meta' 2023-07-17 11:15:13,520 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:namespace' 2023-07-17 11:15:13,521 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:rsgroup' 2023-07-17 11:15:13,543 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(1230): Checking to see if procedure is done pid=71 2023-07-17 11:15:13,544 INFO [Listener at localhost/45539] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: default:Group_testTableMoveTruncateAndDrop, procId: 71 completed 2023-07-17 11:15:13,545 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=Group_testTableMoveTruncateAndDrop_465521657 2023-07-17 11:15:13,545 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-17 11:15:13,551 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-17 11:15:13,551 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-17 11:15:13,553 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-17 11:15:13,553 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-17 11:15:13,553 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-17 11:15:13,554 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:37409, jenkins-hbase4.apache.org:35719] to rsgroup default 2023-07-17 11:15:13,557 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testTableMoveTruncateAndDrop_465521657 2023-07-17 11:15:13,558 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-17 11:15:13,558 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-17 11:15:13,559 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-17 11:15:13,561 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group Group_testTableMoveTruncateAndDrop_465521657, current retry=0 2023-07-17 11:15:13,561 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,35719,1689592509057, jenkins-hbase4.apache.org,37409,1689592505527] are moved back to Group_testTableMoveTruncateAndDrop_465521657 2023-07-17 11:15:13,561 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminServer(438): Move servers done: Group_testTableMoveTruncateAndDrop_465521657 => default 2023-07-17 11:15:13,561 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-17 11:15:13,569 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup Group_testTableMoveTruncateAndDrop_465521657 2023-07-17 11:15:13,574 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-17 11:15:13,574 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-17 11:15:13,575 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-17 11:15:13,587 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-17 11:15:13,589 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-17 11:15:13,589 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
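The rsgroup cleanup above (MoveServers back to default, then RemoveRSGroup for Group_testTableMoveTruncateAndDrop_465521657) is the per-method teardown that TestRSGroupsBase drives through the RSGroupAdminEndpoint coprocessor. A rough sketch of the equivalent calls with the RSGroupAdminClient from the hbase-rsgroup module; the addresses and group name come from this log, but the constructor and method signatures here are written from memory and should be checked against branch-2.4:

    import java.util.Arrays;
    import java.util.HashSet;
    import java.util.Set;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.net.Address;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

    public class RSGroupTeardownSketch {
      public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create())) {
          RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
          String group = "Group_testTableMoveTruncateAndDrop_465521657";
          // Move the two region servers that were in the test group back to "default",
          // mirroring the MoveServers request in the log.
          Set<Address> servers = new HashSet<>(Arrays.asList(
              Address.fromParts("jenkins-hbase4.apache.org", 37409),
              Address.fromParts("jenkins-hbase4.apache.org", 35719)));
          rsGroupAdmin.moveServers(servers, "default");
          // With no servers or tables left in it, the group can be removed (RemoveRSGroup).
          rsGroupAdmin.removeRSGroup(group);
        }
      }
    }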
2023-07-17 11:15:13,589 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-17 11:15:13,590 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-17 11:15:13,590 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-17 11:15:13,591 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-17 11:15:13,596 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-17 11:15:13,597 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-17 11:15:13,599 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-17 11:15:13,604 INFO [Listener at localhost/45539] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-17 11:15:13,605 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-17 11:15:13,608 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-17 11:15:13,608 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-17 11:15:13,610 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-17 11:15:13,613 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-17 11:15:13,618 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-17 11:15:13,618 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-17 11:15:13,621 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:38451] to rsgroup master 2023-07-17 11:15:13,621 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:38451 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-17 11:15:13,621 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] ipc.CallRunner(144): callId: 146 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:36004 deadline: 1689593713621, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:38451 is either offline or it does not exist. 2023-07-17 11:15:13,622 WARN [Listener at localhost/45539] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:38451 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:38451 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-17 11:15:13,624 INFO [Listener at localhost/45539] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-17 11:15:13,625 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-17 11:15:13,626 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-17 11:15:13,626 INFO [Listener at localhost/45539] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:35719, jenkins-hbase4.apache.org:37409, jenkins-hbase4.apache.org:39617, jenkins-hbase4.apache.org:40489], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-17 11:15:13,628 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-17 11:15:13,629 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-17 11:15:13,657 INFO [Listener at localhost/45539] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testTableMoveTruncateAndDrop Thread=498 (was 424) Potentially hanging thread: hconnection-0x70181e65-shared-pool-9 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-678376863_17 at /127.0.0.1:45132 [Receiving block BP-1649377864-172.31.14.131-1689592499733:blk_1073741840_1016] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) 
sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1649377864-172.31.14.131-1689592499733:blk_1073741840_1016, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-7-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1649377864-172.31.14.131-1689592499733:blk_1073741840_1016, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-3 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) 
sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=35719 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp268841745-641 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:3;jenkins-hbase4:35719-longCompactions-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x62be270e-shared-pool-4 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:3;jenkins-hbase4:35719 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1092) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:175) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:123) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:159) java.security.AccessController.doPrivileged(Native Method) javax.security.auth.Subject.doAs(Subject.java:360) org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1873) org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:319) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:156) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=35719 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: hconnection-0x70181e65-shared-pool-6 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=35719 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) 
java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-678376863_17 at /127.0.0.1:55758 [Waiting for operation #5] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp268841745-644 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp268841745-642 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: HFileArchiver-4 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1372811857_17 at /127.0.0.1:45362 [Waiting for operation #2] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) 
sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-678376863_17 at /127.0.0.1:55804 [Receiving block BP-1649377864-172.31.14.131-1689592499733:blk_1073741840_1016] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp268841745-640 sun.misc.Unsafe.park(Native Method) 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35719 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: Timer for 'HBase' metrics system java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: qtp268841745-638 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/1811768867.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-7 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) 
sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Session-HouseKeeper-47ad00ae-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: HFileArchiver-3 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: jenkins-hbase4:35719Replication Statistics #0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:49750@0x0eff2867 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/1917705229.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client 
DFSClient_NONMAPREDUCE_-678376863_17 at /127.0.0.1:45374 [Waiting for operation #2] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x70181e65-shared-pool-11 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp268841745-639-acceptor-0@23d90ef-ServerConnector@55286062{HTTP/1.1, (http/1.1)}{0.0.0.0:46023} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-7-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:49750@0x0eff2867-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1372811857_17 at /127.0.0.1:53268 [Waiting for operation #2] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: HFileArchiver-6 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-5 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-3-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) 
org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x70181e65-shared-pool-10 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (334980569) connection to localhost/127.0.0.1:41739 from jenkins.hfs.3 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=35719 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: LeaseRenewer:jenkins.hfs.3@localhost:41739 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) 
java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-678376863_17 at /127.0.0.1:38132 [Receiving block BP-1649377864-172.31.14.131-1689592499733:blk_1073741840_1016] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x70181e65-shared-pool-8 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=35719 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0 
sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=35719 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=35719 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: PacketResponder: BP-1649377864-172.31.14.131-1689592499733:blk_1073741840_1016, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-4 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) 
org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: HFileArchiver-5 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=35719 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RS-EventLoopGroup-7-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x70181e65-shared-pool-7 sun.misc.Unsafe.park(Native Method) 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=35719 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: hconnection-0x62be270e-shared-pool-3 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-6 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:49750@0x0eff2867-SendThread(127.0.0.1:49750) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) 
org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: HFileArchiver-7 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp268841745-645 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_84750054_17 at /127.0.0.1:55870 [Waiting for operation #2] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: HFileArchiver-8 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-8 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp268841745-643 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-3-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e-prefix:jenkins-hbase4.apache.org,35719,1689592509057 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) - Thread LEAK? 
-, OpenFileDescriptor=765 (was 681) - OpenFileDescriptor LEAK? -, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=556 (was 500) - SystemLoadAverage LEAK? -, ProcessCount=172 (was 172), AvailableMemoryMB=3859 (was 3402) - AvailableMemoryMB LEAK? - 2023-07-17 11:15:13,674 INFO [Listener at localhost/45539] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testValidGroupNames Thread=498, OpenFileDescriptor=765, MaxFileDescriptor=60000, SystemLoadAverage=556, ProcessCount=172, AvailableMemoryMB=3857 2023-07-17 11:15:13,675 INFO [Listener at localhost/45539] rsgroup.TestRSGroupsBase(132): testValidGroupNames 2023-07-17 11:15:13,680 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-17 11:15:13,680 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-17 11:15:13,681 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-17 11:15:13,681 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-17 11:15:13,681 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-17 11:15:13,682 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-17 11:15:13,683 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-17 11:15:13,684 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-17 11:15:13,688 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-17 11:15:13,689 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-17 11:15:13,691 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-17 11:15:13,695 INFO [Listener at localhost/45539] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-17 11:15:13,696 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-17 11:15:13,699 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-17 11:15:13,699 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-17 11:15:13,701 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-17 11:15:13,703 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-17 11:15:13,706 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-17 11:15:13,707 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-17 11:15:13,709 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:38451] to rsgroup master 2023-07-17 11:15:13,709 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:38451 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-17 11:15:13,710 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] ipc.CallRunner(144): callId: 174 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:36004 deadline: 1689593713709, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:38451 is either offline or it does not exist. 2023-07-17 11:15:13,710 WARN [Listener at localhost/45539] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:38451 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at 
org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:38451 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-17 11:15:13,712 INFO [Listener at localhost/45539] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-17 11:15:13,713 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-17 11:15:13,713 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-17 11:15:13,714 INFO [Listener at localhost/45539] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:35719, jenkins-hbase4.apache.org:37409, jenkins-hbase4.apache.org:39617, jenkins-hbase4.apache.org:40489], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-17 11:15:13,715 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-17 11:15:13,715 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-17 11:15:13,716 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup foo* 2023-07-17 11:15:13,716 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup name should only contain alphanumeric characters at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.checkGroupName(RSGroupInfoManagerImpl.java:932) at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.addRSGroup(RSGroupInfoManagerImpl.java:205) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.addRSGroup(RSGroupAdminServer.java:476) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.addRSGroup(RSGroupAdminEndpoint.java:258) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16203) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-17 11:15:13,716 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] ipc.CallRunner(144): callId: 180 service: MasterService methodName: ExecMasterService size: 83 connection: 172.31.14.131:36004 deadline: 1689593713716, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup name should only contain alphanumeric characters 2023-07-17 11:15:13,717 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup foo@ 2023-07-17 11:15:13,718 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup name should only contain alphanumeric characters at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.checkGroupName(RSGroupInfoManagerImpl.java:932) at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.addRSGroup(RSGroupInfoManagerImpl.java:205) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.addRSGroup(RSGroupAdminServer.java:476) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.addRSGroup(RSGroupAdminEndpoint.java:258) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16203) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-17 11:15:13,718 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] ipc.CallRunner(144): callId: 182 service: MasterService methodName: ExecMasterService size: 83 connection: 172.31.14.131:36004 deadline: 1689593713717, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup name should only contain alphanumeric characters 2023-07-17 11:15:13,719 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup - 2023-07-17 11:15:13,719 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup name should only contain alphanumeric characters at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.checkGroupName(RSGroupInfoManagerImpl.java:932) at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.addRSGroup(RSGroupInfoManagerImpl.java:205) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.addRSGroup(RSGroupAdminServer.java:476) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.addRSGroup(RSGroupAdminEndpoint.java:258) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16203) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-17 11:15:13,719 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] ipc.CallRunner(144): callId: 184 service: MasterService methodName: ExecMasterService size: 80 connection: 172.31.14.131:36004 deadline: 1689593713719, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup name should only contain alphanumeric characters 2023-07-17 11:15:13,720 INFO 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup foo_123 2023-07-17 11:15:13,723 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/foo_123 2023-07-17 11:15:13,725 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-17 11:15:13,725 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-17 11:15:13,726 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-17 11:15:13,729 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-17 11:15:13,732 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-17 11:15:13,732 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-17 11:15:13,746 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-17 11:15:13,747 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-17 11:15:13,748 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-17 11:15:13,748 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
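[Editor's note, not part of the captured log] The testValidGroupNames entries above show AddRSGroup rejecting the names foo*, foo@ and - with ConstraintException "RSGroup name should only contain alphanumeric characters", while foo_123 is accepted. A minimal standalone Java sketch of that rule, assuming the accepted character set is letters, digits and underscore as the logged outcomes suggest; the real check lives in RSGroupInfoManagerImpl.checkGroupName and may differ in detail:

    // Illustrative approximation only; not the HBase source.
    public final class GroupNameCheckSketch {
      private GroupNameCheckSketch() {}

      static void checkGroupName(String groupName) {
        // Message mirrors the ConstraintException seen in the log above.
        if (groupName == null || groupName.isEmpty()
            || !groupName.matches("[A-Za-z0-9_]+")) {
          throw new IllegalArgumentException(
              "RSGroup name should only contain alphanumeric characters: " + groupName);
        }
      }

      public static void main(String[] args) {
        for (String name : new String[] {"foo*", "foo@", "-", "foo_123"}) {
          try {
            checkGroupName(name);
            System.out.println(name + " -> accepted");
          } catch (IllegalArgumentException e) {
            System.out.println(name + " -> rejected: " + e.getMessage());
          }
        }
      }
    }

Run as-is it prints rejected for foo*, foo@ and -, and accepted for foo_123, matching the four AddRSGroup calls logged above.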
2023-07-17 11:15:13,748 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-17 11:15:13,749 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-17 11:15:13,750 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-17 11:15:13,751 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup foo_123 2023-07-17 11:15:13,755 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-17 11:15:13,756 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-17 11:15:13,756 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-17 11:15:13,758 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-17 11:15:13,759 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-17 11:15:13,759 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
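[Editor's note, not part of the captured log] The entries around this point are the per-test reset cycle that repeats throughout this log: tables and servers are moved back to the default group, the temporary group is removed, the master group is re-created, and the harness then tries to move the active master's address (jenkins-hbase4.apache.org:38451) into it. That last call fails each time with ConstraintException "Server ... is either offline or it does not exist", apparently because the master's address is not among the online region servers, and TestRSGroupsBase only reports it as "Got this on setup, FYI". A hedged sketch of that tolerant step, with RsGroupAdmin as a hypothetical stand-in for RSGroupAdminClient (only moveServers is confirmed by the stack traces above; the signature here is simplified):

    import java.util.Collections;
    import java.util.Set;

    class TolerantCleanupSketch {
      // Hypothetical stand-in; the real client is
      // org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.
      interface RsGroupAdmin {
        void moveServers(Set<String> servers, String targetGroup) throws Exception;
      }

      static void tryMoveMasterToItsGroup(RsGroupAdmin admin, String masterAddress) {
        try {
          admin.moveServers(Collections.singleton(masterAddress), "master");
        } catch (Exception e) {
          // Mirrors the log: the failure is expected when the address is not a
          // live region server, so it is reported rather than rethrown.
          System.out.println("Got this on setup, FYI: " + e.getMessage());
        }
      }
    }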
2023-07-17 11:15:13,759 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-17 11:15:13,760 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-17 11:15:13,760 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-17 11:15:13,761 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-17 11:15:13,765 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-17 11:15:13,766 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-17 11:15:13,767 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-17 11:15:13,770 INFO [Listener at localhost/45539] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-17 11:15:13,771 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-17 11:15:13,774 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-17 11:15:13,774 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-17 11:15:13,776 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-17 11:15:13,777 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(130): Registering adapter for the MetricRegistry: Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver 2023-07-17 11:15:13,777 INFO [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(134): Registering Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver Metrics about HBase MasterObservers 2023-07-17 11:15:13,778 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(130): Registering adapter for the MetricRegistry: RegionServer,sub=Coprocessor.Region.CP_org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-17 11:15:13,778 INFO [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(134): Registering RegionServer,sub=Coprocessor.Region.CP_org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint Metrics about HBase RegionObservers 2023-07-17 11:15:13,778 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(130): Registering adapter for the MetricRegistry: Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint 2023-07-17 11:15:13,778 INFO [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(134): Registering 
Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint Metrics about HBase MasterObservers 2023-07-17 11:15:13,794 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-17 11:15:13,799 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-17 11:15:13,800 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-17 11:15:13,803 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:38451] to rsgroup master 2023-07-17 11:15:13,804 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:38451 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-17 11:15:13,804 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] ipc.CallRunner(144): callId: 218 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:36004 deadline: 1689593713803, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:38451 is either offline or it does not exist. 2023-07-17 11:15:13,805 WARN [Listener at localhost/45539] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:38451 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:38451 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-17 11:15:13,808 INFO [Listener at localhost/45539] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-17 11:15:13,809 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-17 11:15:13,809 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-17 11:15:13,810 INFO [Listener at localhost/45539] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:35719, jenkins-hbase4.apache.org:37409, jenkins-hbase4.apache.org:39617, jenkins-hbase4.apache.org:40489], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-17 11:15:13,812 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-17 11:15:13,812 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-17 11:15:13,836 INFO [Listener at localhost/45539] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testValidGroupNames Thread=501 (was 498) Potentially hanging thread: hconnection-0x62be270e-shared-pool-6 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x62be270e-shared-pool-7 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x62be270e-shared-pool-5 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) - Thread LEAK? 
-, OpenFileDescriptor=765 (was 765), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=556 (was 556), ProcessCount=172 (was 172), AvailableMemoryMB=3831 (was 3857) 2023-07-17 11:15:13,836 WARN [Listener at localhost/45539] hbase.ResourceChecker(130): Thread=501 is superior to 500 2023-07-17 11:15:13,859 INFO [Listener at localhost/45539] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testFailRemoveGroup Thread=501, OpenFileDescriptor=765, MaxFileDescriptor=60000, SystemLoadAverage=556, ProcessCount=172, AvailableMemoryMB=3814 2023-07-17 11:15:13,859 WARN [Listener at localhost/45539] hbase.ResourceChecker(130): Thread=501 is superior to 500 2023-07-17 11:15:13,859 INFO [Listener at localhost/45539] rsgroup.TestRSGroupsBase(132): testFailRemoveGroup 2023-07-17 11:15:13,866 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-17 11:15:13,866 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-17 11:15:13,867 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-17 11:15:13,867 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-17 11:15:13,867 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-17 11:15:13,868 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-17 11:15:13,869 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-17 11:15:13,870 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-17 11:15:13,875 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-17 11:15:13,875 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-17 11:15:13,877 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-17 11:15:13,881 INFO [Listener at localhost/45539] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-17 11:15:13,882 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-17 11:15:13,885 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-17 11:15:13,885 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-17 11:15:13,887 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-17 11:15:13,889 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-17 11:15:13,907 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-17 11:15:13,907 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-17 11:15:13,910 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:38451] to rsgroup master 2023-07-17 11:15:13,910 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:38451 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-17 11:15:13,910 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] ipc.CallRunner(144): callId: 246 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:36004 deadline: 1689593713910, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:38451 is either offline or it does not exist. 2023-07-17 11:15:13,911 WARN [Listener at localhost/45539] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:38451 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at 
org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:38451 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-17 11:15:13,913 INFO [Listener at localhost/45539] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-17 11:15:13,914 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-17 11:15:13,914 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-17 11:15:13,914 INFO [Listener at localhost/45539] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:35719, jenkins-hbase4.apache.org:37409, jenkins-hbase4.apache.org:39617, jenkins-hbase4.apache.org:40489], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-17 11:15:13,915 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-17 11:15:13,916 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-17 11:15:13,917 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-17 11:15:13,917 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-17 11:15:13,919 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-17 11:15:13,919 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-17 11:15:13,920 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup bar 2023-07-17 11:15:13,927 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-17 11:15:13,929 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/bar 2023-07-17 11:15:13,931 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-17 11:15:13,932 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-17 11:15:13,934 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-17 11:15:13,937 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] 
rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-17 11:15:13,938 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-17 11:15:13,940 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:39617, jenkins-hbase4.apache.org:37409, jenkins-hbase4.apache.org:35719] to rsgroup bar 2023-07-17 11:15:13,943 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-17 11:15:13,943 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/bar 2023-07-17 11:15:13,944 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-17 11:15:13,944 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-17 11:15:13,946 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminServer(238): Moving server region d5111e6d7162bf03312675d4d0d3f80c, which do not belong to RSGroup bar 2023-07-17 11:15:13,947 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] procedure2.ProcedureExecutor(1029): Stored pid=72, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:rsgroup, region=d5111e6d7162bf03312675d4d0d3f80c, REOPEN/MOVE 2023-07-17 11:15:13,947 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group default, current retry=0 2023-07-17 11:15:13,949 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=72, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:rsgroup, region=d5111e6d7162bf03312675d4d0d3f80c, REOPEN/MOVE 2023-07-17 11:15:13,950 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=72 updating hbase:meta row=d5111e6d7162bf03312675d4d0d3f80c, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,39617,1689592505673 2023-07-17 11:15:13,950 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:rsgroup,,1689592508323.d5111e6d7162bf03312675d4d0d3f80c.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689592513950"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689592513950"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689592513950"}]},"ts":"1689592513950"} 2023-07-17 11:15:13,953 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=73, ppid=72, state=RUNNABLE; CloseRegionProcedure d5111e6d7162bf03312675d4d0d3f80c, server=jenkins-hbase4.apache.org,39617,1689592505673}] 2023-07-17 11:15:14,111 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close d5111e6d7162bf03312675d4d0d3f80c 2023-07-17 11:15:14,112 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing d5111e6d7162bf03312675d4d0d3f80c, disabling compactions & flushes 2023-07-17 11:15:14,112 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] 
regionserver.HRegion(1626): Closing region hbase:rsgroup,,1689592508323.d5111e6d7162bf03312675d4d0d3f80c. 2023-07-17 11:15:14,112 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1689592508323.d5111e6d7162bf03312675d4d0d3f80c. 2023-07-17 11:15:14,112 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1689592508323.d5111e6d7162bf03312675d4d0d3f80c. after waiting 0 ms 2023-07-17 11:15:14,112 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1689592508323.d5111e6d7162bf03312675d4d0d3f80c. 2023-07-17 11:15:14,114 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing d5111e6d7162bf03312675d4d0d3f80c 1/1 column families, dataSize=6.36 KB heapSize=10.50 KB 2023-07-17 11:15:14,637 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=6.36 KB at sequenceid=26 (bloomFilter=true), to=hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/data/hbase/rsgroup/d5111e6d7162bf03312675d4d0d3f80c/.tmp/m/173a1116b6394e8ab8ea31f8b07f3b49 2023-07-17 11:15:14,675 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 173a1116b6394e8ab8ea31f8b07f3b49 2023-07-17 11:15:14,690 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/data/hbase/rsgroup/d5111e6d7162bf03312675d4d0d3f80c/.tmp/m/173a1116b6394e8ab8ea31f8b07f3b49 as hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/data/hbase/rsgroup/d5111e6d7162bf03312675d4d0d3f80c/m/173a1116b6394e8ab8ea31f8b07f3b49 2023-07-17 11:15:14,698 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 173a1116b6394e8ab8ea31f8b07f3b49 2023-07-17 11:15:14,698 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/data/hbase/rsgroup/d5111e6d7162bf03312675d4d0d3f80c/m/173a1116b6394e8ab8ea31f8b07f3b49, entries=9, sequenceid=26, filesize=5.5 K 2023-07-17 11:15:14,700 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~6.36 KB/6514, heapSize ~10.48 KB/10736, currentSize=0 B/0 for d5111e6d7162bf03312675d4d0d3f80c in 587ms, sequenceid=26, compaction requested=false 2023-07-17 11:15:14,710 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/data/hbase/rsgroup/d5111e6d7162bf03312675d4d0d3f80c/recovered.edits/29.seqid, newMaxSeqId=29, maxSeqId=1 2023-07-17 11:15:14,710 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-17 11:15:14,711 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1689592508323.d5111e6d7162bf03312675d4d0d3f80c. 
2023-07-17 11:15:14,711 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for d5111e6d7162bf03312675d4d0d3f80c: 2023-07-17 11:15:14,711 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding d5111e6d7162bf03312675d4d0d3f80c move to jenkins-hbase4.apache.org,40489,1689592505619 record at close sequenceid=26 2023-07-17 11:15:14,713 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed d5111e6d7162bf03312675d4d0d3f80c 2023-07-17 11:15:14,714 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=72 updating hbase:meta row=d5111e6d7162bf03312675d4d0d3f80c, regionState=CLOSED 2023-07-17 11:15:14,714 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"hbase:rsgroup,,1689592508323.d5111e6d7162bf03312675d4d0d3f80c.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689592514714"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689592514714"}]},"ts":"1689592514714"} 2023-07-17 11:15:14,718 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=73, resume processing ppid=72 2023-07-17 11:15:14,718 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=73, ppid=72, state=SUCCESS; CloseRegionProcedure d5111e6d7162bf03312675d4d0d3f80c, server=jenkins-hbase4.apache.org,39617,1689592505673 in 764 msec 2023-07-17 11:15:14,718 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=72, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:rsgroup, region=d5111e6d7162bf03312675d4d0d3f80c, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,40489,1689592505619; forceNewPlan=false, retain=false 2023-07-17 11:15:14,869 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=72 updating hbase:meta row=d5111e6d7162bf03312675d4d0d3f80c, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,40489,1689592505619 2023-07-17 11:15:14,869 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:rsgroup,,1689592508323.d5111e6d7162bf03312675d4d0d3f80c.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689592514869"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689592514869"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689592514869"}]},"ts":"1689592514869"} 2023-07-17 11:15:14,871 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=74, ppid=72, state=RUNNABLE; OpenRegionProcedure d5111e6d7162bf03312675d4d0d3f80c, server=jenkins-hbase4.apache.org,40489,1689592505619}] 2023-07-17 11:15:14,949 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] procedure.ProcedureSyncWait(216): waitFor pid=72 2023-07-17 11:15:15,028 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:rsgroup,,1689592508323.d5111e6d7162bf03312675d4d0d3f80c. 
2023-07-17 11:15:15,028 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => d5111e6d7162bf03312675d4d0d3f80c, NAME => 'hbase:rsgroup,,1689592508323.d5111e6d7162bf03312675d4d0d3f80c.', STARTKEY => '', ENDKEY => ''} 2023-07-17 11:15:15,028 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-17 11:15:15,028 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:rsgroup,,1689592508323.d5111e6d7162bf03312675d4d0d3f80c. service=MultiRowMutationService 2023-07-17 11:15:15,028 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:rsgroup successfully. 2023-07-17 11:15:15,028 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table rsgroup d5111e6d7162bf03312675d4d0d3f80c 2023-07-17 11:15:15,028 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1689592508323.d5111e6d7162bf03312675d4d0d3f80c.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-17 11:15:15,028 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for d5111e6d7162bf03312675d4d0d3f80c 2023-07-17 11:15:15,028 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for d5111e6d7162bf03312675d4d0d3f80c 2023-07-17 11:15:15,030 INFO [StoreOpener-d5111e6d7162bf03312675d4d0d3f80c-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family m of region d5111e6d7162bf03312675d4d0d3f80c 2023-07-17 11:15:15,031 DEBUG [StoreOpener-d5111e6d7162bf03312675d4d0d3f80c-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/data/hbase/rsgroup/d5111e6d7162bf03312675d4d0d3f80c/m 2023-07-17 11:15:15,031 DEBUG [StoreOpener-d5111e6d7162bf03312675d4d0d3f80c-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/data/hbase/rsgroup/d5111e6d7162bf03312675d4d0d3f80c/m 2023-07-17 11:15:15,032 INFO [StoreOpener-d5111e6d7162bf03312675d4d0d3f80c-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 
d5111e6d7162bf03312675d4d0d3f80c columnFamilyName m 2023-07-17 11:15:15,040 INFO [StoreFileOpener-m-1] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 173a1116b6394e8ab8ea31f8b07f3b49 2023-07-17 11:15:15,040 DEBUG [StoreOpener-d5111e6d7162bf03312675d4d0d3f80c-1] regionserver.HStore(539): loaded hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/data/hbase/rsgroup/d5111e6d7162bf03312675d4d0d3f80c/m/173a1116b6394e8ab8ea31f8b07f3b49 2023-07-17 11:15:15,041 INFO [StoreOpener-d5111e6d7162bf03312675d4d0d3f80c-1] regionserver.HStore(310): Store=d5111e6d7162bf03312675d4d0d3f80c/m, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-17 11:15:15,043 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/data/hbase/rsgroup/d5111e6d7162bf03312675d4d0d3f80c 2023-07-17 11:15:15,044 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/data/hbase/rsgroup/d5111e6d7162bf03312675d4d0d3f80c 2023-07-17 11:15:15,049 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for d5111e6d7162bf03312675d4d0d3f80c 2023-07-17 11:15:15,050 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened d5111e6d7162bf03312675d4d0d3f80c; next sequenceid=30; org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy@3a009642, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-17 11:15:15,050 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for d5111e6d7162bf03312675d4d0d3f80c: 2023-07-17 11:15:15,051 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:rsgroup,,1689592508323.d5111e6d7162bf03312675d4d0d3f80c., pid=74, masterSystemTime=1689592515023 2023-07-17 11:15:15,053 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:rsgroup,,1689592508323.d5111e6d7162bf03312675d4d0d3f80c. 2023-07-17 11:15:15,054 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:rsgroup,,1689592508323.d5111e6d7162bf03312675d4d0d3f80c. 
2023-07-17 11:15:15,054 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=72 updating hbase:meta row=d5111e6d7162bf03312675d4d0d3f80c, regionState=OPEN, openSeqNum=30, regionLocation=jenkins-hbase4.apache.org,40489,1689592505619 2023-07-17 11:15:15,055 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:rsgroup,,1689592508323.d5111e6d7162bf03312675d4d0d3f80c.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689592515054"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689592515054"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689592515054"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689592515054"}]},"ts":"1689592515054"} 2023-07-17 11:15:15,060 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=74, resume processing ppid=72 2023-07-17 11:15:15,060 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=74, ppid=72, state=SUCCESS; OpenRegionProcedure d5111e6d7162bf03312675d4d0d3f80c, server=jenkins-hbase4.apache.org,40489,1689592505619 in 186 msec 2023-07-17 11:15:15,063 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=72, state=SUCCESS; TransitRegionStateProcedure table=hbase:rsgroup, region=d5111e6d7162bf03312675d4d0d3f80c, REOPEN/MOVE in 1.1140 sec 2023-07-17 11:15:15,949 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,35719,1689592509057, jenkins-hbase4.apache.org,37409,1689592505527, jenkins-hbase4.apache.org,39617,1689592505673] are moved back to default 2023-07-17 11:15:15,949 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminServer(438): Move servers done: default => bar 2023-07-17 11:15:15,949 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-17 11:15:15,960 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=39617] ipc.CallRunner(144): callId: 11 service: ClientService methodName: Scan size: 136 connection: 172.31.14.131:35758 deadline: 1689592575959, exception=org.apache.hadoop.hbase.exceptions.RegionMovedException: Region moved to: hostname=jenkins-hbase4.apache.org port=40489 startCode=1689592505619. As of locationSeqNum=26. 
2023-07-17 11:15:16,083 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-17 11:15:16,084 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-17 11:15:16,086 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=bar 2023-07-17 11:15:16,086 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-17 11:15:16,088 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'Group_testFailRemoveGroup', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-17 11:15:16,089 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] procedure2.ProcedureExecutor(1029): Stored pid=75, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=Group_testFailRemoveGroup 2023-07-17 11:15:16,091 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=75, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=Group_testFailRemoveGroup execute state=CREATE_TABLE_PRE_OPERATION 2023-07-17 11:15:16,091 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "Group_testFailRemoveGroup" procId is: 75 2023-07-17 11:15:16,092 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=39617] ipc.CallRunner(144): callId: 180 service: ClientService methodName: ExecService size: 528 connection: 172.31.14.131:35756 deadline: 1689592576092, exception=org.apache.hadoop.hbase.exceptions.RegionMovedException: Region moved to: hostname=jenkins-hbase4.apache.org port=40489 startCode=1689592505619. As of locationSeqNum=26. 
2023-07-17 11:15:16,094 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(1230): Checking to see if procedure is done pid=75 2023-07-17 11:15:16,195 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(1230): Checking to see if procedure is done pid=75 2023-07-17 11:15:16,197 DEBUG [PEWorker-3] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-17 11:15:16,198 DEBUG [PEWorker-3] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/bar 2023-07-17 11:15:16,198 DEBUG [PEWorker-3] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-17 11:15:16,199 DEBUG [PEWorker-3] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-17 11:15:16,201 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=75, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=Group_testFailRemoveGroup execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-17 11:15:16,203 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/.tmp/data/default/Group_testFailRemoveGroup/b7cb45fdcf9b3a6e217885ead8bcf3e2 2023-07-17 11:15:16,204 DEBUG [HFileArchiver-1] backup.HFileArchiver(153): Directory hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/.tmp/data/default/Group_testFailRemoveGroup/b7cb45fdcf9b3a6e217885ead8bcf3e2 empty. 2023-07-17 11:15:16,204 DEBUG [HFileArchiver-1] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/.tmp/data/default/Group_testFailRemoveGroup/b7cb45fdcf9b3a6e217885ead8bcf3e2 2023-07-17 11:15:16,204 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived Group_testFailRemoveGroup regions 2023-07-17 11:15:16,225 DEBUG [PEWorker-3] util.FSTableDescriptors(570): Wrote into hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/.tmp/data/default/Group_testFailRemoveGroup/.tabledesc/.tableinfo.0000000001 2023-07-17 11:15:16,227 INFO [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(7675): creating {ENCODED => b7cb45fdcf9b3a6e217885ead8bcf3e2, NAME => 'Group_testFailRemoveGroup,,1689592516088.b7cb45fdcf9b3a6e217885ead8bcf3e2.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='Group_testFailRemoveGroup', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/.tmp 2023-07-17 11:15:16,248 DEBUG [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(866): Instantiated Group_testFailRemoveGroup,,1689592516088.b7cb45fdcf9b3a6e217885ead8bcf3e2.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-17 11:15:16,248 DEBUG [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1604): Closing b7cb45fdcf9b3a6e217885ead8bcf3e2, disabling compactions & flushes 2023-07-17 11:15:16,248 INFO [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1626): Closing region 
Group_testFailRemoveGroup,,1689592516088.b7cb45fdcf9b3a6e217885ead8bcf3e2. 2023-07-17 11:15:16,248 DEBUG [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testFailRemoveGroup,,1689592516088.b7cb45fdcf9b3a6e217885ead8bcf3e2. 2023-07-17 11:15:16,248 DEBUG [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testFailRemoveGroup,,1689592516088.b7cb45fdcf9b3a6e217885ead8bcf3e2. after waiting 0 ms 2023-07-17 11:15:16,248 DEBUG [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testFailRemoveGroup,,1689592516088.b7cb45fdcf9b3a6e217885ead8bcf3e2. 2023-07-17 11:15:16,248 INFO [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1838): Closed Group_testFailRemoveGroup,,1689592516088.b7cb45fdcf9b3a6e217885ead8bcf3e2. 2023-07-17 11:15:16,248 DEBUG [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1558): Region close journal for b7cb45fdcf9b3a6e217885ead8bcf3e2: 2023-07-17 11:15:16,251 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=75, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=Group_testFailRemoveGroup execute state=CREATE_TABLE_ADD_TO_META 2023-07-17 11:15:16,252 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testFailRemoveGroup,,1689592516088.b7cb45fdcf9b3a6e217885ead8bcf3e2.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689592516252"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689592516252"}]},"ts":"1689592516252"} 2023-07-17 11:15:16,254 INFO [PEWorker-3] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 
2023-07-17 11:15:16,255 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=75, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=Group_testFailRemoveGroup execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-17 11:15:16,255 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testFailRemoveGroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689592516255"}]},"ts":"1689592516255"} 2023-07-17 11:15:16,258 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=Group_testFailRemoveGroup, state=ENABLING in hbase:meta 2023-07-17 11:15:16,263 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=76, ppid=75, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=b7cb45fdcf9b3a6e217885ead8bcf3e2, ASSIGN}] 2023-07-17 11:15:16,265 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=76, ppid=75, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=b7cb45fdcf9b3a6e217885ead8bcf3e2, ASSIGN 2023-07-17 11:15:16,266 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=76, ppid=75, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=b7cb45fdcf9b3a6e217885ead8bcf3e2, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,40489,1689592505619; forceNewPlan=false, retain=false 2023-07-17 11:15:16,397 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(1230): Checking to see if procedure is done pid=75 2023-07-17 11:15:16,418 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=76 updating hbase:meta row=b7cb45fdcf9b3a6e217885ead8bcf3e2, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,40489,1689592505619 2023-07-17 11:15:16,418 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testFailRemoveGroup,,1689592516088.b7cb45fdcf9b3a6e217885ead8bcf3e2.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689592516418"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689592516418"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689592516418"}]},"ts":"1689592516418"} 2023-07-17 11:15:16,423 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=77, ppid=76, state=RUNNABLE; OpenRegionProcedure b7cb45fdcf9b3a6e217885ead8bcf3e2, server=jenkins-hbase4.apache.org,40489,1689592505619}] 2023-07-17 11:15:16,581 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testFailRemoveGroup,,1689592516088.b7cb45fdcf9b3a6e217885ead8bcf3e2. 
2023-07-17 11:15:16,581 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => b7cb45fdcf9b3a6e217885ead8bcf3e2, NAME => 'Group_testFailRemoveGroup,,1689592516088.b7cb45fdcf9b3a6e217885ead8bcf3e2.', STARTKEY => '', ENDKEY => ''} 2023-07-17 11:15:16,581 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testFailRemoveGroup b7cb45fdcf9b3a6e217885ead8bcf3e2 2023-07-17 11:15:16,581 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testFailRemoveGroup,,1689592516088.b7cb45fdcf9b3a6e217885ead8bcf3e2.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-17 11:15:16,582 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for b7cb45fdcf9b3a6e217885ead8bcf3e2 2023-07-17 11:15:16,582 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for b7cb45fdcf9b3a6e217885ead8bcf3e2 2023-07-17 11:15:16,607 INFO [StoreOpener-b7cb45fdcf9b3a6e217885ead8bcf3e2-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region b7cb45fdcf9b3a6e217885ead8bcf3e2 2023-07-17 11:15:16,609 DEBUG [StoreOpener-b7cb45fdcf9b3a6e217885ead8bcf3e2-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/data/default/Group_testFailRemoveGroup/b7cb45fdcf9b3a6e217885ead8bcf3e2/f 2023-07-17 11:15:16,609 DEBUG [StoreOpener-b7cb45fdcf9b3a6e217885ead8bcf3e2-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/data/default/Group_testFailRemoveGroup/b7cb45fdcf9b3a6e217885ead8bcf3e2/f 2023-07-17 11:15:16,610 INFO [StoreOpener-b7cb45fdcf9b3a6e217885ead8bcf3e2-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region b7cb45fdcf9b3a6e217885ead8bcf3e2 columnFamilyName f 2023-07-17 11:15:16,611 INFO [StoreOpener-b7cb45fdcf9b3a6e217885ead8bcf3e2-1] regionserver.HStore(310): Store=b7cb45fdcf9b3a6e217885ead8bcf3e2/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-17 11:15:16,612 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/data/default/Group_testFailRemoveGroup/b7cb45fdcf9b3a6e217885ead8bcf3e2 2023-07-17 11:15:16,612 DEBUG 
[RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/data/default/Group_testFailRemoveGroup/b7cb45fdcf9b3a6e217885ead8bcf3e2 2023-07-17 11:15:16,616 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for b7cb45fdcf9b3a6e217885ead8bcf3e2 2023-07-17 11:15:16,619 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/data/default/Group_testFailRemoveGroup/b7cb45fdcf9b3a6e217885ead8bcf3e2/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-17 11:15:16,619 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened b7cb45fdcf9b3a6e217885ead8bcf3e2; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=12060578720, jitterRate=0.12322892248630524}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-17 11:15:16,619 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for b7cb45fdcf9b3a6e217885ead8bcf3e2: 2023-07-17 11:15:16,620 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testFailRemoveGroup,,1689592516088.b7cb45fdcf9b3a6e217885ead8bcf3e2., pid=77, masterSystemTime=1689592516575 2023-07-17 11:15:16,622 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testFailRemoveGroup,,1689592516088.b7cb45fdcf9b3a6e217885ead8bcf3e2. 2023-07-17 11:15:16,622 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testFailRemoveGroup,,1689592516088.b7cb45fdcf9b3a6e217885ead8bcf3e2. 
2023-07-17 11:15:16,623 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=76 updating hbase:meta row=b7cb45fdcf9b3a6e217885ead8bcf3e2, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,40489,1689592505619 2023-07-17 11:15:16,623 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testFailRemoveGroup,,1689592516088.b7cb45fdcf9b3a6e217885ead8bcf3e2.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689592516622"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689592516622"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689592516622"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689592516622"}]},"ts":"1689592516622"} 2023-07-17 11:15:16,631 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=77, resume processing ppid=76 2023-07-17 11:15:16,631 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=77, ppid=76, state=SUCCESS; OpenRegionProcedure b7cb45fdcf9b3a6e217885ead8bcf3e2, server=jenkins-hbase4.apache.org,40489,1689592505619 in 205 msec 2023-07-17 11:15:16,633 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=76, resume processing ppid=75 2023-07-17 11:15:16,633 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=76, ppid=75, state=SUCCESS; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=b7cb45fdcf9b3a6e217885ead8bcf3e2, ASSIGN in 368 msec 2023-07-17 11:15:16,634 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=75, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=Group_testFailRemoveGroup execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-17 11:15:16,634 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testFailRemoveGroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689592516634"}]},"ts":"1689592516634"} 2023-07-17 11:15:16,636 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=Group_testFailRemoveGroup, state=ENABLED in hbase:meta 2023-07-17 11:15:16,638 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=75, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=Group_testFailRemoveGroup execute state=CREATE_TABLE_POST_OPERATION 2023-07-17 11:15:16,640 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=75, state=SUCCESS; CreateTableProcedure table=Group_testFailRemoveGroup in 549 msec 2023-07-17 11:15:16,698 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(1230): Checking to see if procedure is done pid=75 2023-07-17 11:15:16,698 INFO [Listener at localhost/45539] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:Group_testFailRemoveGroup, procId: 75 completed 2023-07-17 11:15:16,699 DEBUG [Listener at localhost/45539] hbase.HBaseTestingUtility(3430): Waiting until all regions of table Group_testFailRemoveGroup get assigned. Timeout = 60000ms 2023-07-17 11:15:16,699 INFO [Listener at localhost/45539] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-17 11:15:16,704 INFO [Listener at localhost/45539] hbase.HBaseTestingUtility(3484): All regions for table Group_testFailRemoveGroup assigned to meta. Checking AM states. 
2023-07-17 11:15:16,704 INFO [Listener at localhost/45539] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-17 11:15:16,705 INFO [Listener at localhost/45539] hbase.HBaseTestingUtility(3504): All regions for table Group_testFailRemoveGroup assigned. 2023-07-17 11:15:16,707 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [Group_testFailRemoveGroup] to rsgroup bar 2023-07-17 11:15:16,709 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-17 11:15:16,710 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/bar 2023-07-17 11:15:16,710 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-17 11:15:16,711 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-17 11:15:16,715 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminServer(339): Moving region(s) for table Group_testFailRemoveGroup to RSGroup bar 2023-07-17 11:15:16,715 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminServer(345): Moving region b7cb45fdcf9b3a6e217885ead8bcf3e2 to RSGroup bar 2023-07-17 11:15:16,716 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-17 11:15:16,716 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-17 11:15:16,716 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-17 11:15:16,716 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-17 11:15:16,716 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] balancer.BaseLoadBalancer$Cluster(362): server 3 is on host 0 2023-07-17 11:15:16,716 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-17 11:15:16,717 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] procedure2.ProcedureExecutor(1029): Stored pid=78, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=b7cb45fdcf9b3a6e217885ead8bcf3e2, REOPEN/MOVE 2023-07-17 11:15:16,717 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group bar, current retry=0 2023-07-17 11:15:16,718 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=78, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=b7cb45fdcf9b3a6e217885ead8bcf3e2, REOPEN/MOVE 2023-07-17 11:15:16,719 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=78 updating hbase:meta row=b7cb45fdcf9b3a6e217885ead8bcf3e2, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,40489,1689592505619 2023-07-17 11:15:16,719 DEBUG [PEWorker-1] assignment.RegionStateStore(405): 
Put {"totalColumns":3,"row":"Group_testFailRemoveGroup,,1689592516088.b7cb45fdcf9b3a6e217885ead8bcf3e2.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689592516719"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689592516719"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689592516719"}]},"ts":"1689592516719"} 2023-07-17 11:15:16,721 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=79, ppid=78, state=RUNNABLE; CloseRegionProcedure b7cb45fdcf9b3a6e217885ead8bcf3e2, server=jenkins-hbase4.apache.org,40489,1689592505619}] 2023-07-17 11:15:16,874 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close b7cb45fdcf9b3a6e217885ead8bcf3e2 2023-07-17 11:15:16,875 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing b7cb45fdcf9b3a6e217885ead8bcf3e2, disabling compactions & flushes 2023-07-17 11:15:16,875 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testFailRemoveGroup,,1689592516088.b7cb45fdcf9b3a6e217885ead8bcf3e2. 2023-07-17 11:15:16,875 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testFailRemoveGroup,,1689592516088.b7cb45fdcf9b3a6e217885ead8bcf3e2. 2023-07-17 11:15:16,875 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testFailRemoveGroup,,1689592516088.b7cb45fdcf9b3a6e217885ead8bcf3e2. after waiting 0 ms 2023-07-17 11:15:16,876 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testFailRemoveGroup,,1689592516088.b7cb45fdcf9b3a6e217885ead8bcf3e2. 2023-07-17 11:15:16,881 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/data/default/Group_testFailRemoveGroup/b7cb45fdcf9b3a6e217885ead8bcf3e2/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-17 11:15:16,882 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testFailRemoveGroup,,1689592516088.b7cb45fdcf9b3a6e217885ead8bcf3e2. 
2023-07-17 11:15:16,882 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for b7cb45fdcf9b3a6e217885ead8bcf3e2: 2023-07-17 11:15:16,882 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding b7cb45fdcf9b3a6e217885ead8bcf3e2 move to jenkins-hbase4.apache.org,39617,1689592505673 record at close sequenceid=2 2023-07-17 11:15:16,884 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed b7cb45fdcf9b3a6e217885ead8bcf3e2 2023-07-17 11:15:16,885 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=78 updating hbase:meta row=b7cb45fdcf9b3a6e217885ead8bcf3e2, regionState=CLOSED 2023-07-17 11:15:16,885 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testFailRemoveGroup,,1689592516088.b7cb45fdcf9b3a6e217885ead8bcf3e2.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689592516885"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689592516885"}]},"ts":"1689592516885"} 2023-07-17 11:15:16,889 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=79, resume processing ppid=78 2023-07-17 11:15:16,889 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=79, ppid=78, state=SUCCESS; CloseRegionProcedure b7cb45fdcf9b3a6e217885ead8bcf3e2, server=jenkins-hbase4.apache.org,40489,1689592505619 in 166 msec 2023-07-17 11:15:16,890 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=78, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=b7cb45fdcf9b3a6e217885ead8bcf3e2, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,39617,1689592505673; forceNewPlan=false, retain=false 2023-07-17 11:15:17,040 INFO [jenkins-hbase4:38451] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 2023-07-17 11:15:17,041 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=78 updating hbase:meta row=b7cb45fdcf9b3a6e217885ead8bcf3e2, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,39617,1689592505673 2023-07-17 11:15:17,041 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testFailRemoveGroup,,1689592516088.b7cb45fdcf9b3a6e217885ead8bcf3e2.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689592517041"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689592517041"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689592517041"}]},"ts":"1689592517041"} 2023-07-17 11:15:17,043 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=80, ppid=78, state=RUNNABLE; OpenRegionProcedure b7cb45fdcf9b3a6e217885ead8bcf3e2, server=jenkins-hbase4.apache.org,39617,1689592505673}] 2023-07-17 11:15:17,199 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testFailRemoveGroup,,1689592516088.b7cb45fdcf9b3a6e217885ead8bcf3e2. 
2023-07-17 11:15:17,199 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => b7cb45fdcf9b3a6e217885ead8bcf3e2, NAME => 'Group_testFailRemoveGroup,,1689592516088.b7cb45fdcf9b3a6e217885ead8bcf3e2.', STARTKEY => '', ENDKEY => ''} 2023-07-17 11:15:17,199 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testFailRemoveGroup b7cb45fdcf9b3a6e217885ead8bcf3e2 2023-07-17 11:15:17,199 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testFailRemoveGroup,,1689592516088.b7cb45fdcf9b3a6e217885ead8bcf3e2.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-17 11:15:17,199 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for b7cb45fdcf9b3a6e217885ead8bcf3e2 2023-07-17 11:15:17,199 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for b7cb45fdcf9b3a6e217885ead8bcf3e2 2023-07-17 11:15:17,201 INFO [StoreOpener-b7cb45fdcf9b3a6e217885ead8bcf3e2-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region b7cb45fdcf9b3a6e217885ead8bcf3e2 2023-07-17 11:15:17,202 DEBUG [StoreOpener-b7cb45fdcf9b3a6e217885ead8bcf3e2-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/data/default/Group_testFailRemoveGroup/b7cb45fdcf9b3a6e217885ead8bcf3e2/f 2023-07-17 11:15:17,202 DEBUG [StoreOpener-b7cb45fdcf9b3a6e217885ead8bcf3e2-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/data/default/Group_testFailRemoveGroup/b7cb45fdcf9b3a6e217885ead8bcf3e2/f 2023-07-17 11:15:17,202 INFO [StoreOpener-b7cb45fdcf9b3a6e217885ead8bcf3e2-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region b7cb45fdcf9b3a6e217885ead8bcf3e2 columnFamilyName f 2023-07-17 11:15:17,203 INFO [StoreOpener-b7cb45fdcf9b3a6e217885ead8bcf3e2-1] regionserver.HStore(310): Store=b7cb45fdcf9b3a6e217885ead8bcf3e2/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-17 11:15:17,204 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/data/default/Group_testFailRemoveGroup/b7cb45fdcf9b3a6e217885ead8bcf3e2 2023-07-17 11:15:17,205 DEBUG 
[RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/data/default/Group_testFailRemoveGroup/b7cb45fdcf9b3a6e217885ead8bcf3e2 2023-07-17 11:15:17,209 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for b7cb45fdcf9b3a6e217885ead8bcf3e2 2023-07-17 11:15:17,210 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened b7cb45fdcf9b3a6e217885ead8bcf3e2; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10224072960, jitterRate=-0.04780900478363037}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-17 11:15:17,210 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for b7cb45fdcf9b3a6e217885ead8bcf3e2: 2023-07-17 11:15:17,211 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testFailRemoveGroup,,1689592516088.b7cb45fdcf9b3a6e217885ead8bcf3e2., pid=80, masterSystemTime=1689592517195 2023-07-17 11:15:17,212 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testFailRemoveGroup,,1689592516088.b7cb45fdcf9b3a6e217885ead8bcf3e2. 2023-07-17 11:15:17,212 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testFailRemoveGroup,,1689592516088.b7cb45fdcf9b3a6e217885ead8bcf3e2. 2023-07-17 11:15:17,213 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=78 updating hbase:meta row=b7cb45fdcf9b3a6e217885ead8bcf3e2, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,39617,1689592505673 2023-07-17 11:15:17,213 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testFailRemoveGroup,,1689592516088.b7cb45fdcf9b3a6e217885ead8bcf3e2.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689592517213"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689592517213"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689592517213"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689592517213"}]},"ts":"1689592517213"} 2023-07-17 11:15:17,216 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=80, resume processing ppid=78 2023-07-17 11:15:17,216 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=80, ppid=78, state=SUCCESS; OpenRegionProcedure b7cb45fdcf9b3a6e217885ead8bcf3e2, server=jenkins-hbase4.apache.org,39617,1689592505673 in 171 msec 2023-07-17 11:15:17,218 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=78, state=SUCCESS; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=b7cb45fdcf9b3a6e217885ead8bcf3e2, REOPEN/MOVE in 500 msec 2023-07-17 11:15:17,718 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] procedure.ProcedureSyncWait(216): waitFor pid=78 2023-07-17 11:15:17,719 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminServer(369): All regions from table(s) [Group_testFailRemoveGroup] moved to target group bar. 
2023-07-17 11:15:17,719 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-17 11:15:17,723 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-17 11:15:17,724 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-17 11:15:17,726 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=bar 2023-07-17 11:15:17,726 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-17 11:15:17,727 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup bar 2023-07-17 11:15:17,727 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup bar has 1 tables; you must remove these tables from the rsgroup before the rsgroup can be removed. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.removeRSGroup(RSGroupAdminServer.java:490) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.removeRSGroup(RSGroupAdminEndpoint.java:278) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16208) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-17 11:15:17,727 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] ipc.CallRunner(144): callId: 284 service: MasterService methodName: ExecMasterService size: 85 connection: 172.31.14.131:36004 deadline: 1689593717727, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup bar has 1 tables; you must remove these tables from the rsgroup before the rsgroup can be removed. 2023-07-17 11:15:17,729 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:39617, jenkins-hbase4.apache.org:37409, jenkins-hbase4.apache.org:35719] to rsgroup default 2023-07-17 11:15:17,729 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Cannot leave a RSGroup bar that contains tables without servers to host them. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:428) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-17 11:15:17,729 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] ipc.CallRunner(144): callId: 286 service: MasterService methodName: ExecMasterService size: 188 connection: 172.31.14.131:36004 deadline: 1689593717729, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Cannot leave a RSGroup bar that contains tables without servers to host them. 2023-07-17 11:15:17,731 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [Group_testFailRemoveGroup] to rsgroup default 2023-07-17 11:15:17,734 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-17 11:15:17,734 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/bar 2023-07-17 11:15:17,735 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-17 11:15:17,735 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-17 11:15:17,737 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminServer(339): Moving region(s) for table Group_testFailRemoveGroup to RSGroup default 2023-07-17 11:15:17,737 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminServer(345): Moving region b7cb45fdcf9b3a6e217885ead8bcf3e2 to RSGroup default 2023-07-17 11:15:17,738 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] procedure2.ProcedureExecutor(1029): Stored pid=81, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=b7cb45fdcf9b3a6e217885ead8bcf3e2, REOPEN/MOVE 2023-07-17 11:15:17,738 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group default, current retry=0 2023-07-17 11:15:17,739 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=81, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=b7cb45fdcf9b3a6e217885ead8bcf3e2, REOPEN/MOVE 2023-07-17 11:15:17,740 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=81 updating hbase:meta row=b7cb45fdcf9b3a6e217885ead8bcf3e2, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,39617,1689592505673 2023-07-17 11:15:17,740 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"Group_testFailRemoveGroup,,1689592516088.b7cb45fdcf9b3a6e217885ead8bcf3e2.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689592517740"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689592517740"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689592517740"}]},"ts":"1689592517740"} 2023-07-17 11:15:17,741 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=82, ppid=81, state=RUNNABLE; CloseRegionProcedure b7cb45fdcf9b3a6e217885ead8bcf3e2, server=jenkins-hbase4.apache.org,39617,1689592505673}] 2023-07-17 11:15:17,895 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close b7cb45fdcf9b3a6e217885ead8bcf3e2 2023-07-17 11:15:17,897 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing b7cb45fdcf9b3a6e217885ead8bcf3e2, disabling compactions & flushes 2023-07-17 11:15:17,897 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testFailRemoveGroup,,1689592516088.b7cb45fdcf9b3a6e217885ead8bcf3e2. 2023-07-17 11:15:17,897 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testFailRemoveGroup,,1689592516088.b7cb45fdcf9b3a6e217885ead8bcf3e2. 2023-07-17 11:15:17,897 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testFailRemoveGroup,,1689592516088.b7cb45fdcf9b3a6e217885ead8bcf3e2. after waiting 0 ms 2023-07-17 11:15:17,897 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testFailRemoveGroup,,1689592516088.b7cb45fdcf9b3a6e217885ead8bcf3e2. 2023-07-17 11:15:17,906 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/data/default/Group_testFailRemoveGroup/b7cb45fdcf9b3a6e217885ead8bcf3e2/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-17 11:15:17,907 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testFailRemoveGroup,,1689592516088.b7cb45fdcf9b3a6e217885ead8bcf3e2. 
2023-07-17 11:15:17,907 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for b7cb45fdcf9b3a6e217885ead8bcf3e2: 2023-07-17 11:15:17,907 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding b7cb45fdcf9b3a6e217885ead8bcf3e2 move to jenkins-hbase4.apache.org,40489,1689592505619 record at close sequenceid=5 2023-07-17 11:15:17,910 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed b7cb45fdcf9b3a6e217885ead8bcf3e2 2023-07-17 11:15:17,910 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=81 updating hbase:meta row=b7cb45fdcf9b3a6e217885ead8bcf3e2, regionState=CLOSED 2023-07-17 11:15:17,910 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testFailRemoveGroup,,1689592516088.b7cb45fdcf9b3a6e217885ead8bcf3e2.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689592517910"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689592517910"}]},"ts":"1689592517910"} 2023-07-17 11:15:17,914 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=82, resume processing ppid=81 2023-07-17 11:15:17,914 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=82, ppid=81, state=SUCCESS; CloseRegionProcedure b7cb45fdcf9b3a6e217885ead8bcf3e2, server=jenkins-hbase4.apache.org,39617,1689592505673 in 171 msec 2023-07-17 11:15:17,915 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=81, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=b7cb45fdcf9b3a6e217885ead8bcf3e2, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,40489,1689592505619; forceNewPlan=false, retain=false 2023-07-17 11:15:18,065 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=81 updating hbase:meta row=b7cb45fdcf9b3a6e217885ead8bcf3e2, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,40489,1689592505619 2023-07-17 11:15:18,065 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testFailRemoveGroup,,1689592516088.b7cb45fdcf9b3a6e217885ead8bcf3e2.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689592518065"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689592518065"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689592518065"}]},"ts":"1689592518065"} 2023-07-17 11:15:18,067 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=83, ppid=81, state=RUNNABLE; OpenRegionProcedure b7cb45fdcf9b3a6e217885ead8bcf3e2, server=jenkins-hbase4.apache.org,40489,1689592505619}] 2023-07-17 11:15:18,224 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testFailRemoveGroup,,1689592516088.b7cb45fdcf9b3a6e217885ead8bcf3e2. 
2023-07-17 11:15:18,224 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => b7cb45fdcf9b3a6e217885ead8bcf3e2, NAME => 'Group_testFailRemoveGroup,,1689592516088.b7cb45fdcf9b3a6e217885ead8bcf3e2.', STARTKEY => '', ENDKEY => ''} 2023-07-17 11:15:18,224 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testFailRemoveGroup b7cb45fdcf9b3a6e217885ead8bcf3e2 2023-07-17 11:15:18,225 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testFailRemoveGroup,,1689592516088.b7cb45fdcf9b3a6e217885ead8bcf3e2.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-17 11:15:18,225 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for b7cb45fdcf9b3a6e217885ead8bcf3e2 2023-07-17 11:15:18,225 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for b7cb45fdcf9b3a6e217885ead8bcf3e2 2023-07-17 11:15:18,226 INFO [StoreOpener-b7cb45fdcf9b3a6e217885ead8bcf3e2-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region b7cb45fdcf9b3a6e217885ead8bcf3e2 2023-07-17 11:15:18,227 DEBUG [StoreOpener-b7cb45fdcf9b3a6e217885ead8bcf3e2-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/data/default/Group_testFailRemoveGroup/b7cb45fdcf9b3a6e217885ead8bcf3e2/f 2023-07-17 11:15:18,227 DEBUG [StoreOpener-b7cb45fdcf9b3a6e217885ead8bcf3e2-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/data/default/Group_testFailRemoveGroup/b7cb45fdcf9b3a6e217885ead8bcf3e2/f 2023-07-17 11:15:18,228 INFO [StoreOpener-b7cb45fdcf9b3a6e217885ead8bcf3e2-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region b7cb45fdcf9b3a6e217885ead8bcf3e2 columnFamilyName f 2023-07-17 11:15:18,229 INFO [StoreOpener-b7cb45fdcf9b3a6e217885ead8bcf3e2-1] regionserver.HStore(310): Store=b7cb45fdcf9b3a6e217885ead8bcf3e2/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-17 11:15:18,230 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/data/default/Group_testFailRemoveGroup/b7cb45fdcf9b3a6e217885ead8bcf3e2 2023-07-17 11:15:18,231 DEBUG 
[RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/data/default/Group_testFailRemoveGroup/b7cb45fdcf9b3a6e217885ead8bcf3e2 2023-07-17 11:15:18,235 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for b7cb45fdcf9b3a6e217885ead8bcf3e2 2023-07-17 11:15:18,236 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened b7cb45fdcf9b3a6e217885ead8bcf3e2; next sequenceid=8; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10530742400, jitterRate=-0.01924818754196167}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-17 11:15:18,236 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for b7cb45fdcf9b3a6e217885ead8bcf3e2: 2023-07-17 11:15:18,237 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testFailRemoveGroup,,1689592516088.b7cb45fdcf9b3a6e217885ead8bcf3e2., pid=83, masterSystemTime=1689592518219 2023-07-17 11:15:18,238 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testFailRemoveGroup,,1689592516088.b7cb45fdcf9b3a6e217885ead8bcf3e2. 2023-07-17 11:15:18,238 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testFailRemoveGroup,,1689592516088.b7cb45fdcf9b3a6e217885ead8bcf3e2. 2023-07-17 11:15:18,239 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=81 updating hbase:meta row=b7cb45fdcf9b3a6e217885ead8bcf3e2, regionState=OPEN, openSeqNum=8, regionLocation=jenkins-hbase4.apache.org,40489,1689592505619 2023-07-17 11:15:18,239 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testFailRemoveGroup,,1689592516088.b7cb45fdcf9b3a6e217885ead8bcf3e2.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689592518239"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689592518239"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689592518239"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689592518239"}]},"ts":"1689592518239"} 2023-07-17 11:15:18,242 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=83, resume processing ppid=81 2023-07-17 11:15:18,242 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=83, ppid=81, state=SUCCESS; OpenRegionProcedure b7cb45fdcf9b3a6e217885ead8bcf3e2, server=jenkins-hbase4.apache.org,40489,1689592505619 in 173 msec 2023-07-17 11:15:18,247 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=81, state=SUCCESS; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=b7cb45fdcf9b3a6e217885ead8bcf3e2, REOPEN/MOVE in 505 msec 2023-07-17 11:15:18,739 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] procedure.ProcedureSyncWait(216): waitFor pid=81 2023-07-17 11:15:18,739 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminServer(369): All regions from table(s) [Group_testFailRemoveGroup] moved to target group default. 
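With pid=81 finished, the table's region is hosted by jenkins-hbase4.apache.org,40489 in the default group, while bar keeps only its three servers. A small verification sketch under the same assumed client API, using getRSGroupInfo and getRSGroupInfoOfTable as they appear in the log (the RSGroupInfo accessors and the DEFAULT_GROUP constant are assumptions about the client classes):

    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
    import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

    public class GroupMembershipCheckSketch {
      static void check(Connection conn) throws Exception {
        RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
        TableName table = TableName.valueOf("Group_testFailRemoveGroup");
        RSGroupInfo bar = rsGroupAdmin.getRSGroupInfo("bar");
        RSGroupInfo ofTable = rsGroupAdmin.getRSGroupInfoOfTable(table);
        // After the MoveTables call above, bar keeps its 3 servers but owns no tables,
        // and the table resolves to the default group again.
        if (bar.getTables().contains(table)
            || !RSGroupInfo.DEFAULT_GROUP.equals(ofTable.getName())) {
          throw new AssertionError("table still assigned to bar");
        }
      }
    }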
2023-07-17 11:15:18,739 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-17 11:15:18,744 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-17 11:15:18,744 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-17 11:15:18,747 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup bar 2023-07-17 11:15:18,748 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup bar has 3 servers; you must remove these servers from the RSGroup beforethe RSGroup can be removed. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.removeRSGroup(RSGroupAdminServer.java:496) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.removeRSGroup(RSGroupAdminEndpoint.java:278) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16208) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-17 11:15:18,748 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] ipc.CallRunner(144): callId: 293 service: MasterService methodName: ExecMasterService size: 85 connection: 172.31.14.131:36004 deadline: 1689593718747, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup bar has 3 servers; you must remove these servers from the RSGroup beforethe RSGroup can be removed. 
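The second removeRSGroup attempt still fails: the tables are gone, but bar holds three servers, so the constraint checks enforce the order moveTables, then moveServers, then removeRSGroup, which is exactly what the following entries carry out. A sketch of that cleanup order under the same assumed client API:

    import java.util.Collections;
    import java.util.Set;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.net.Address;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

    public class RemoveGroupCleanupSketch {
      static void removeBar(Connection conn, Set<Address> barServers) throws Exception {
        RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
        // 1. Empty the group of tables.
        rsGroupAdmin.moveTables(
            Collections.singleton(TableName.valueOf("Group_testFailRemoveGroup")), "default");
        // 2. Empty the group of servers (allowed now that no tables are pinned to it).
        rsGroupAdmin.moveServers(barServers, "default");
        // 3. Only an empty group can be removed without a ConstraintException.
        rsGroupAdmin.removeRSGroup("bar");
      }
    }

Here barServers stands for the three addresses listed in the log; building them with something like Address.fromParts("jenkins-hbase4.apache.org", 39617) is an assumed helper for illustration, not shown in this output.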
2023-07-17 11:15:18,749 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:39617, jenkins-hbase4.apache.org:37409, jenkins-hbase4.apache.org:35719] to rsgroup default 2023-07-17 11:15:18,752 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-17 11:15:18,752 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/bar 2023-07-17 11:15:18,753 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-17 11:15:18,753 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-17 11:15:18,756 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group bar, current retry=0 2023-07-17 11:15:18,756 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,35719,1689592509057, jenkins-hbase4.apache.org,37409,1689592505527, jenkins-hbase4.apache.org,39617,1689592505673] are moved back to bar 2023-07-17 11:15:18,756 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminServer(438): Move servers done: bar => default 2023-07-17 11:15:18,756 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-17 11:15:18,760 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-17 11:15:18,760 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-17 11:15:18,763 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup bar 2023-07-17 11:15:18,772 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-17 11:15:18,772 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-17 11:15:18,772 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-17 11:15:18,774 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-17 11:15:18,779 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-hbase.properties,hadoop-metrics2.properties 2023-07-17 11:15:18,781 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-17 
11:15:18,781 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-17 11:15:18,788 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-17 11:15:18,788 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-17 11:15:18,791 INFO [Listener at localhost/45539] client.HBaseAdmin$15(890): Started disable of Group_testFailRemoveGroup 2023-07-17 11:15:18,792 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable Group_testFailRemoveGroup 2023-07-17 11:15:18,793 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] procedure2.ProcedureExecutor(1029): Stored pid=84, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=Group_testFailRemoveGroup 2023-07-17 11:15:18,796 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(1230): Checking to see if procedure is done pid=84 2023-07-17 11:15:18,797 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testFailRemoveGroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689592518796"}]},"ts":"1689592518796"} 2023-07-17 11:15:18,798 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=Group_testFailRemoveGroup, state=DISABLING in hbase:meta 2023-07-17 11:15:18,800 INFO [PEWorker-4] procedure.DisableTableProcedure(293): Set Group_testFailRemoveGroup to state=DISABLING 2023-07-17 11:15:18,801 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=85, ppid=84, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=b7cb45fdcf9b3a6e217885ead8bcf3e2, UNASSIGN}] 2023-07-17 11:15:18,803 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=85, ppid=84, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=b7cb45fdcf9b3a6e217885ead8bcf3e2, UNASSIGN 2023-07-17 11:15:18,804 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=85 updating hbase:meta row=b7cb45fdcf9b3a6e217885ead8bcf3e2, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,40489,1689592505619 2023-07-17 11:15:18,804 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testFailRemoveGroup,,1689592516088.b7cb45fdcf9b3a6e217885ead8bcf3e2.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689592518804"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689592518804"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689592518804"}]},"ts":"1689592518804"} 2023-07-17 11:15:18,806 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=86, ppid=85, state=RUNNABLE; CloseRegionProcedure b7cb45fdcf9b3a6e217885ead8bcf3e2, server=jenkins-hbase4.apache.org,40489,1689592505619}] 2023-07-17 11:15:18,897 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(1230): Checking to see if procedure is done pid=84 2023-07-17 11:15:18,960 INFO 
[RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close b7cb45fdcf9b3a6e217885ead8bcf3e2 2023-07-17 11:15:18,962 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing b7cb45fdcf9b3a6e217885ead8bcf3e2, disabling compactions & flushes 2023-07-17 11:15:18,962 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testFailRemoveGroup,,1689592516088.b7cb45fdcf9b3a6e217885ead8bcf3e2. 2023-07-17 11:15:18,962 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testFailRemoveGroup,,1689592516088.b7cb45fdcf9b3a6e217885ead8bcf3e2. 2023-07-17 11:15:18,962 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testFailRemoveGroup,,1689592516088.b7cb45fdcf9b3a6e217885ead8bcf3e2. after waiting 0 ms 2023-07-17 11:15:18,962 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testFailRemoveGroup,,1689592516088.b7cb45fdcf9b3a6e217885ead8bcf3e2. 2023-07-17 11:15:18,975 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/data/default/Group_testFailRemoveGroup/b7cb45fdcf9b3a6e217885ead8bcf3e2/recovered.edits/10.seqid, newMaxSeqId=10, maxSeqId=7 2023-07-17 11:15:18,977 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testFailRemoveGroup,,1689592516088.b7cb45fdcf9b3a6e217885ead8bcf3e2. 2023-07-17 11:15:18,977 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for b7cb45fdcf9b3a6e217885ead8bcf3e2: 2023-07-17 11:15:18,983 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=85 updating hbase:meta row=b7cb45fdcf9b3a6e217885ead8bcf3e2, regionState=CLOSED 2023-07-17 11:15:18,983 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed b7cb45fdcf9b3a6e217885ead8bcf3e2 2023-07-17 11:15:18,983 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testFailRemoveGroup,,1689592516088.b7cb45fdcf9b3a6e217885ead8bcf3e2.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689592518983"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689592518983"}]},"ts":"1689592518983"} 2023-07-17 11:15:18,987 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=86, resume processing ppid=85 2023-07-17 11:15:18,988 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=86, ppid=85, state=SUCCESS; CloseRegionProcedure b7cb45fdcf9b3a6e217885ead8bcf3e2, server=jenkins-hbase4.apache.org,40489,1689592505619 in 179 msec 2023-07-17 11:15:18,991 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=85, resume processing ppid=84 2023-07-17 11:15:18,991 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=85, ppid=84, state=SUCCESS; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=b7cb45fdcf9b3a6e217885ead8bcf3e2, UNASSIGN in 187 msec 2023-07-17 11:15:18,992 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testFailRemoveGroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689592518992"}]},"ts":"1689592518992"} 2023-07-17 11:15:18,994 INFO 
[PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=Group_testFailRemoveGroup, state=DISABLED in hbase:meta 2023-07-17 11:15:18,998 INFO [PEWorker-2] procedure.DisableTableProcedure(305): Set Group_testFailRemoveGroup to state=DISABLED 2023-07-17 11:15:19,001 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=84, state=SUCCESS; DisableTableProcedure table=Group_testFailRemoveGroup in 208 msec 2023-07-17 11:15:19,099 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(1230): Checking to see if procedure is done pid=84 2023-07-17 11:15:19,099 INFO [Listener at localhost/45539] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:Group_testFailRemoveGroup, procId: 84 completed 2023-07-17 11:15:19,100 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.HMaster$5(2228): Client=jenkins//172.31.14.131 delete Group_testFailRemoveGroup 2023-07-17 11:15:19,101 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] procedure2.ProcedureExecutor(1029): Stored pid=87, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=Group_testFailRemoveGroup 2023-07-17 11:15:19,105 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=87, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=Group_testFailRemoveGroup 2023-07-17 11:15:19,105 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'Group_testFailRemoveGroup' from rsgroup 'default' 2023-07-17 11:15:19,106 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=87, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=Group_testFailRemoveGroup 2023-07-17 11:15:19,108 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-17 11:15:19,109 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-17 11:15:19,113 DEBUG [HFileArchiver-7] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/.tmp/data/default/Group_testFailRemoveGroup/b7cb45fdcf9b3a6e217885ead8bcf3e2 2023-07-17 11:15:19,115 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-17 11:15:19,116 DEBUG [HFileArchiver-7] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/.tmp/data/default/Group_testFailRemoveGroup/b7cb45fdcf9b3a6e217885ead8bcf3e2/f, FileablePath, hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/.tmp/data/default/Group_testFailRemoveGroup/b7cb45fdcf9b3a6e217885ead8bcf3e2/recovered.edits] 2023-07-17 11:15:19,119 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(1230): Checking to see if procedure is done pid=87 2023-07-17 11:15:19,132 DEBUG [HFileArchiver-7] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/.tmp/data/default/Group_testFailRemoveGroup/b7cb45fdcf9b3a6e217885ead8bcf3e2/recovered.edits/10.seqid to 
hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/archive/data/default/Group_testFailRemoveGroup/b7cb45fdcf9b3a6e217885ead8bcf3e2/recovered.edits/10.seqid 2023-07-17 11:15:19,132 DEBUG [HFileArchiver-7] backup.HFileArchiver(596): Deleted hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/.tmp/data/default/Group_testFailRemoveGroup/b7cb45fdcf9b3a6e217885ead8bcf3e2 2023-07-17 11:15:19,133 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived Group_testFailRemoveGroup regions 2023-07-17 11:15:19,136 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=87, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=Group_testFailRemoveGroup 2023-07-17 11:15:19,147 WARN [PEWorker-4] procedure.DeleteTableProcedure(384): Deleting some vestigial 1 rows of Group_testFailRemoveGroup from hbase:meta 2023-07-17 11:15:19,155 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(421): Removing 'Group_testFailRemoveGroup' descriptor. 2023-07-17 11:15:19,159 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=87, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=Group_testFailRemoveGroup 2023-07-17 11:15:19,159 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(411): Removing 'Group_testFailRemoveGroup' from region states. 2023-07-17 11:15:19,159 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testFailRemoveGroup,,1689592516088.b7cb45fdcf9b3a6e217885ead8bcf3e2.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689592519159"}]},"ts":"9223372036854775807"} 2023-07-17 11:15:19,161 INFO [PEWorker-4] hbase.MetaTableAccessor(1788): Deleted 1 regions from META 2023-07-17 11:15:19,162 DEBUG [PEWorker-4] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => b7cb45fdcf9b3a6e217885ead8bcf3e2, NAME => 'Group_testFailRemoveGroup,,1689592516088.b7cb45fdcf9b3a6e217885ead8bcf3e2.', STARTKEY => '', ENDKEY => ''}] 2023-07-17 11:15:19,162 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(415): Marking 'Group_testFailRemoveGroup' as deleted. 
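The DISABLE (pid=84) and DELETE (pid=87) procedures above are driven from the client through the standard Admin API; the delete archives the region directory under archive/data/default before removing the region and the table state from hbase:meta. A sketch of that client-side teardown, assuming an already-obtained Admin handle (connection setup omitted; the tableExists guard is illustrative):

    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;

    public class DropTableSketch {
      static void dropTable(Admin admin) throws Exception {
        TableName table = TableName.valueOf("Group_testFailRemoveGroup");
        if (admin.tableExists(table)) {
          admin.disableTable(table);  // DisableTableProcedure: region unassigned, state=DISABLED in hbase:meta
          admin.deleteTable(table);   // DeleteTableProcedure: HFiles archived, region and table state removed
        }
      }
    }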
2023-07-17 11:15:19,162 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testFailRemoveGroup","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689592519162"}]},"ts":"9223372036854775807"} 2023-07-17 11:15:19,164 INFO [PEWorker-4] hbase.MetaTableAccessor(1658): Deleted table Group_testFailRemoveGroup state from META 2023-07-17 11:15:19,167 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(130): Finished pid=87, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=Group_testFailRemoveGroup 2023-07-17 11:15:19,169 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=87, state=SUCCESS; DeleteTableProcedure table=Group_testFailRemoveGroup in 67 msec 2023-07-17 11:15:19,220 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(1230): Checking to see if procedure is done pid=87 2023-07-17 11:15:19,221 INFO [Listener at localhost/45539] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: default:Group_testFailRemoveGroup, procId: 87 completed 2023-07-17 11:15:19,226 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-17 11:15:19,226 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-17 11:15:19,227 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-17 11:15:19,227 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
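The remaining entries are the test's group reset between methods: empty move calls are ignored, the leftover master group is removed and re-added, and a final attempt to move jenkins-hbase4.apache.org:38451 into it is rejected as "either offline or it does not exist", 38451 being the master's RPC port rather than one of the region-server ports. A hedged sketch of that kind of reset loop over listRSGroups (the loop and the skip of the default group are illustrative; only the listed method names appear in the log):

    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
    import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

    public class GroupResetSketch {
      static void resetGroups(Connection conn) throws Exception {
        RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
        for (RSGroupInfo group : rsGroupAdmin.listRSGroups()) {
          if (RSGroupInfo.DEFAULT_GROUP.equals(group.getName())) {
            continue;                                  // never remove the default group
          }
          // Drain tables and servers back to default, then drop the now-empty group.
          rsGroupAdmin.moveTables(group.getTables(), RSGroupInfo.DEFAULT_GROUP);
          rsGroupAdmin.moveServers(group.getServers(), RSGroupInfo.DEFAULT_GROUP);
          rsGroupAdmin.removeRSGroup(group.getName());
        }
      }
    }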
2023-07-17 11:15:19,227 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-17 11:15:19,228 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-17 11:15:19,229 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-17 11:15:19,229 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-17 11:15:19,234 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-17 11:15:19,234 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-17 11:15:19,236 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-17 11:15:19,240 INFO [Listener at localhost/45539] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-17 11:15:19,241 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-17 11:15:19,244 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-17 11:15:19,244 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-17 11:15:19,247 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-17 11:15:19,253 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-17 11:15:19,258 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-17 11:15:19,259 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-17 11:15:19,268 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:38451] to rsgroup master 2023-07-17 11:15:19,268 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:38451 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-17 11:15:19,268 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] ipc.CallRunner(144): callId: 341 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:36004 deadline: 1689593719267, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:38451 is either offline or it does not exist. 2023-07-17 11:15:19,268 WARN [Listener at localhost/45539] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:38451 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:38451 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-17 11:15:19,270 INFO [Listener at localhost/45539] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-17 11:15:19,271 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-17 11:15:19,271 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-17 11:15:19,272 INFO [Listener at localhost/45539] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:35719, jenkins-hbase4.apache.org:37409, jenkins-hbase4.apache.org:39617, jenkins-hbase4.apache.org:40489], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-17 11:15:19,272 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-17 11:15:19,272 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-17 11:15:19,298 INFO [Listener at localhost/45539] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testFailRemoveGroup Thread=500 (was 501), OpenFileDescriptor=760 (was 765), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=519 (was 556), ProcessCount=172 (was 172), AvailableMemoryMB=3417 (was 3814) 2023-07-17 11:15:19,319 INFO [Listener at localhost/45539] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testMultiTableMove Thread=500, OpenFileDescriptor=760, MaxFileDescriptor=60000, SystemLoadAverage=519, ProcessCount=172, AvailableMemoryMB=3414 2023-07-17 11:15:19,319 INFO [Listener at localhost/45539] rsgroup.TestRSGroupsBase(132): testMultiTableMove 2023-07-17 11:15:19,326 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-17 11:15:19,327 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service 
request for RSGroupAdminService.ListRSGroupInfos 2023-07-17 11:15:19,329 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-17 11:15:19,329 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-17 11:15:19,329 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-17 11:15:19,330 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-17 11:15:19,330 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-17 11:15:19,331 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-17 11:15:19,337 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-17 11:15:19,338 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-17 11:15:19,340 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-17 11:15:19,344 INFO [Listener at localhost/45539] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-17 11:15:19,345 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-17 11:15:19,349 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-17 11:15:19,351 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-17 11:15:19,353 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-17 11:15:19,355 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-17 11:15:19,360 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-17 11:15:19,360 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-17 11:15:19,362 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers 
[jenkins-hbase4.apache.org:38451] to rsgroup master 2023-07-17 11:15:19,363 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:38451 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-17 11:15:19,363 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] ipc.CallRunner(144): callId: 369 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:36004 deadline: 1689593719362, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:38451 is either offline or it does not exist. 2023-07-17 11:15:19,363 WARN [Listener at localhost/45539] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:38451 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at 
org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:38451 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-17 11:15:19,369 INFO [Listener at localhost/45539] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-17 11:15:19,370 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-17 11:15:19,370 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-17 11:15:19,370 INFO [Listener at localhost/45539] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:35719, jenkins-hbase4.apache.org:37409, jenkins-hbase4.apache.org:39617, jenkins-hbase4.apache.org:40489], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-17 11:15:19,371 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-17 11:15:19,371 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-17 11:15:19,372 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-17 11:15:19,373 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-17 11:15:19,374 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup Group_testMultiTableMove_815681409 2023-07-17 11:15:19,376 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_815681409 2023-07-17 11:15:19,379 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-17 11:15:19,380 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-17 11:15:19,380 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-17 11:15:19,382 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-17 11:15:19,385 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-17 11:15:19,385 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-17 11:15:19,388 INFO 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:35719] to rsgroup Group_testMultiTableMove_815681409 2023-07-17 11:15:19,390 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-17 11:15:19,391 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_815681409 2023-07-17 11:15:19,391 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-17 11:15:19,391 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-17 11:15:19,393 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group default, current retry=0 2023-07-17 11:15:19,393 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,35719,1689592509057] are moved back to default 2023-07-17 11:15:19,393 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminServer(438): Move servers done: default => Group_testMultiTableMove_815681409 2023-07-17 11:15:19,393 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-17 11:15:19,396 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-17 11:15:19,397 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-17 11:15:19,402 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=Group_testMultiTableMove_815681409 2023-07-17 11:15:19,402 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-17 11:15:19,404 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'GrouptestMultiTableMoveA', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-17 11:15:19,405 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] procedure2.ProcedureExecutor(1029): Stored pid=88, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=GrouptestMultiTableMoveA 2023-07-17 11:15:19,408 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=88, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure 
table=GrouptestMultiTableMoveA execute state=CREATE_TABLE_PRE_OPERATION 2023-07-17 11:15:19,408 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "GrouptestMultiTableMoveA" procId is: 88 2023-07-17 11:15:19,410 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(1230): Checking to see if procedure is done pid=88 2023-07-17 11:15:19,411 DEBUG [PEWorker-3] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-17 11:15:19,415 DEBUG [PEWorker-3] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_815681409 2023-07-17 11:15:19,415 DEBUG [PEWorker-3] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-17 11:15:19,416 DEBUG [PEWorker-3] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-17 11:15:19,418 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=88, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveA execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-17 11:15:19,420 DEBUG [HFileArchiver-6] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/.tmp/data/default/GrouptestMultiTableMoveA/866363b2444242fabfa67a9edaf35f3a 2023-07-17 11:15:19,421 DEBUG [HFileArchiver-6] backup.HFileArchiver(153): Directory hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/.tmp/data/default/GrouptestMultiTableMoveA/866363b2444242fabfa67a9edaf35f3a empty. 2023-07-17 11:15:19,422 DEBUG [HFileArchiver-6] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/.tmp/data/default/GrouptestMultiTableMoveA/866363b2444242fabfa67a9edaf35f3a 2023-07-17 11:15:19,422 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived GrouptestMultiTableMoveA regions 2023-07-17 11:15:19,456 DEBUG [PEWorker-3] util.FSTableDescriptors(570): Wrote into hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/.tmp/data/default/GrouptestMultiTableMoveA/.tabledesc/.tableinfo.0000000001 2023-07-17 11:15:19,459 INFO [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(7675): creating {ENCODED => 866363b2444242fabfa67a9edaf35f3a, NAME => 'GrouptestMultiTableMoveA,,1689592519403.866363b2444242fabfa67a9edaf35f3a.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='GrouptestMultiTableMoveA', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/.tmp 2023-07-17 11:15:19,484 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(866): Instantiated GrouptestMultiTableMoveA,,1689592519403.866363b2444242fabfa67a9edaf35f3a.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-17 11:15:19,484 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1604): Closing 
866363b2444242fabfa67a9edaf35f3a, disabling compactions & flushes 2023-07-17 11:15:19,484 INFO [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1626): Closing region GrouptestMultiTableMoveA,,1689592519403.866363b2444242fabfa67a9edaf35f3a. 2023-07-17 11:15:19,484 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on GrouptestMultiTableMoveA,,1689592519403.866363b2444242fabfa67a9edaf35f3a. 2023-07-17 11:15:19,484 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1714): Acquired close lock on GrouptestMultiTableMoveA,,1689592519403.866363b2444242fabfa67a9edaf35f3a. after waiting 0 ms 2023-07-17 11:15:19,484 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1724): Updates disabled for region GrouptestMultiTableMoveA,,1689592519403.866363b2444242fabfa67a9edaf35f3a. 2023-07-17 11:15:19,484 INFO [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1838): Closed GrouptestMultiTableMoveA,,1689592519403.866363b2444242fabfa67a9edaf35f3a. 2023-07-17 11:15:19,485 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1558): Region close journal for 866363b2444242fabfa67a9edaf35f3a: 2023-07-17 11:15:19,488 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=88, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveA execute state=CREATE_TABLE_ADD_TO_META 2023-07-17 11:15:19,489 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"GrouptestMultiTableMoveA,,1689592519403.866363b2444242fabfa67a9edaf35f3a.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689592519489"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689592519489"}]},"ts":"1689592519489"} 2023-07-17 11:15:19,491 INFO [PEWorker-3] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 
2023-07-17 11:15:19,499 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=88, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveA execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-17 11:15:19,499 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveA","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689592519499"}]},"ts":"1689592519499"} 2023-07-17 11:15:19,500 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveA, state=ENABLING in hbase:meta 2023-07-17 11:15:19,505 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-17 11:15:19,506 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-17 11:15:19,506 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-17 11:15:19,506 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-17 11:15:19,506 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-17 11:15:19,506 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=89, ppid=88, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=866363b2444242fabfa67a9edaf35f3a, ASSIGN}] 2023-07-17 11:15:19,510 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=89, ppid=88, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=866363b2444242fabfa67a9edaf35f3a, ASSIGN 2023-07-17 11:15:19,511 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=89, ppid=88, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=866363b2444242fabfa67a9edaf35f3a, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,37409,1689592505527; forceNewPlan=false, retain=false 2023-07-17 11:15:19,512 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(1230): Checking to see if procedure is done pid=88 2023-07-17 11:15:19,662 INFO [jenkins-hbase4:38451] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
2023-07-17 11:15:19,663 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=89 updating hbase:meta row=866363b2444242fabfa67a9edaf35f3a, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,37409,1689592505527 2023-07-17 11:15:19,664 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveA,,1689592519403.866363b2444242fabfa67a9edaf35f3a.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689592519663"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689592519663"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689592519663"}]},"ts":"1689592519663"} 2023-07-17 11:15:19,666 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=90, ppid=89, state=RUNNABLE; OpenRegionProcedure 866363b2444242fabfa67a9edaf35f3a, server=jenkins-hbase4.apache.org,37409,1689592505527}] 2023-07-17 11:15:19,714 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(1230): Checking to see if procedure is done pid=88 2023-07-17 11:15:19,822 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open GrouptestMultiTableMoveA,,1689592519403.866363b2444242fabfa67a9edaf35f3a. 2023-07-17 11:15:19,822 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 866363b2444242fabfa67a9edaf35f3a, NAME => 'GrouptestMultiTableMoveA,,1689592519403.866363b2444242fabfa67a9edaf35f3a.', STARTKEY => '', ENDKEY => ''} 2023-07-17 11:15:19,822 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table GrouptestMultiTableMoveA 866363b2444242fabfa67a9edaf35f3a 2023-07-17 11:15:19,822 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated GrouptestMultiTableMoveA,,1689592519403.866363b2444242fabfa67a9edaf35f3a.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-17 11:15:19,823 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 866363b2444242fabfa67a9edaf35f3a 2023-07-17 11:15:19,823 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 866363b2444242fabfa67a9edaf35f3a 2023-07-17 11:15:19,824 INFO [StoreOpener-866363b2444242fabfa67a9edaf35f3a-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 866363b2444242fabfa67a9edaf35f3a 2023-07-17 11:15:19,825 DEBUG [StoreOpener-866363b2444242fabfa67a9edaf35f3a-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/data/default/GrouptestMultiTableMoveA/866363b2444242fabfa67a9edaf35f3a/f 2023-07-17 11:15:19,826 DEBUG [StoreOpener-866363b2444242fabfa67a9edaf35f3a-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/data/default/GrouptestMultiTableMoveA/866363b2444242fabfa67a9edaf35f3a/f 2023-07-17 11:15:19,826 INFO [StoreOpener-866363b2444242fabfa67a9edaf35f3a-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, 
offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 866363b2444242fabfa67a9edaf35f3a columnFamilyName f 2023-07-17 11:15:19,827 INFO [StoreOpener-866363b2444242fabfa67a9edaf35f3a-1] regionserver.HStore(310): Store=866363b2444242fabfa67a9edaf35f3a/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-17 11:15:19,828 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/data/default/GrouptestMultiTableMoveA/866363b2444242fabfa67a9edaf35f3a 2023-07-17 11:15:19,828 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/data/default/GrouptestMultiTableMoveA/866363b2444242fabfa67a9edaf35f3a 2023-07-17 11:15:19,831 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 866363b2444242fabfa67a9edaf35f3a 2023-07-17 11:15:19,834 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/data/default/GrouptestMultiTableMoveA/866363b2444242fabfa67a9edaf35f3a/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-17 11:15:19,835 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 866363b2444242fabfa67a9edaf35f3a; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11199604160, jitterRate=0.04304441809654236}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-17 11:15:19,835 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 866363b2444242fabfa67a9edaf35f3a: 2023-07-17 11:15:19,836 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for GrouptestMultiTableMoveA,,1689592519403.866363b2444242fabfa67a9edaf35f3a., pid=90, masterSystemTime=1689592519818 2023-07-17 11:15:19,837 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for GrouptestMultiTableMoveA,,1689592519403.866363b2444242fabfa67a9edaf35f3a. 2023-07-17 11:15:19,837 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened GrouptestMultiTableMoveA,,1689592519403.866363b2444242fabfa67a9edaf35f3a. 
2023-07-17 11:15:19,840 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=89 updating hbase:meta row=866363b2444242fabfa67a9edaf35f3a, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,37409,1689592505527 2023-07-17 11:15:19,840 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"GrouptestMultiTableMoveA,,1689592519403.866363b2444242fabfa67a9edaf35f3a.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689592519839"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689592519839"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689592519839"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689592519839"}]},"ts":"1689592519839"} 2023-07-17 11:15:19,848 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=90, resume processing ppid=89 2023-07-17 11:15:19,848 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=90, ppid=89, state=SUCCESS; OpenRegionProcedure 866363b2444242fabfa67a9edaf35f3a, server=jenkins-hbase4.apache.org,37409,1689592505527 in 180 msec 2023-07-17 11:15:19,850 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=89, resume processing ppid=88 2023-07-17 11:15:19,850 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=89, ppid=88, state=SUCCESS; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=866363b2444242fabfa67a9edaf35f3a, ASSIGN in 342 msec 2023-07-17 11:15:19,851 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=88, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveA execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-17 11:15:19,851 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveA","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689592519851"}]},"ts":"1689592519851"} 2023-07-17 11:15:19,852 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveA, state=ENABLED in hbase:meta 2023-07-17 11:15:19,859 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=88, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveA execute state=CREATE_TABLE_POST_OPERATION 2023-07-17 11:15:19,861 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=88, state=SUCCESS; CreateTableProcedure table=GrouptestMultiTableMoveA in 455 msec 2023-07-17 11:15:20,015 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(1230): Checking to see if procedure is done pid=88 2023-07-17 11:15:20,015 INFO [Listener at localhost/45539] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:GrouptestMultiTableMoveA, procId: 88 completed 2023-07-17 11:15:20,016 DEBUG [Listener at localhost/45539] hbase.HBaseTestingUtility(3430): Waiting until all regions of table GrouptestMultiTableMoveA get assigned. Timeout = 60000ms 2023-07-17 11:15:20,016 INFO [Listener at localhost/45539] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-17 11:15:20,020 INFO [Listener at localhost/45539] hbase.HBaseTestingUtility(3484): All regions for table GrouptestMultiTableMoveA assigned to meta. Checking AM states. 
2023-07-17 11:15:20,020 INFO [Listener at localhost/45539] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-17 11:15:20,020 INFO [Listener at localhost/45539] hbase.HBaseTestingUtility(3504): All regions for table GrouptestMultiTableMoveA assigned. 2023-07-17 11:15:20,023 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'GrouptestMultiTableMoveB', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-17 11:15:20,024 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] procedure2.ProcedureExecutor(1029): Stored pid=91, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=GrouptestMultiTableMoveB 2023-07-17 11:15:20,026 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=91, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveB execute state=CREATE_TABLE_PRE_OPERATION 2023-07-17 11:15:20,027 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "GrouptestMultiTableMoveB" procId is: 91 2023-07-17 11:15:20,028 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(1230): Checking to see if procedure is done pid=91 2023-07-17 11:15:20,029 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-17 11:15:20,030 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_815681409 2023-07-17 11:15:20,030 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-17 11:15:20,031 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-17 11:15:20,037 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=91, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveB execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-17 11:15:20,039 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/.tmp/data/default/GrouptestMultiTableMoveB/48e2a34dceda12fe317b3e0d671e10ac 2023-07-17 11:15:20,039 DEBUG [HFileArchiver-5] backup.HFileArchiver(153): Directory hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/.tmp/data/default/GrouptestMultiTableMoveB/48e2a34dceda12fe317b3e0d671e10ac empty. 
2023-07-17 11:15:20,040 DEBUG [HFileArchiver-5] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/.tmp/data/default/GrouptestMultiTableMoveB/48e2a34dceda12fe317b3e0d671e10ac 2023-07-17 11:15:20,040 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(328): Archived GrouptestMultiTableMoveB regions 2023-07-17 11:15:20,057 DEBUG [PEWorker-5] util.FSTableDescriptors(570): Wrote into hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/.tmp/data/default/GrouptestMultiTableMoveB/.tabledesc/.tableinfo.0000000001 2023-07-17 11:15:20,059 INFO [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(7675): creating {ENCODED => 48e2a34dceda12fe317b3e0d671e10ac, NAME => 'GrouptestMultiTableMoveB,,1689592520022.48e2a34dceda12fe317b3e0d671e10ac.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='GrouptestMultiTableMoveB', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/.tmp 2023-07-17 11:15:20,070 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(866): Instantiated GrouptestMultiTableMoveB,,1689592520022.48e2a34dceda12fe317b3e0d671e10ac.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-17 11:15:20,070 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1604): Closing 48e2a34dceda12fe317b3e0d671e10ac, disabling compactions & flushes 2023-07-17 11:15:20,071 INFO [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1626): Closing region GrouptestMultiTableMoveB,,1689592520022.48e2a34dceda12fe317b3e0d671e10ac. 2023-07-17 11:15:20,071 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on GrouptestMultiTableMoveB,,1689592520022.48e2a34dceda12fe317b3e0d671e10ac. 2023-07-17 11:15:20,071 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1714): Acquired close lock on GrouptestMultiTableMoveB,,1689592520022.48e2a34dceda12fe317b3e0d671e10ac. after waiting 0 ms 2023-07-17 11:15:20,071 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1724): Updates disabled for region GrouptestMultiTableMoveB,,1689592520022.48e2a34dceda12fe317b3e0d671e10ac. 2023-07-17 11:15:20,071 INFO [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1838): Closed GrouptestMultiTableMoveB,,1689592520022.48e2a34dceda12fe317b3e0d671e10ac. 
2023-07-17 11:15:20,071 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1558): Region close journal for 48e2a34dceda12fe317b3e0d671e10ac: 2023-07-17 11:15:20,073 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=91, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveB execute state=CREATE_TABLE_ADD_TO_META 2023-07-17 11:15:20,074 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"GrouptestMultiTableMoveB,,1689592520022.48e2a34dceda12fe317b3e0d671e10ac.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689592520074"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689592520074"}]},"ts":"1689592520074"} 2023-07-17 11:15:20,076 INFO [PEWorker-5] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-17 11:15:20,076 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=91, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveB execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-17 11:15:20,076 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveB","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689592520076"}]},"ts":"1689592520076"} 2023-07-17 11:15:20,077 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveB, state=ENABLING in hbase:meta 2023-07-17 11:15:20,081 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-17 11:15:20,081 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-17 11:15:20,081 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-17 11:15:20,082 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-17 11:15:20,082 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-17 11:15:20,082 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=92, ppid=91, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=48e2a34dceda12fe317b3e0d671e10ac, ASSIGN}] 2023-07-17 11:15:20,084 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=92, ppid=91, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=48e2a34dceda12fe317b3e0d671e10ac, ASSIGN 2023-07-17 11:15:20,085 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=92, ppid=91, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=48e2a34dceda12fe317b3e0d671e10ac, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,37409,1689592505527; forceNewPlan=false, retain=false 2023-07-17 11:15:20,129 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(1230): Checking to see if procedure is done pid=91 2023-07-17 11:15:20,235 INFO [jenkins-hbase4:38451] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
2023-07-17 11:15:20,237 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=92 updating hbase:meta row=48e2a34dceda12fe317b3e0d671e10ac, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,37409,1689592505527 2023-07-17 11:15:20,237 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveB,,1689592520022.48e2a34dceda12fe317b3e0d671e10ac.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689592520237"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689592520237"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689592520237"}]},"ts":"1689592520237"} 2023-07-17 11:15:20,239 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=93, ppid=92, state=RUNNABLE; OpenRegionProcedure 48e2a34dceda12fe317b3e0d671e10ac, server=jenkins-hbase4.apache.org,37409,1689592505527}] 2023-07-17 11:15:20,331 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(1230): Checking to see if procedure is done pid=91 2023-07-17 11:15:20,397 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open GrouptestMultiTableMoveB,,1689592520022.48e2a34dceda12fe317b3e0d671e10ac. 2023-07-17 11:15:20,397 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 48e2a34dceda12fe317b3e0d671e10ac, NAME => 'GrouptestMultiTableMoveB,,1689592520022.48e2a34dceda12fe317b3e0d671e10ac.', STARTKEY => '', ENDKEY => ''} 2023-07-17 11:15:20,397 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table GrouptestMultiTableMoveB 48e2a34dceda12fe317b3e0d671e10ac 2023-07-17 11:15:20,397 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated GrouptestMultiTableMoveB,,1689592520022.48e2a34dceda12fe317b3e0d671e10ac.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-17 11:15:20,397 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 48e2a34dceda12fe317b3e0d671e10ac 2023-07-17 11:15:20,397 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 48e2a34dceda12fe317b3e0d671e10ac 2023-07-17 11:15:20,399 INFO [StoreOpener-48e2a34dceda12fe317b3e0d671e10ac-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 48e2a34dceda12fe317b3e0d671e10ac 2023-07-17 11:15:20,401 DEBUG [StoreOpener-48e2a34dceda12fe317b3e0d671e10ac-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/data/default/GrouptestMultiTableMoveB/48e2a34dceda12fe317b3e0d671e10ac/f 2023-07-17 11:15:20,401 DEBUG [StoreOpener-48e2a34dceda12fe317b3e0d671e10ac-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/data/default/GrouptestMultiTableMoveB/48e2a34dceda12fe317b3e0d671e10ac/f 2023-07-17 11:15:20,402 INFO [StoreOpener-48e2a34dceda12fe317b3e0d671e10ac-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, 
offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 48e2a34dceda12fe317b3e0d671e10ac columnFamilyName f 2023-07-17 11:15:20,402 INFO [StoreOpener-48e2a34dceda12fe317b3e0d671e10ac-1] regionserver.HStore(310): Store=48e2a34dceda12fe317b3e0d671e10ac/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-17 11:15:20,403 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/data/default/GrouptestMultiTableMoveB/48e2a34dceda12fe317b3e0d671e10ac 2023-07-17 11:15:20,404 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/data/default/GrouptestMultiTableMoveB/48e2a34dceda12fe317b3e0d671e10ac 2023-07-17 11:15:20,407 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 48e2a34dceda12fe317b3e0d671e10ac 2023-07-17 11:15:20,409 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/data/default/GrouptestMultiTableMoveB/48e2a34dceda12fe317b3e0d671e10ac/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-17 11:15:20,410 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 48e2a34dceda12fe317b3e0d671e10ac; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9667120000, jitterRate=-0.09967929124832153}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-17 11:15:20,410 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 48e2a34dceda12fe317b3e0d671e10ac: 2023-07-17 11:15:20,411 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for GrouptestMultiTableMoveB,,1689592520022.48e2a34dceda12fe317b3e0d671e10ac., pid=93, masterSystemTime=1689592520392 2023-07-17 11:15:20,413 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for GrouptestMultiTableMoveB,,1689592520022.48e2a34dceda12fe317b3e0d671e10ac. 2023-07-17 11:15:20,413 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened GrouptestMultiTableMoveB,,1689592520022.48e2a34dceda12fe317b3e0d671e10ac. 
2023-07-17 11:15:20,413 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=92 updating hbase:meta row=48e2a34dceda12fe317b3e0d671e10ac, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,37409,1689592505527 2023-07-17 11:15:20,414 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"GrouptestMultiTableMoveB,,1689592520022.48e2a34dceda12fe317b3e0d671e10ac.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689592520413"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689592520413"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689592520413"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689592520413"}]},"ts":"1689592520413"} 2023-07-17 11:15:20,417 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=93, resume processing ppid=92 2023-07-17 11:15:20,417 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=93, ppid=92, state=SUCCESS; OpenRegionProcedure 48e2a34dceda12fe317b3e0d671e10ac, server=jenkins-hbase4.apache.org,37409,1689592505527 in 176 msec 2023-07-17 11:15:20,420 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=92, resume processing ppid=91 2023-07-17 11:15:20,420 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=92, ppid=91, state=SUCCESS; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=48e2a34dceda12fe317b3e0d671e10ac, ASSIGN in 335 msec 2023-07-17 11:15:20,420 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=91, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveB execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-17 11:15:20,421 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveB","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689592520420"}]},"ts":"1689592520420"} 2023-07-17 11:15:20,422 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveB, state=ENABLED in hbase:meta 2023-07-17 11:15:20,425 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=91, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveB execute state=CREATE_TABLE_POST_OPERATION 2023-07-17 11:15:20,427 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=91, state=SUCCESS; CreateTableProcedure table=GrouptestMultiTableMoveB in 403 msec 2023-07-17 11:15:20,632 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(1230): Checking to see if procedure is done pid=91 2023-07-17 11:15:20,632 INFO [Listener at localhost/45539] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:GrouptestMultiTableMoveB, procId: 91 completed 2023-07-17 11:15:20,632 DEBUG [Listener at localhost/45539] hbase.HBaseTestingUtility(3430): Waiting until all regions of table GrouptestMultiTableMoveB get assigned. Timeout = 60000ms 2023-07-17 11:15:20,632 INFO [Listener at localhost/45539] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-17 11:15:20,637 INFO [Listener at localhost/45539] hbase.HBaseTestingUtility(3484): All regions for table GrouptestMultiTableMoveB assigned to meta. Checking AM states. 
2023-07-17 11:15:20,637 INFO [Listener at localhost/45539] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-17 11:15:20,637 INFO [Listener at localhost/45539] hbase.HBaseTestingUtility(3504): All regions for table GrouptestMultiTableMoveB assigned. 2023-07-17 11:15:20,638 INFO [Listener at localhost/45539] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-17 11:15:20,657 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=GrouptestMultiTableMoveA 2023-07-17 11:15:20,657 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-17 11:15:20,658 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=GrouptestMultiTableMoveB 2023-07-17 11:15:20,658 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-17 11:15:20,659 INFO [Listener at localhost/45539] rsgroup.TestRSGroupsAdmin1(262): Moving table [GrouptestMultiTableMoveA,GrouptestMultiTableMoveB] to Group_testMultiTableMove_815681409 2023-07-17 11:15:20,661 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [GrouptestMultiTableMoveB, GrouptestMultiTableMoveA] to rsgroup Group_testMultiTableMove_815681409 2023-07-17 11:15:20,664 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-17 11:15:20,664 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_815681409 2023-07-17 11:15:20,665 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-17 11:15:20,666 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-17 11:15:20,667 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminServer(339): Moving region(s) for table GrouptestMultiTableMoveB to RSGroup Group_testMultiTableMove_815681409 2023-07-17 11:15:20,668 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminServer(345): Moving region 48e2a34dceda12fe317b3e0d671e10ac to RSGroup Group_testMultiTableMove_815681409 2023-07-17 11:15:20,668 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] procedure2.ProcedureExecutor(1029): Stored pid=94, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=48e2a34dceda12fe317b3e0d671e10ac, REOPEN/MOVE 2023-07-17 11:15:20,669 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminServer(339): Moving region(s) for table GrouptestMultiTableMoveA to RSGroup Group_testMultiTableMove_815681409 2023-07-17 11:15:20,670 INFO [PEWorker-4] 
procedure.MasterProcedureScheduler(727): Took xlock for pid=94, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=48e2a34dceda12fe317b3e0d671e10ac, REOPEN/MOVE 2023-07-17 11:15:20,670 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminServer(345): Moving region 866363b2444242fabfa67a9edaf35f3a to RSGroup Group_testMultiTableMove_815681409 2023-07-17 11:15:20,671 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] procedure2.ProcedureExecutor(1029): Stored pid=95, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=866363b2444242fabfa67a9edaf35f3a, REOPEN/MOVE 2023-07-17 11:15:20,671 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=94 updating hbase:meta row=48e2a34dceda12fe317b3e0d671e10ac, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,37409,1689592505527 2023-07-17 11:15:20,671 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminServer(286): Moving 2 region(s) to group Group_testMultiTableMove_815681409, current retry=0 2023-07-17 11:15:20,672 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=95, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=866363b2444242fabfa67a9edaf35f3a, REOPEN/MOVE 2023-07-17 11:15:20,672 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveB,,1689592520022.48e2a34dceda12fe317b3e0d671e10ac.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689592520671"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689592520671"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689592520671"}]},"ts":"1689592520671"} 2023-07-17 11:15:20,673 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=95 updating hbase:meta row=866363b2444242fabfa67a9edaf35f3a, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,37409,1689592505527 2023-07-17 11:15:20,673 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveA,,1689592519403.866363b2444242fabfa67a9edaf35f3a.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689592520673"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689592520673"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689592520673"}]},"ts":"1689592520673"} 2023-07-17 11:15:20,676 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=96, ppid=94, state=RUNNABLE; CloseRegionProcedure 48e2a34dceda12fe317b3e0d671e10ac, server=jenkins-hbase4.apache.org,37409,1689592505527}] 2023-07-17 11:15:20,676 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=97, ppid=95, state=RUNNABLE; CloseRegionProcedure 866363b2444242fabfa67a9edaf35f3a, server=jenkins-hbase4.apache.org,37409,1689592505527}] 2023-07-17 11:15:20,829 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 48e2a34dceda12fe317b3e0d671e10ac 2023-07-17 11:15:20,832 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 48e2a34dceda12fe317b3e0d671e10ac, disabling compactions & flushes 2023-07-17 11:15:20,832 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region GrouptestMultiTableMoveB,,1689592520022.48e2a34dceda12fe317b3e0d671e10ac. 
2023-07-17 11:15:20,832 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on GrouptestMultiTableMoveB,,1689592520022.48e2a34dceda12fe317b3e0d671e10ac. 2023-07-17 11:15:20,832 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on GrouptestMultiTableMoveB,,1689592520022.48e2a34dceda12fe317b3e0d671e10ac. after waiting 0 ms 2023-07-17 11:15:20,832 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region GrouptestMultiTableMoveB,,1689592520022.48e2a34dceda12fe317b3e0d671e10ac. 2023-07-17 11:15:20,837 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/data/default/GrouptestMultiTableMoveB/48e2a34dceda12fe317b3e0d671e10ac/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-17 11:15:20,838 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed GrouptestMultiTableMoveB,,1689592520022.48e2a34dceda12fe317b3e0d671e10ac. 2023-07-17 11:15:20,838 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 48e2a34dceda12fe317b3e0d671e10ac: 2023-07-17 11:15:20,838 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 48e2a34dceda12fe317b3e0d671e10ac move to jenkins-hbase4.apache.org,35719,1689592509057 record at close sequenceid=2 2023-07-17 11:15:20,840 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 48e2a34dceda12fe317b3e0d671e10ac 2023-07-17 11:15:20,840 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 866363b2444242fabfa67a9edaf35f3a 2023-07-17 11:15:20,840 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 866363b2444242fabfa67a9edaf35f3a, disabling compactions & flushes 2023-07-17 11:15:20,840 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region GrouptestMultiTableMoveA,,1689592519403.866363b2444242fabfa67a9edaf35f3a. 2023-07-17 11:15:20,840 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on GrouptestMultiTableMoveA,,1689592519403.866363b2444242fabfa67a9edaf35f3a. 2023-07-17 11:15:20,840 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on GrouptestMultiTableMoveA,,1689592519403.866363b2444242fabfa67a9edaf35f3a. after waiting 0 ms 2023-07-17 11:15:20,840 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region GrouptestMultiTableMoveA,,1689592519403.866363b2444242fabfa67a9edaf35f3a. 
2023-07-17 11:15:20,841 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=94 updating hbase:meta row=48e2a34dceda12fe317b3e0d671e10ac, regionState=CLOSED 2023-07-17 11:15:20,841 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"GrouptestMultiTableMoveB,,1689592520022.48e2a34dceda12fe317b3e0d671e10ac.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689592520841"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689592520841"}]},"ts":"1689592520841"} 2023-07-17 11:15:20,845 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/data/default/GrouptestMultiTableMoveA/866363b2444242fabfa67a9edaf35f3a/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-17 11:15:20,846 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed GrouptestMultiTableMoveA,,1689592519403.866363b2444242fabfa67a9edaf35f3a. 2023-07-17 11:15:20,846 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 866363b2444242fabfa67a9edaf35f3a: 2023-07-17 11:15:20,846 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 866363b2444242fabfa67a9edaf35f3a move to jenkins-hbase4.apache.org,35719,1689592509057 record at close sequenceid=2 2023-07-17 11:15:20,848 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 866363b2444242fabfa67a9edaf35f3a 2023-07-17 11:15:20,849 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=95 updating hbase:meta row=866363b2444242fabfa67a9edaf35f3a, regionState=CLOSED 2023-07-17 11:15:20,849 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"GrouptestMultiTableMoveA,,1689592519403.866363b2444242fabfa67a9edaf35f3a.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689592520848"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689592520848"}]},"ts":"1689592520848"} 2023-07-17 11:15:20,852 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=96, resume processing ppid=94 2023-07-17 11:15:20,852 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=96, ppid=94, state=SUCCESS; CloseRegionProcedure 48e2a34dceda12fe317b3e0d671e10ac, server=jenkins-hbase4.apache.org,37409,1689592505527 in 175 msec 2023-07-17 11:15:20,853 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=94, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=48e2a34dceda12fe317b3e0d671e10ac, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,35719,1689592509057; forceNewPlan=false, retain=false 2023-07-17 11:15:20,854 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=97, resume processing ppid=95 2023-07-17 11:15:20,854 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=97, ppid=95, state=SUCCESS; CloseRegionProcedure 866363b2444242fabfa67a9edaf35f3a, server=jenkins-hbase4.apache.org,37409,1689592505527 in 175 msec 2023-07-17 11:15:20,855 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=95, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=866363b2444242fabfa67a9edaf35f3a, REOPEN/MOVE; 
state=CLOSED, location=jenkins-hbase4.apache.org,35719,1689592509057; forceNewPlan=false, retain=false 2023-07-17 11:15:21,004 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=94 updating hbase:meta row=48e2a34dceda12fe317b3e0d671e10ac, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,35719,1689592509057 2023-07-17 11:15:21,004 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=95 updating hbase:meta row=866363b2444242fabfa67a9edaf35f3a, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,35719,1689592509057 2023-07-17 11:15:21,005 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveB,,1689592520022.48e2a34dceda12fe317b3e0d671e10ac.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689592521004"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689592521004"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689592521004"}]},"ts":"1689592521004"} 2023-07-17 11:15:21,005 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveA,,1689592519403.866363b2444242fabfa67a9edaf35f3a.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689592521004"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689592521004"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689592521004"}]},"ts":"1689592521004"} 2023-07-17 11:15:21,007 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=98, ppid=94, state=RUNNABLE; OpenRegionProcedure 48e2a34dceda12fe317b3e0d671e10ac, server=jenkins-hbase4.apache.org,35719,1689592509057}] 2023-07-17 11:15:21,008 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=99, ppid=95, state=RUNNABLE; OpenRegionProcedure 866363b2444242fabfa67a9edaf35f3a, server=jenkins-hbase4.apache.org,35719,1689592509057}] 2023-07-17 11:15:21,166 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open GrouptestMultiTableMoveA,,1689592519403.866363b2444242fabfa67a9edaf35f3a. 
2023-07-17 11:15:21,166 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 866363b2444242fabfa67a9edaf35f3a, NAME => 'GrouptestMultiTableMoveA,,1689592519403.866363b2444242fabfa67a9edaf35f3a.', STARTKEY => '', ENDKEY => ''} 2023-07-17 11:15:21,167 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table GrouptestMultiTableMoveA 866363b2444242fabfa67a9edaf35f3a 2023-07-17 11:15:21,167 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated GrouptestMultiTableMoveA,,1689592519403.866363b2444242fabfa67a9edaf35f3a.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-17 11:15:21,167 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 866363b2444242fabfa67a9edaf35f3a 2023-07-17 11:15:21,167 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 866363b2444242fabfa67a9edaf35f3a 2023-07-17 11:15:21,169 INFO [StoreOpener-866363b2444242fabfa67a9edaf35f3a-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 866363b2444242fabfa67a9edaf35f3a 2023-07-17 11:15:21,170 DEBUG [StoreOpener-866363b2444242fabfa67a9edaf35f3a-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/data/default/GrouptestMultiTableMoveA/866363b2444242fabfa67a9edaf35f3a/f 2023-07-17 11:15:21,171 DEBUG [StoreOpener-866363b2444242fabfa67a9edaf35f3a-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/data/default/GrouptestMultiTableMoveA/866363b2444242fabfa67a9edaf35f3a/f 2023-07-17 11:15:21,171 INFO [StoreOpener-866363b2444242fabfa67a9edaf35f3a-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 866363b2444242fabfa67a9edaf35f3a columnFamilyName f 2023-07-17 11:15:21,172 INFO [StoreOpener-866363b2444242fabfa67a9edaf35f3a-1] regionserver.HStore(310): Store=866363b2444242fabfa67a9edaf35f3a/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-17 11:15:21,173 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/data/default/GrouptestMultiTableMoveA/866363b2444242fabfa67a9edaf35f3a 2023-07-17 11:15:21,174 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] 
regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/data/default/GrouptestMultiTableMoveA/866363b2444242fabfa67a9edaf35f3a 2023-07-17 11:15:21,177 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 866363b2444242fabfa67a9edaf35f3a 2023-07-17 11:15:21,178 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 866363b2444242fabfa67a9edaf35f3a; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10480500320, jitterRate=-0.023927345871925354}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-17 11:15:21,178 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 866363b2444242fabfa67a9edaf35f3a: 2023-07-17 11:15:21,179 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for GrouptestMultiTableMoveA,,1689592519403.866363b2444242fabfa67a9edaf35f3a., pid=99, masterSystemTime=1689592521162 2023-07-17 11:15:21,181 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for GrouptestMultiTableMoveA,,1689592519403.866363b2444242fabfa67a9edaf35f3a. 2023-07-17 11:15:21,181 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened GrouptestMultiTableMoveA,,1689592519403.866363b2444242fabfa67a9edaf35f3a. 2023-07-17 11:15:21,181 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open GrouptestMultiTableMoveB,,1689592520022.48e2a34dceda12fe317b3e0d671e10ac. 
2023-07-17 11:15:21,181 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 48e2a34dceda12fe317b3e0d671e10ac, NAME => 'GrouptestMultiTableMoveB,,1689592520022.48e2a34dceda12fe317b3e0d671e10ac.', STARTKEY => '', ENDKEY => ''} 2023-07-17 11:15:21,181 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=95 updating hbase:meta row=866363b2444242fabfa67a9edaf35f3a, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,35719,1689592509057 2023-07-17 11:15:21,181 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table GrouptestMultiTableMoveB 48e2a34dceda12fe317b3e0d671e10ac 2023-07-17 11:15:21,181 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"GrouptestMultiTableMoveA,,1689592519403.866363b2444242fabfa67a9edaf35f3a.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689592521181"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689592521181"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689592521181"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689592521181"}]},"ts":"1689592521181"} 2023-07-17 11:15:21,181 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated GrouptestMultiTableMoveB,,1689592520022.48e2a34dceda12fe317b3e0d671e10ac.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-17 11:15:21,182 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 48e2a34dceda12fe317b3e0d671e10ac 2023-07-17 11:15:21,182 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 48e2a34dceda12fe317b3e0d671e10ac 2023-07-17 11:15:21,183 INFO [StoreOpener-48e2a34dceda12fe317b3e0d671e10ac-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 48e2a34dceda12fe317b3e0d671e10ac 2023-07-17 11:15:21,184 DEBUG [StoreOpener-48e2a34dceda12fe317b3e0d671e10ac-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/data/default/GrouptestMultiTableMoveB/48e2a34dceda12fe317b3e0d671e10ac/f 2023-07-17 11:15:21,184 DEBUG [StoreOpener-48e2a34dceda12fe317b3e0d671e10ac-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/data/default/GrouptestMultiTableMoveB/48e2a34dceda12fe317b3e0d671e10ac/f 2023-07-17 11:15:21,185 INFO [StoreOpener-48e2a34dceda12fe317b3e0d671e10ac-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory 
org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 48e2a34dceda12fe317b3e0d671e10ac columnFamilyName f 2023-07-17 11:15:21,185 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=99, resume processing ppid=95 2023-07-17 11:15:21,185 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=99, ppid=95, state=SUCCESS; OpenRegionProcedure 866363b2444242fabfa67a9edaf35f3a, server=jenkins-hbase4.apache.org,35719,1689592509057 in 175 msec 2023-07-17 11:15:21,186 INFO [StoreOpener-48e2a34dceda12fe317b3e0d671e10ac-1] regionserver.HStore(310): Store=48e2a34dceda12fe317b3e0d671e10ac/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-17 11:15:21,187 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/data/default/GrouptestMultiTableMoveB/48e2a34dceda12fe317b3e0d671e10ac 2023-07-17 11:15:21,187 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=95, state=SUCCESS; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=866363b2444242fabfa67a9edaf35f3a, REOPEN/MOVE in 515 msec 2023-07-17 11:15:21,188 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/data/default/GrouptestMultiTableMoveB/48e2a34dceda12fe317b3e0d671e10ac 2023-07-17 11:15:21,191 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 48e2a34dceda12fe317b3e0d671e10ac 2023-07-17 11:15:21,192 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 48e2a34dceda12fe317b3e0d671e10ac; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11803199520, jitterRate=0.09925861656665802}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-17 11:15:21,192 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 48e2a34dceda12fe317b3e0d671e10ac: 2023-07-17 11:15:21,193 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for GrouptestMultiTableMoveB,,1689592520022.48e2a34dceda12fe317b3e0d671e10ac., pid=98, masterSystemTime=1689592521162 2023-07-17 11:15:21,194 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for GrouptestMultiTableMoveB,,1689592520022.48e2a34dceda12fe317b3e0d671e10ac. 2023-07-17 11:15:21,194 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened GrouptestMultiTableMoveB,,1689592520022.48e2a34dceda12fe317b3e0d671e10ac. 
2023-07-17 11:15:21,194 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=94 updating hbase:meta row=48e2a34dceda12fe317b3e0d671e10ac, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,35719,1689592509057 2023-07-17 11:15:21,195 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"GrouptestMultiTableMoveB,,1689592520022.48e2a34dceda12fe317b3e0d671e10ac.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689592521194"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689592521194"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689592521194"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689592521194"}]},"ts":"1689592521194"} 2023-07-17 11:15:21,198 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=98, resume processing ppid=94 2023-07-17 11:15:21,198 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=98, ppid=94, state=SUCCESS; OpenRegionProcedure 48e2a34dceda12fe317b3e0d671e10ac, server=jenkins-hbase4.apache.org,35719,1689592509057 in 189 msec 2023-07-17 11:15:21,199 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=94, state=SUCCESS; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=48e2a34dceda12fe317b3e0d671e10ac, REOPEN/MOVE in 530 msec 2023-07-17 11:15:21,672 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] procedure.ProcedureSyncWait(216): waitFor pid=94 2023-07-17 11:15:21,672 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminServer(369): All regions from table(s) [GrouptestMultiTableMoveB, GrouptestMultiTableMoveA] moved to target group Group_testMultiTableMove_815681409. 2023-07-17 11:15:21,672 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-17 11:15:21,675 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-17 11:15:21,676 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-17 11:15:21,678 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=GrouptestMultiTableMoveA 2023-07-17 11:15:21,678 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-17 11:15:21,679 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=GrouptestMultiTableMoveB 2023-07-17 11:15:21,679 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-17 11:15:21,680 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] 
rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-17 11:15:21,680 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-17 11:15:21,680 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=Group_testMultiTableMove_815681409 2023-07-17 11:15:21,681 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-17 11:15:21,682 INFO [Listener at localhost/45539] client.HBaseAdmin$15(890): Started disable of GrouptestMultiTableMoveA 2023-07-17 11:15:21,683 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable GrouptestMultiTableMoveA 2023-07-17 11:15:21,683 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] procedure2.ProcedureExecutor(1029): Stored pid=100, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=GrouptestMultiTableMoveA 2023-07-17 11:15:21,686 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(1230): Checking to see if procedure is done pid=100 2023-07-17 11:15:21,686 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveA","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689592521686"}]},"ts":"1689592521686"} 2023-07-17 11:15:21,687 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveA, state=DISABLING in hbase:meta 2023-07-17 11:15:21,689 INFO [PEWorker-5] procedure.DisableTableProcedure(293): Set GrouptestMultiTableMoveA to state=DISABLING 2023-07-17 11:15:21,690 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=101, ppid=100, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=866363b2444242fabfa67a9edaf35f3a, UNASSIGN}] 2023-07-17 11:15:21,691 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=101, ppid=100, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=866363b2444242fabfa67a9edaf35f3a, UNASSIGN 2023-07-17 11:15:21,692 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=101 updating hbase:meta row=866363b2444242fabfa67a9edaf35f3a, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,35719,1689592509057 2023-07-17 11:15:21,692 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveA,,1689592519403.866363b2444242fabfa67a9edaf35f3a.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689592521692"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689592521692"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689592521692"}]},"ts":"1689592521692"} 2023-07-17 11:15:21,693 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=102, ppid=101, state=RUNNABLE; CloseRegionProcedure 866363b2444242fabfa67a9edaf35f3a, 
server=jenkins-hbase4.apache.org,35719,1689592509057}] 2023-07-17 11:15:21,787 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(1230): Checking to see if procedure is done pid=100 2023-07-17 11:15:21,845 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 866363b2444242fabfa67a9edaf35f3a 2023-07-17 11:15:21,846 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 866363b2444242fabfa67a9edaf35f3a, disabling compactions & flushes 2023-07-17 11:15:21,846 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region GrouptestMultiTableMoveA,,1689592519403.866363b2444242fabfa67a9edaf35f3a. 2023-07-17 11:15:21,846 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on GrouptestMultiTableMoveA,,1689592519403.866363b2444242fabfa67a9edaf35f3a. 2023-07-17 11:15:21,846 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on GrouptestMultiTableMoveA,,1689592519403.866363b2444242fabfa67a9edaf35f3a. after waiting 0 ms 2023-07-17 11:15:21,846 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region GrouptestMultiTableMoveA,,1689592519403.866363b2444242fabfa67a9edaf35f3a. 2023-07-17 11:15:21,852 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/data/default/GrouptestMultiTableMoveA/866363b2444242fabfa67a9edaf35f3a/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-17 11:15:21,853 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed GrouptestMultiTableMoveA,,1689592519403.866363b2444242fabfa67a9edaf35f3a. 
2023-07-17 11:15:21,853 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 866363b2444242fabfa67a9edaf35f3a: 2023-07-17 11:15:21,855 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 866363b2444242fabfa67a9edaf35f3a 2023-07-17 11:15:21,856 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=101 updating hbase:meta row=866363b2444242fabfa67a9edaf35f3a, regionState=CLOSED 2023-07-17 11:15:21,856 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"GrouptestMultiTableMoveA,,1689592519403.866363b2444242fabfa67a9edaf35f3a.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689592521856"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689592521856"}]},"ts":"1689592521856"} 2023-07-17 11:15:21,860 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=102, resume processing ppid=101 2023-07-17 11:15:21,860 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=102, ppid=101, state=SUCCESS; CloseRegionProcedure 866363b2444242fabfa67a9edaf35f3a, server=jenkins-hbase4.apache.org,35719,1689592509057 in 165 msec 2023-07-17 11:15:21,863 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=101, resume processing ppid=100 2023-07-17 11:15:21,863 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=101, ppid=100, state=SUCCESS; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=866363b2444242fabfa67a9edaf35f3a, UNASSIGN in 170 msec 2023-07-17 11:15:21,864 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveA","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689592521864"}]},"ts":"1689592521864"} 2023-07-17 11:15:21,866 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveA, state=DISABLED in hbase:meta 2023-07-17 11:15:21,868 INFO [PEWorker-5] procedure.DisableTableProcedure(305): Set GrouptestMultiTableMoveA to state=DISABLED 2023-07-17 11:15:21,871 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=100, state=SUCCESS; DisableTableProcedure table=GrouptestMultiTableMoveA in 186 msec 2023-07-17 11:15:21,988 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(1230): Checking to see if procedure is done pid=100 2023-07-17 11:15:21,989 INFO [Listener at localhost/45539] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:GrouptestMultiTableMoveA, procId: 100 completed 2023-07-17 11:15:21,990 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.HMaster$5(2228): Client=jenkins//172.31.14.131 delete GrouptestMultiTableMoveA 2023-07-17 11:15:21,993 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] procedure2.ProcedureExecutor(1029): Stored pid=103, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=GrouptestMultiTableMoveA 2023-07-17 11:15:21,996 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=103, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveA 2023-07-17 11:15:21,996 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'GrouptestMultiTableMoveA' from rsgroup 'Group_testMultiTableMove_815681409' 2023-07-17 11:15:21,997 DEBUG [PEWorker-1] 
procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=103, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveA 2023-07-17 11:15:21,999 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-17 11:15:21,999 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_815681409 2023-07-17 11:15:22,000 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-17 11:15:22,002 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/.tmp/data/default/GrouptestMultiTableMoveA/866363b2444242fabfa67a9edaf35f3a 2023-07-17 11:15:22,002 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-17 11:15:22,004 DEBUG [HFileArchiver-2] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/.tmp/data/default/GrouptestMultiTableMoveA/866363b2444242fabfa67a9edaf35f3a/f, FileablePath, hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/.tmp/data/default/GrouptestMultiTableMoveA/866363b2444242fabfa67a9edaf35f3a/recovered.edits] 2023-07-17 11:15:22,013 DEBUG [HFileArchiver-2] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/.tmp/data/default/GrouptestMultiTableMoveA/866363b2444242fabfa67a9edaf35f3a/recovered.edits/7.seqid to hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/archive/data/default/GrouptestMultiTableMoveA/866363b2444242fabfa67a9edaf35f3a/recovered.edits/7.seqid 2023-07-17 11:15:22,014 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(1230): Checking to see if procedure is done pid=103 2023-07-17 11:15:22,014 DEBUG [HFileArchiver-2] backup.HFileArchiver(596): Deleted hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/.tmp/data/default/GrouptestMultiTableMoveA/866363b2444242fabfa67a9edaf35f3a 2023-07-17 11:15:22,015 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(328): Archived GrouptestMultiTableMoveA regions 2023-07-17 11:15:22,018 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=103, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveA 2023-07-17 11:15:22,020 WARN [PEWorker-1] procedure.DeleteTableProcedure(384): Deleting some vestigial 1 rows of GrouptestMultiTableMoveA from hbase:meta 2023-07-17 11:15:22,022 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(421): Removing 'GrouptestMultiTableMoveA' descriptor. 2023-07-17 11:15:22,024 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=103, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveA 2023-07-17 11:15:22,024 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(411): Removing 'GrouptestMultiTableMoveA' from region states. 
2023-07-17 11:15:22,024 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"GrouptestMultiTableMoveA,,1689592519403.866363b2444242fabfa67a9edaf35f3a.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689592522024"}]},"ts":"9223372036854775807"} 2023-07-17 11:15:22,027 INFO [PEWorker-1] hbase.MetaTableAccessor(1788): Deleted 1 regions from META 2023-07-17 11:15:22,027 DEBUG [PEWorker-1] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => 866363b2444242fabfa67a9edaf35f3a, NAME => 'GrouptestMultiTableMoveA,,1689592519403.866363b2444242fabfa67a9edaf35f3a.', STARTKEY => '', ENDKEY => ''}] 2023-07-17 11:15:22,027 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(415): Marking 'GrouptestMultiTableMoveA' as deleted. 2023-07-17 11:15:22,027 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"GrouptestMultiTableMoveA","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689592522027"}]},"ts":"9223372036854775807"} 2023-07-17 11:15:22,030 INFO [PEWorker-1] hbase.MetaTableAccessor(1658): Deleted table GrouptestMultiTableMoveA state from META 2023-07-17 11:15:22,032 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(130): Finished pid=103, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveA 2023-07-17 11:15:22,034 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=103, state=SUCCESS; DeleteTableProcedure table=GrouptestMultiTableMoveA in 42 msec 2023-07-17 11:15:22,115 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(1230): Checking to see if procedure is done pid=103 2023-07-17 11:15:22,116 INFO [Listener at localhost/45539] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: default:GrouptestMultiTableMoveA, procId: 103 completed 2023-07-17 11:15:22,117 INFO [Listener at localhost/45539] client.HBaseAdmin$15(890): Started disable of GrouptestMultiTableMoveB 2023-07-17 11:15:22,118 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable GrouptestMultiTableMoveB 2023-07-17 11:15:22,119 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] procedure2.ProcedureExecutor(1029): Stored pid=104, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=GrouptestMultiTableMoveB 2023-07-17 11:15:22,124 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveB","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689592522124"}]},"ts":"1689592522124"} 2023-07-17 11:15:22,124 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(1230): Checking to see if procedure is done pid=104 2023-07-17 11:15:22,126 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveB, state=DISABLING in hbase:meta 2023-07-17 11:15:22,128 INFO [PEWorker-4] procedure.DisableTableProcedure(293): Set GrouptestMultiTableMoveB to state=DISABLING 2023-07-17 11:15:22,129 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=105, ppid=104, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=48e2a34dceda12fe317b3e0d671e10ac, UNASSIGN}] 2023-07-17 11:15:22,132 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=105, ppid=104, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; 
TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=48e2a34dceda12fe317b3e0d671e10ac, UNASSIGN 2023-07-17 11:15:22,132 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=105 updating hbase:meta row=48e2a34dceda12fe317b3e0d671e10ac, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,35719,1689592509057 2023-07-17 11:15:22,133 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveB,,1689592520022.48e2a34dceda12fe317b3e0d671e10ac.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689592522132"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689592522132"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689592522132"}]},"ts":"1689592522132"} 2023-07-17 11:15:22,137 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=106, ppid=105, state=RUNNABLE; CloseRegionProcedure 48e2a34dceda12fe317b3e0d671e10ac, server=jenkins-hbase4.apache.org,35719,1689592509057}] 2023-07-17 11:15:22,226 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(1230): Checking to see if procedure is done pid=104 2023-07-17 11:15:22,292 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 48e2a34dceda12fe317b3e0d671e10ac 2023-07-17 11:15:22,293 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 48e2a34dceda12fe317b3e0d671e10ac, disabling compactions & flushes 2023-07-17 11:15:22,293 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region GrouptestMultiTableMoveB,,1689592520022.48e2a34dceda12fe317b3e0d671e10ac. 2023-07-17 11:15:22,294 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on GrouptestMultiTableMoveB,,1689592520022.48e2a34dceda12fe317b3e0d671e10ac. 2023-07-17 11:15:22,294 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on GrouptestMultiTableMoveB,,1689592520022.48e2a34dceda12fe317b3e0d671e10ac. after waiting 0 ms 2023-07-17 11:15:22,294 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region GrouptestMultiTableMoveB,,1689592520022.48e2a34dceda12fe317b3e0d671e10ac. 2023-07-17 11:15:22,299 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/data/default/GrouptestMultiTableMoveB/48e2a34dceda12fe317b3e0d671e10ac/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-17 11:15:22,300 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed GrouptestMultiTableMoveB,,1689592520022.48e2a34dceda12fe317b3e0d671e10ac. 
2023-07-17 11:15:22,300 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 48e2a34dceda12fe317b3e0d671e10ac: 2023-07-17 11:15:22,302 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 48e2a34dceda12fe317b3e0d671e10ac 2023-07-17 11:15:22,303 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=105 updating hbase:meta row=48e2a34dceda12fe317b3e0d671e10ac, regionState=CLOSED 2023-07-17 11:15:22,303 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"GrouptestMultiTableMoveB,,1689592520022.48e2a34dceda12fe317b3e0d671e10ac.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689592522303"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689592522303"}]},"ts":"1689592522303"} 2023-07-17 11:15:22,307 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=106, resume processing ppid=105 2023-07-17 11:15:22,307 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=106, ppid=105, state=SUCCESS; CloseRegionProcedure 48e2a34dceda12fe317b3e0d671e10ac, server=jenkins-hbase4.apache.org,35719,1689592509057 in 168 msec 2023-07-17 11:15:22,309 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=105, resume processing ppid=104 2023-07-17 11:15:22,309 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=105, ppid=104, state=SUCCESS; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=48e2a34dceda12fe317b3e0d671e10ac, UNASSIGN in 178 msec 2023-07-17 11:15:22,310 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveB","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689592522310"}]},"ts":"1689592522310"} 2023-07-17 11:15:22,311 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveB, state=DISABLED in hbase:meta 2023-07-17 11:15:22,314 INFO [PEWorker-4] procedure.DisableTableProcedure(305): Set GrouptestMultiTableMoveB to state=DISABLED 2023-07-17 11:15:22,316 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=104, state=SUCCESS; DisableTableProcedure table=GrouptestMultiTableMoveB in 196 msec 2023-07-17 11:15:22,427 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(1230): Checking to see if procedure is done pid=104 2023-07-17 11:15:22,427 INFO [Listener at localhost/45539] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:GrouptestMultiTableMoveB, procId: 104 completed 2023-07-17 11:15:22,428 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.HMaster$5(2228): Client=jenkins//172.31.14.131 delete GrouptestMultiTableMoveB 2023-07-17 11:15:22,429 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] procedure2.ProcedureExecutor(1029): Stored pid=107, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=GrouptestMultiTableMoveB 2023-07-17 11:15:22,431 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=107, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveB 2023-07-17 11:15:22,431 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'GrouptestMultiTableMoveB' from rsgroup 'Group_testMultiTableMove_815681409' 2023-07-17 11:15:22,432 DEBUG [PEWorker-3] 
procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=107, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveB 2023-07-17 11:15:22,433 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-17 11:15:22,434 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_815681409 2023-07-17 11:15:22,434 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-17 11:15:22,434 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-17 11:15:22,435 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/.tmp/data/default/GrouptestMultiTableMoveB/48e2a34dceda12fe317b3e0d671e10ac 2023-07-17 11:15:22,437 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(1230): Checking to see if procedure is done pid=107 2023-07-17 11:15:22,438 DEBUG [HFileArchiver-3] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/.tmp/data/default/GrouptestMultiTableMoveB/48e2a34dceda12fe317b3e0d671e10ac/f, FileablePath, hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/.tmp/data/default/GrouptestMultiTableMoveB/48e2a34dceda12fe317b3e0d671e10ac/recovered.edits] 2023-07-17 11:15:22,444 DEBUG [HFileArchiver-3] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/.tmp/data/default/GrouptestMultiTableMoveB/48e2a34dceda12fe317b3e0d671e10ac/recovered.edits/7.seqid to hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/archive/data/default/GrouptestMultiTableMoveB/48e2a34dceda12fe317b3e0d671e10ac/recovered.edits/7.seqid 2023-07-17 11:15:22,444 DEBUG [HFileArchiver-3] backup.HFileArchiver(596): Deleted hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/.tmp/data/default/GrouptestMultiTableMoveB/48e2a34dceda12fe317b3e0d671e10ac 2023-07-17 11:15:22,444 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived GrouptestMultiTableMoveB regions 2023-07-17 11:15:22,447 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=107, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveB 2023-07-17 11:15:22,449 WARN [PEWorker-3] procedure.DeleteTableProcedure(384): Deleting some vestigial 1 rows of GrouptestMultiTableMoveB from hbase:meta 2023-07-17 11:15:22,450 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(421): Removing 'GrouptestMultiTableMoveB' descriptor. 2023-07-17 11:15:22,451 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=107, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveB 2023-07-17 11:15:22,451 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(411): Removing 'GrouptestMultiTableMoveB' from region states. 
2023-07-17 11:15:22,452 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"GrouptestMultiTableMoveB,,1689592520022.48e2a34dceda12fe317b3e0d671e10ac.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689592522451"}]},"ts":"9223372036854775807"} 2023-07-17 11:15:22,453 INFO [PEWorker-3] hbase.MetaTableAccessor(1788): Deleted 1 regions from META 2023-07-17 11:15:22,453 DEBUG [PEWorker-3] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => 48e2a34dceda12fe317b3e0d671e10ac, NAME => 'GrouptestMultiTableMoveB,,1689592520022.48e2a34dceda12fe317b3e0d671e10ac.', STARTKEY => '', ENDKEY => ''}] 2023-07-17 11:15:22,453 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(415): Marking 'GrouptestMultiTableMoveB' as deleted. 2023-07-17 11:15:22,453 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"GrouptestMultiTableMoveB","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689592522453"}]},"ts":"9223372036854775807"} 2023-07-17 11:15:22,455 INFO [PEWorker-3] hbase.MetaTableAccessor(1658): Deleted table GrouptestMultiTableMoveB state from META 2023-07-17 11:15:22,460 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(130): Finished pid=107, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveB 2023-07-17 11:15:22,462 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=107, state=SUCCESS; DeleteTableProcedure table=GrouptestMultiTableMoveB in 33 msec 2023-07-17 11:15:22,538 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(1230): Checking to see if procedure is done pid=107 2023-07-17 11:15:22,538 INFO [Listener at localhost/45539] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: default:GrouptestMultiTableMoveB, procId: 107 completed 2023-07-17 11:15:22,542 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-17 11:15:22,542 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-17 11:15:22,543 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-17 11:15:22,543 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
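The remaining entries are the TestRSGroupsBase teardown: the group's only server is moved back to the default group, the test group is removed, the helper 'master' group is rebuilt, and a final attempt to move the master's own address into that group fails with a ConstraintException that the test merely logs. A comparable cleanup, under the same RSGroupAdminClient assumption and with the server address and group name taken from the log:

import java.util.Arrays;
import java.util.HashSet;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

public class MultiTableMoveCleanupSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf)) {
      RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
      // Return the region server to the default group; a group can only be removed
      // once it no longer holds any servers or tables.
      rsGroupAdmin.moveServers(
          new HashSet<>(Arrays.asList(Address.fromParts("jenkins-hbase4.apache.org", 35719))),
          "default");
      rsGroupAdmin.removeRSGroup("Group_testMultiTableMove_815681409");
    }
  }
}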
2023-07-17 11:15:22,543 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-17 11:15:22,544 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:35719] to rsgroup default 2023-07-17 11:15:22,547 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-17 11:15:22,548 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_815681409 2023-07-17 11:15:22,548 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-17 11:15:22,549 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-17 11:15:22,550 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group Group_testMultiTableMove_815681409, current retry=0 2023-07-17 11:15:22,550 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,35719,1689592509057] are moved back to Group_testMultiTableMove_815681409 2023-07-17 11:15:22,550 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminServer(438): Move servers done: Group_testMultiTableMove_815681409 => default 2023-07-17 11:15:22,551 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-17 11:15:22,552 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup Group_testMultiTableMove_815681409 2023-07-17 11:15:22,555 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-17 11:15:22,555 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-17 11:15:22,556 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-17 11:15:22,557 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-17 11:15:22,558 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-17 11:15:22,558 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-17 11:15:22,558 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-17 11:15:22,559 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-17 11:15:22,559 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-17 11:15:22,560 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-17 11:15:22,563 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-17 11:15:22,563 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-17 11:15:22,566 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-17 11:15:22,568 INFO [Listener at localhost/45539] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-17 11:15:22,569 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-17 11:15:22,571 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-17 11:15:22,571 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-17 11:15:22,572 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-17 11:15:22,574 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-17 11:15:22,576 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-17 11:15:22,577 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-17 11:15:22,579 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:38451] to rsgroup master 2023-07-17 11:15:22,580 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:38451 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-17 11:15:22,580 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] ipc.CallRunner(144): callId: 507 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:36004 deadline: 1689593722579, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:38451 is either offline or it does not exist. 2023-07-17 11:15:22,580 WARN [Listener at localhost/45539] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:38451 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:38451 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-17 11:15:22,582 INFO [Listener at localhost/45539] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-17 11:15:22,583 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-17 11:15:22,583 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-17 11:15:22,583 INFO [Listener at localhost/45539] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:35719, jenkins-hbase4.apache.org:37409, jenkins-hbase4.apache.org:39617, jenkins-hbase4.apache.org:40489], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-17 11:15:22,584 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-17 11:15:22,584 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-17 11:15:22,604 INFO [Listener at localhost/45539] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testMultiTableMove Thread=499 (was 500), OpenFileDescriptor=749 (was 760), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=509 (was 519), ProcessCount=172 (was 172), AvailableMemoryMB=3183 (was 3414) 2023-07-17 11:15:22,620 INFO [Listener at localhost/45539] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testRenameRSGroupConstraints Thread=499, OpenFileDescriptor=749, MaxFileDescriptor=60000, SystemLoadAverage=509, ProcessCount=172, AvailableMemoryMB=3182 2023-07-17 11:15:22,621 INFO [Listener at localhost/45539] rsgroup.TestRSGroupsBase(132): testRenameRSGroupConstraints 2023-07-17 11:15:22,624 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-17 11:15:22,624 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) 
master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-17 11:15:22,625 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-17 11:15:22,625 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-17 11:15:22,625 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-17 11:15:22,626 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-17 11:15:22,626 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-17 11:15:22,627 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-17 11:15:22,630 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-17 11:15:22,631 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-17 11:15:22,632 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-17 11:15:22,635 INFO [Listener at localhost/45539] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-17 11:15:22,636 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-17 11:15:22,638 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-17 11:15:22,638 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-17 11:15:22,640 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-17 11:15:22,642 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-17 11:15:22,645 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-17 11:15:22,645 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-17 11:15:22,647 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers 
[jenkins-hbase4.apache.org:38451] to rsgroup master 2023-07-17 11:15:22,648 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:38451 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-17 11:15:22,648 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] ipc.CallRunner(144): callId: 535 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:36004 deadline: 1689593722647, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:38451 is either offline or it does not exist. 2023-07-17 11:15:22,648 WARN [Listener at localhost/45539] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:38451 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at 
org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:38451 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-17 11:15:22,650 INFO [Listener at localhost/45539] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-17 11:15:22,651 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-17 11:15:22,651 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-17 11:15:22,651 INFO [Listener at localhost/45539] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:35719, jenkins-hbase4.apache.org:37409, jenkins-hbase4.apache.org:39617, jenkins-hbase4.apache.org:40489], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-17 11:15:22,652 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-17 11:15:22,652 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-17 11:15:22,653 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-17 11:15:22,653 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-17 11:15:22,654 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup oldGroup 2023-07-17 11:15:22,656 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-17 11:15:22,656 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-17 11:15:22,658 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-17 11:15:22,658 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-17 11:15:22,663 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-17 11:15:22,667 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-17 11:15:22,668 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-17 11:15:22,670 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] 
rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:37409, jenkins-hbase4.apache.org:35719] to rsgroup oldGroup 2023-07-17 11:15:22,673 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-17 11:15:22,674 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-17 11:15:22,674 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-17 11:15:22,675 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-17 11:15:22,676 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group default, current retry=0 2023-07-17 11:15:22,677 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,35719,1689592509057, jenkins-hbase4.apache.org,37409,1689592505527] are moved back to default 2023-07-17 11:15:22,677 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminServer(438): Move servers done: default => oldGroup 2023-07-17 11:15:22,677 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-17 11:15:22,681 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-17 11:15:22,681 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-17 11:15:22,684 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=oldGroup 2023-07-17 11:15:22,684 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-17 11:15:22,685 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=oldGroup 2023-07-17 11:15:22,686 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-17 11:15:22,687 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-17 11:15:22,687 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-17 11:15:22,689 INFO 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup anotherRSGroup 2023-07-17 11:15:22,692 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-17 11:15:22,692 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/anotherRSGroup 2023-07-17 11:15:22,694 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-17 11:15:22,694 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-17 11:15:22,695 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-17 11:15:22,698 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-17 11:15:22,703 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-17 11:15:22,703 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-17 11:15:22,707 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:39617] to rsgroup anotherRSGroup 2023-07-17 11:15:22,710 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-17 11:15:22,710 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/anotherRSGroup 2023-07-17 11:15:22,711 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-17 11:15:22,711 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-17 11:15:22,712 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-17 11:15:22,714 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group default, current retry=0 2023-07-17 11:15:22,714 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,39617,1689592505673] are moved back to default 2023-07-17 11:15:22,714 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminServer(438): Move servers done: default => anotherRSGroup 2023-07-17 11:15:22,714 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-17 11:15:22,718 
INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-17 11:15:22,718 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-17 11:15:22,722 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=anotherRSGroup 2023-07-17 11:15:22,722 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-17 11:15:22,723 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=anotherRSGroup 2023-07-17 11:15:22,723 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-17 11:15:22,906 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(408): Client=jenkins//172.31.14.131 rename rsgroup from nonExistingRSGroup to newRSGroup1 2023-07-17 11:15:22,907 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup nonExistingRSGroup does not exist at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.renameRSGroup(RSGroupInfoManagerImpl.java:407) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.renameRSGroup(RSGroupAdminServer.java:617) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.renameRSGroup(RSGroupAdminEndpoint.java:417) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16233) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-17 11:15:22,907 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] ipc.CallRunner(144): callId: 569 service: MasterService methodName: ExecMasterService size: 113 connection: 172.31.14.131:36004 deadline: 1689593722905, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup nonExistingRSGroup does not exist 2023-07-17 11:15:22,912 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(408): Client=jenkins//172.31.14.131 rename rsgroup from oldGroup to anotherRSGroup 2023-07-17 11:15:22,912 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: 
Group already exists: anotherRSGroup at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.renameRSGroup(RSGroupInfoManagerImpl.java:410) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.renameRSGroup(RSGroupAdminServer.java:617) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.renameRSGroup(RSGroupAdminEndpoint.java:417) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16233) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-17 11:15:22,912 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] ipc.CallRunner(144): callId: 571 service: MasterService methodName: ExecMasterService size: 106 connection: 172.31.14.131:36004 deadline: 1689593722912, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Group already exists: anotherRSGroup 2023-07-17 11:15:22,914 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(408): Client=jenkins//172.31.14.131 rename rsgroup from default to newRSGroup2 2023-07-17 11:15:22,914 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Can't rename default rsgroup at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.renameRSGroup(RSGroupInfoManagerImpl.java:403) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.renameRSGroup(RSGroupAdminServer.java:617) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.renameRSGroup(RSGroupAdminEndpoint.java:417) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16233) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-17 11:15:22,914 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] ipc.CallRunner(144): callId: 573 service: MasterService methodName: ExecMasterService size: 102 connection: 172.31.14.131:36004 deadline: 1689593722913, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Can't rename default rsgroup 2023-07-17 11:15:22,915 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(408): Client=jenkins//172.31.14.131 rename rsgroup from oldGroup to default 2023-07-17 11:15:22,915 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Group already exists: default at 
org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.renameRSGroup(RSGroupInfoManagerImpl.java:410) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.renameRSGroup(RSGroupAdminServer.java:617) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.renameRSGroup(RSGroupAdminEndpoint.java:417) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16233) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-17 11:15:22,915 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] ipc.CallRunner(144): callId: 575 service: MasterService methodName: ExecMasterService size: 99 connection: 172.31.14.131:36004 deadline: 1689593722915, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Group already exists: default 2023-07-17 11:15:22,921 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-17 11:15:22,921 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-17 11:15:22,922 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-17 11:15:22,923 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
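[Editor's note] The four rejected renames above exercise the constraints enforced by RSGroupInfoManagerImpl.renameRSGroup: the source group must exist, the target name must not already be taken, and the default group cannot be renamed; each violation reaches the client as a ConstraintException. A sketch of how a caller would hit these constraints, assuming (not shown in this log's client-side traces) that the branch-2.4 RSGroupAdminClient exposes renameRSGroup(oldName, newName) for the RenameRSGroup RPC seen above:

import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.constraint.ConstraintException;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

public class RenameConstraintsSketch {
  // Each call corresponds to one rejected rename in the log entries above.
  static void demo(Connection conn) throws Exception {
    RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
    try {
      rsGroupAdmin.renameRSGroup("nonExistingRSGroup", "newRSGroup1"); // source group missing
    } catch (ConstraintException e) {
      // server reports: "RSGroup nonExistingRSGroup does not exist"
    }
    try {
      rsGroupAdmin.renameRSGroup("oldGroup", "anotherRSGroup"); // target name already in use
    } catch (ConstraintException e) {
      // server reports: "Group already exists: anotherRSGroup"
    }
    try {
      rsGroupAdmin.renameRSGroup("default", "newRSGroup2"); // default group is not renamable
    } catch (ConstraintException e) {
      // server reports: "Can't rename default rsgroup"
    }
  }
}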
2023-07-17 11:15:22,923 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-17 11:15:22,924 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:39617] to rsgroup default 2023-07-17 11:15:22,927 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-17 11:15:22,928 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/anotherRSGroup 2023-07-17 11:15:22,928 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-17 11:15:22,928 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-17 11:15:22,929 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-17 11:15:22,931 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group anotherRSGroup, current retry=0 2023-07-17 11:15:22,931 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,39617,1689592505673] are moved back to anotherRSGroup 2023-07-17 11:15:22,931 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminServer(438): Move servers done: anotherRSGroup => default 2023-07-17 11:15:22,931 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-17 11:15:22,937 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup anotherRSGroup 2023-07-17 11:15:22,941 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-17 11:15:22,943 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-17 11:15:22,947 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-17 11:15:22,947 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 7 2023-07-17 11:15:22,949 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-17 11:15:22,950 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-17 11:15:22,950 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminServer(448): moveTables() 
passed an empty set. Ignoring. 2023-07-17 11:15:22,951 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-17 11:15:22,952 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:37409, jenkins-hbase4.apache.org:35719] to rsgroup default 2023-07-17 11:15:22,954 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-17 11:15:22,955 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-17 11:15:22,955 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-17 11:15:22,956 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-17 11:15:22,960 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group oldGroup, current retry=0 2023-07-17 11:15:22,961 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,35719,1689592509057, jenkins-hbase4.apache.org,37409,1689592505527] are moved back to oldGroup 2023-07-17 11:15:22,961 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminServer(438): Move servers done: oldGroup => default 2023-07-17 11:15:22,961 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-17 11:15:22,962 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup oldGroup 2023-07-17 11:15:22,965 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-17 11:15:22,965 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-17 11:15:22,966 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-17 11:15:22,969 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-17 11:15:22,970 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-17 11:15:22,970 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-17 11:15:22,970 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-17 11:15:22,987 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-17 11:15:22,987 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-17 11:15:22,988 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-17 11:15:22,993 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-17 11:15:22,994 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-17 11:15:22,995 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-17 11:15:22,999 INFO [Listener at localhost/45539] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-17 11:15:22,999 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-17 11:15:23,003 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-17 11:15:23,003 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-17 11:15:23,005 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-17 11:15:23,007 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-17 11:15:23,011 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-17 11:15:23,011 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-17 11:15:23,014 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:38451] to rsgroup master 2023-07-17 11:15:23,014 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:38451 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-17 11:15:23,014 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] ipc.CallRunner(144): callId: 611 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:36004 deadline: 1689593723013, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:38451 is either offline or it does not exist. 2023-07-17 11:15:23,014 WARN [Listener at localhost/45539] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:38451 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:38451 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-17 11:15:23,016 INFO [Listener at localhost/45539] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-17 11:15:23,017 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-17 11:15:23,017 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-17 11:15:23,017 INFO [Listener at localhost/45539] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:35719, jenkins-hbase4.apache.org:37409, jenkins-hbase4.apache.org:39617, jenkins-hbase4.apache.org:40489], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-17 11:15:23,018 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-17 11:15:23,018 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-17 11:15:23,047 INFO [Listener at localhost/45539] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testRenameRSGroupConstraints Thread=503 (was 499) Potentially hanging thread: hconnection-0x62be270e-shared-pool-19 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x62be270e-shared-pool-18 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x62be270e-shared-pool-17 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x62be270e-shared-pool-16 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=749 (was 749), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=509 (was 509), ProcessCount=172 (was 172), AvailableMemoryMB=3097 (was 3182) 2023-07-17 11:15:23,047 WARN [Listener at localhost/45539] hbase.ResourceChecker(130): Thread=503 is superior to 500 2023-07-17 11:15:23,069 INFO [Listener at localhost/45539] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testRenameRSGroup Thread=502, OpenFileDescriptor=749, MaxFileDescriptor=60000, SystemLoadAverage=509, ProcessCount=172, AvailableMemoryMB=3096 2023-07-17 11:15:23,069 WARN [Listener at localhost/45539] hbase.ResourceChecker(130): Thread=502 is superior to 500 2023-07-17 11:15:23,069 INFO [Listener at localhost/45539] rsgroup.TestRSGroupsBase(132): testRenameRSGroup 2023-07-17 11:15:23,074 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-17 11:15:23,074 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-17 11:15:23,075 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-17 11:15:23,076 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
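
[Editor's note] The ConstraintException traces above are produced when the test tries to move jenkins-hbase4.apache.org:38451 into the "master" group; that address is the master's RPC port (the same port the handler threads are bound to), not a registered region server, so RSGroupAdminServer.moveServers rejects it and the test merely logs "Got this on setup, FYI". A hedged sketch of guarding such a call by first checking the live region servers via the standard ClusterMetrics API; the helper name and the early return are illustrative, not part of the test.

import java.util.Collections;
import java.util.EnumSet;
import java.util.Set;
import java.util.stream.Collectors;

import org.apache.hadoop.hbase.ClusterMetrics;
import org.apache.hadoop.hbase.ServerName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

public class SafeMoveServersSketch {
  /** Moves the candidate address into targetGroup only if it is a live region server. */
  static void moveIfOnline(Connection conn, Address candidate, String targetGroup)
      throws Exception {
    try (Admin admin = conn.getAdmin()) {
      Set<Address> live = admin
          .getClusterMetrics(EnumSet.of(ClusterMetrics.Option.LIVE_SERVERS))
          .getLiveServerMetrics().keySet().stream()
          .map(ServerName::getAddress)
          .collect(Collectors.toSet());
      if (!live.contains(candidate)) {
        // Exactly the case the log reports: from the rsgroup manager's view the
        // master's RPC address is "either offline or it does not exist".
        return;
      }
      new RSGroupAdminClient(conn).moveServers(Collections.singleton(candidate), targetGroup);
    }
  }
}
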
2023-07-17 11:15:23,076 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-17 11:15:23,077 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-17 11:15:23,077 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-17 11:15:23,077 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-17 11:15:23,081 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-17 11:15:23,083 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-17 11:15:23,089 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-17 11:15:23,092 INFO [Listener at localhost/45539] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-17 11:15:23,093 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-17 11:15:23,095 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-17 11:15:23,095 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-17 11:15:23,097 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-17 11:15:23,099 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-17 11:15:23,102 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-17 11:15:23,102 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-17 11:15:23,104 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:38451] to rsgroup master 2023-07-17 11:15:23,104 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:38451 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-17 11:15:23,104 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] ipc.CallRunner(144): callId: 639 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:36004 deadline: 1689593723104, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:38451 is either offline or it does not exist. 2023-07-17 11:15:23,104 WARN [Listener at localhost/45539] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:38451 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:38451 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-17 11:15:23,106 INFO [Listener at localhost/45539] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-17 11:15:23,107 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-17 11:15:23,107 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-17 11:15:23,107 INFO [Listener at localhost/45539] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:35719, jenkins-hbase4.apache.org:37409, jenkins-hbase4.apache.org:39617, jenkins-hbase4.apache.org:40489], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-17 11:15:23,108 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-17 11:15:23,108 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-17 11:15:23,109 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-17 11:15:23,109 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-17 11:15:23,110 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup oldgroup 2023-07-17 11:15:23,111 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-17 11:15:23,115 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-17 11:15:23,118 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] 
rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-17 11:15:23,118 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-17 11:15:23,121 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-17 11:15:23,123 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-17 11:15:23,124 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-17 11:15:23,126 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:37409, jenkins-hbase4.apache.org:35719] to rsgroup oldgroup 2023-07-17 11:15:23,128 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-17 11:15:23,128 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-17 11:15:23,129 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-17 11:15:23,129 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-17 11:15:23,134 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group default, current retry=0 2023-07-17 11:15:23,134 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,35719,1689592509057, jenkins-hbase4.apache.org,37409,1689592505527] are moved back to default 2023-07-17 11:15:23,134 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminServer(438): Move servers done: default => oldgroup 2023-07-17 11:15:23,134 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-17 11:15:23,137 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-17 11:15:23,137 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-17 11:15:23,140 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=oldgroup 2023-07-17 11:15:23,140 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for 
RSGroupAdminService.GetRSGroupInfo 2023-07-17 11:15:23,142 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'testRename', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'tr', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-17 11:15:23,144 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] procedure2.ProcedureExecutor(1029): Stored pid=108, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=testRename 2023-07-17 11:15:23,149 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=108, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=testRename execute state=CREATE_TABLE_PRE_OPERATION 2023-07-17 11:15:23,149 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "testRename" procId is: 108 2023-07-17 11:15:23,151 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(1230): Checking to see if procedure is done pid=108 2023-07-17 11:15:23,152 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-17 11:15:23,153 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-17 11:15:23,154 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-17 11:15:23,154 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-17 11:15:23,157 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=108, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=testRename execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-17 11:15:23,160 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/.tmp/data/default/testRename/2b74521ed4637f75fb35cc5495c946be 2023-07-17 11:15:23,161 DEBUG [HFileArchiver-4] backup.HFileArchiver(153): Directory hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/.tmp/data/default/testRename/2b74521ed4637f75fb35cc5495c946be empty. 
2023-07-17 11:15:23,161 DEBUG [HFileArchiver-4] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/.tmp/data/default/testRename/2b74521ed4637f75fb35cc5495c946be 2023-07-17 11:15:23,161 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived testRename regions 2023-07-17 11:15:23,186 DEBUG [PEWorker-2] util.FSTableDescriptors(570): Wrote into hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/.tmp/data/default/testRename/.tabledesc/.tableinfo.0000000001 2023-07-17 11:15:23,188 INFO [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(7675): creating {ENCODED => 2b74521ed4637f75fb35cc5495c946be, NAME => 'testRename,,1689592523142.2b74521ed4637f75fb35cc5495c946be.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='testRename', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'tr', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/.tmp 2023-07-17 11:15:23,206 DEBUG [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(866): Instantiated testRename,,1689592523142.2b74521ed4637f75fb35cc5495c946be.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-17 11:15:23,206 DEBUG [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1604): Closing 2b74521ed4637f75fb35cc5495c946be, disabling compactions & flushes 2023-07-17 11:15:23,206 INFO [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1626): Closing region testRename,,1689592523142.2b74521ed4637f75fb35cc5495c946be. 2023-07-17 11:15:23,206 DEBUG [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on testRename,,1689592523142.2b74521ed4637f75fb35cc5495c946be. 2023-07-17 11:15:23,206 DEBUG [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1714): Acquired close lock on testRename,,1689592523142.2b74521ed4637f75fb35cc5495c946be. after waiting 0 ms 2023-07-17 11:15:23,206 DEBUG [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1724): Updates disabled for region testRename,,1689592523142.2b74521ed4637f75fb35cc5495c946be. 2023-07-17 11:15:23,206 INFO [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1838): Closed testRename,,1689592523142.2b74521ed4637f75fb35cc5495c946be. 2023-07-17 11:15:23,206 DEBUG [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1558): Region close journal for 2b74521ed4637f75fb35cc5495c946be: 2023-07-17 11:15:23,209 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=108, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=testRename execute state=CREATE_TABLE_ADD_TO_META 2023-07-17 11:15:23,210 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"testRename,,1689592523142.2b74521ed4637f75fb35cc5495c946be.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689592523209"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689592523209"}]},"ts":"1689592523209"} 2023-07-17 11:15:23,211 INFO [PEWorker-2] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 
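
[Editor's note] The CreateTableProcedure steps above correspond to the shell-style descriptor printed in the log: table "testRename", REGION_REPLICATION => '1', and a single column family "tr" with otherwise default attributes. A minimal client-side equivalent using the standard 2.x descriptor builders; attributes not explicitly shown as non-default in the log are left at their defaults.

import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.TableDescriptor;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
import org.apache.hadoop.hbase.util.Bytes;

public class CreateTestRenameTableSketch {
  static void createTable(Connection conn) throws Exception {
    TableDescriptor desc = TableDescriptorBuilder
        .newBuilder(TableName.valueOf("testRename"))
        .setRegionReplication(1)                               // REGION_REPLICATION => '1'
        .setColumnFamily(ColumnFamilyDescriptorBuilder
            .newBuilder(Bytes.toBytes("tr"))                   // NAME => 'tr'
            .setMaxVersions(1)                                 // VERSIONS => '1'
            .setBlocksize(65536)                               // BLOCKSIZE => '65536'
            .build())
        .build();
    try (Admin admin = conn.getAdmin()) {
      // Submits the same CreateTableProcedure (pid=108 in this run) to the master.
      admin.createTable(desc);
    }
  }
}
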
2023-07-17 11:15:23,212 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=108, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=testRename execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-17 11:15:23,212 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"testRename","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689592523212"}]},"ts":"1689592523212"} 2023-07-17 11:15:23,213 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=testRename, state=ENABLING in hbase:meta 2023-07-17 11:15:23,216 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-17 11:15:23,216 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-17 11:15:23,217 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-17 11:15:23,217 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-17 11:15:23,220 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=109, ppid=108, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=testRename, region=2b74521ed4637f75fb35cc5495c946be, ASSIGN}] 2023-07-17 11:15:23,221 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=109, ppid=108, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=testRename, region=2b74521ed4637f75fb35cc5495c946be, ASSIGN 2023-07-17 11:15:23,222 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=109, ppid=108, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=testRename, region=2b74521ed4637f75fb35cc5495c946be, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,39617,1689592505673; forceNewPlan=false, retain=false 2023-07-17 11:15:23,253 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(1230): Checking to see if procedure is done pid=108 2023-07-17 11:15:23,372 INFO [jenkins-hbase4:38451] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
2023-07-17 11:15:23,374 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=109 updating hbase:meta row=2b74521ed4637f75fb35cc5495c946be, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,39617,1689592505673 2023-07-17 11:15:23,374 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"testRename,,1689592523142.2b74521ed4637f75fb35cc5495c946be.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689592523374"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689592523374"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689592523374"}]},"ts":"1689592523374"} 2023-07-17 11:15:23,376 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=110, ppid=109, state=RUNNABLE; OpenRegionProcedure 2b74521ed4637f75fb35cc5495c946be, server=jenkins-hbase4.apache.org,39617,1689592505673}] 2023-07-17 11:15:23,454 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(1230): Checking to see if procedure is done pid=108 2023-07-17 11:15:23,532 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open testRename,,1689592523142.2b74521ed4637f75fb35cc5495c946be. 2023-07-17 11:15:23,532 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 2b74521ed4637f75fb35cc5495c946be, NAME => 'testRename,,1689592523142.2b74521ed4637f75fb35cc5495c946be.', STARTKEY => '', ENDKEY => ''} 2023-07-17 11:15:23,532 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table testRename 2b74521ed4637f75fb35cc5495c946be 2023-07-17 11:15:23,532 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated testRename,,1689592523142.2b74521ed4637f75fb35cc5495c946be.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-17 11:15:23,532 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 2b74521ed4637f75fb35cc5495c946be 2023-07-17 11:15:23,532 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 2b74521ed4637f75fb35cc5495c946be 2023-07-17 11:15:23,533 INFO [StoreOpener-2b74521ed4637f75fb35cc5495c946be-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family tr of region 2b74521ed4637f75fb35cc5495c946be 2023-07-17 11:15:23,535 DEBUG [StoreOpener-2b74521ed4637f75fb35cc5495c946be-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/data/default/testRename/2b74521ed4637f75fb35cc5495c946be/tr 2023-07-17 11:15:23,535 DEBUG [StoreOpener-2b74521ed4637f75fb35cc5495c946be-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/data/default/testRename/2b74521ed4637f75fb35cc5495c946be/tr 2023-07-17 11:15:23,535 INFO [StoreOpener-2b74521ed4637f75fb35cc5495c946be-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak 
ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 2b74521ed4637f75fb35cc5495c946be columnFamilyName tr 2023-07-17 11:15:23,536 INFO [StoreOpener-2b74521ed4637f75fb35cc5495c946be-1] regionserver.HStore(310): Store=2b74521ed4637f75fb35cc5495c946be/tr, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-17 11:15:23,537 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/data/default/testRename/2b74521ed4637f75fb35cc5495c946be 2023-07-17 11:15:23,537 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/data/default/testRename/2b74521ed4637f75fb35cc5495c946be 2023-07-17 11:15:23,539 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 2b74521ed4637f75fb35cc5495c946be 2023-07-17 11:15:23,541 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/data/default/testRename/2b74521ed4637f75fb35cc5495c946be/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-17 11:15:23,542 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 2b74521ed4637f75fb35cc5495c946be; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11596338240, jitterRate=0.07999315857887268}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-17 11:15:23,542 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 2b74521ed4637f75fb35cc5495c946be: 2023-07-17 11:15:23,542 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for testRename,,1689592523142.2b74521ed4637f75fb35cc5495c946be., pid=110, masterSystemTime=1689592523527 2023-07-17 11:15:23,544 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for testRename,,1689592523142.2b74521ed4637f75fb35cc5495c946be. 2023-07-17 11:15:23,544 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened testRename,,1689592523142.2b74521ed4637f75fb35cc5495c946be. 
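
[Editor's note] Once the region is opened, the post-open deploy task above reports its location back to the master. From the client side, the usual way to see where a region of testRename landed is a RegionLocator lookup; a short sketch assuming an already-open Connection.

import org.apache.hadoop.hbase.HRegionLocation;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.RegionLocator;

public class LocateTestRenameRegionsSketch {
  static void printLocations(Connection conn) throws Exception {
    try (RegionLocator locator = conn.getRegionLocator(TableName.valueOf("testRename"))) {
      for (HRegionLocation loc : locator.getAllRegionLocations()) {
        // Prints the encoded region name and its hosting region server, e.g.
        // 2b74521ed4637f75fb35cc5495c946be -> jenkins-hbase4.apache.org,39617,1689592505673
        System.out.println(loc.getRegion().getEncodedName() + " -> " + loc.getServerName());
      }
    }
  }
}
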
2023-07-17 11:15:23,544 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=109 updating hbase:meta row=2b74521ed4637f75fb35cc5495c946be, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,39617,1689592505673 2023-07-17 11:15:23,545 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"testRename,,1689592523142.2b74521ed4637f75fb35cc5495c946be.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689592523544"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689592523544"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689592523544"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689592523544"}]},"ts":"1689592523544"} 2023-07-17 11:15:23,548 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=110, resume processing ppid=109 2023-07-17 11:15:23,548 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=110, ppid=109, state=SUCCESS; OpenRegionProcedure 2b74521ed4637f75fb35cc5495c946be, server=jenkins-hbase4.apache.org,39617,1689592505673 in 170 msec 2023-07-17 11:15:23,550 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=109, resume processing ppid=108 2023-07-17 11:15:23,550 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=109, ppid=108, state=SUCCESS; TransitRegionStateProcedure table=testRename, region=2b74521ed4637f75fb35cc5495c946be, ASSIGN in 331 msec 2023-07-17 11:15:23,551 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=108, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=testRename execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-17 11:15:23,551 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"testRename","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689592523551"}]},"ts":"1689592523551"} 2023-07-17 11:15:23,552 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=testRename, state=ENABLED in hbase:meta 2023-07-17 11:15:23,554 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=108, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=testRename execute state=CREATE_TABLE_POST_OPERATION 2023-07-17 11:15:23,556 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=108, state=SUCCESS; CreateTableProcedure table=testRename in 411 msec 2023-07-17 11:15:23,727 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-hbase.properties,hadoop-metrics2.properties 2023-07-17 11:15:23,755 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(1230): Checking to see if procedure is done pid=108 2023-07-17 11:15:23,755 INFO [Listener at localhost/45539] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:testRename, procId: 108 completed 2023-07-17 11:15:23,755 DEBUG [Listener at localhost/45539] hbase.HBaseTestingUtility(3430): Waiting until all regions of table testRename get assigned. Timeout = 60000ms 2023-07-17 11:15:23,755 INFO [Listener at localhost/45539] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-17 11:15:23,759 INFO [Listener at localhost/45539] hbase.HBaseTestingUtility(3484): All regions for table testRename assigned to meta. Checking AM states. 
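
[Editor's note] The "Waiting until all regions of table testRename get assigned. Timeout = 60000ms" entries come from HBaseTestingUtility; in test code this whole wait is a single call, sketched below under the assumption of a running mini cluster (the 60 s timeout matches the log).

import org.apache.hadoop.hbase.HBaseTestingUtility;
import org.apache.hadoop.hbase.TableName;

public class WaitForAssignmentSketch {
  static void waitForTestRename(HBaseTestingUtility util) throws Exception {
    // Blocks until hbase:meta shows every region of the table assigned,
    // which is what produces the "Waiting until all regions ..." lines above.
    util.waitUntilAllRegionsAssigned(TableName.valueOf("testRename"), 60000);
  }
}
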
2023-07-17 11:15:23,759 INFO [Listener at localhost/45539] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-17 11:15:23,759 INFO [Listener at localhost/45539] hbase.HBaseTestingUtility(3504): All regions for table testRename assigned. 2023-07-17 11:15:23,762 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [testRename] to rsgroup oldgroup 2023-07-17 11:15:23,764 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-17 11:15:23,765 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-17 11:15:23,765 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-17 11:15:23,766 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-17 11:15:23,769 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminServer(339): Moving region(s) for table testRename to RSGroup oldgroup 2023-07-17 11:15:23,769 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminServer(345): Moving region 2b74521ed4637f75fb35cc5495c946be to RSGroup oldgroup 2023-07-17 11:15:23,769 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-17 11:15:23,769 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-17 11:15:23,769 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-17 11:15:23,769 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-17 11:15:23,769 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-17 11:15:23,771 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] procedure2.ProcedureExecutor(1029): Stored pid=111, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=testRename, region=2b74521ed4637f75fb35cc5495c946be, REOPEN/MOVE 2023-07-17 11:15:23,771 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group oldgroup, current retry=0 2023-07-17 11:15:23,771 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=111, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=testRename, region=2b74521ed4637f75fb35cc5495c946be, REOPEN/MOVE 2023-07-17 11:15:23,772 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=111 updating hbase:meta row=2b74521ed4637f75fb35cc5495c946be, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,39617,1689592505673 2023-07-17 11:15:23,773 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"testRename,,1689592523142.2b74521ed4637f75fb35cc5495c946be.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689592523772"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689592523772"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689592523772"}]},"ts":"1689592523772"} 2023-07-17 11:15:23,774 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=112, ppid=111, state=RUNNABLE; CloseRegionProcedure 2b74521ed4637f75fb35cc5495c946be, server=jenkins-hbase4.apache.org,39617,1689592505673}] 2023-07-17 11:15:23,929 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 2b74521ed4637f75fb35cc5495c946be 2023-07-17 11:15:23,930 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 2b74521ed4637f75fb35cc5495c946be, disabling compactions & flushes 2023-07-17 11:15:23,930 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region testRename,,1689592523142.2b74521ed4637f75fb35cc5495c946be. 2023-07-17 11:15:23,930 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on testRename,,1689592523142.2b74521ed4637f75fb35cc5495c946be. 2023-07-17 11:15:23,930 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on testRename,,1689592523142.2b74521ed4637f75fb35cc5495c946be. after waiting 0 ms 2023-07-17 11:15:23,930 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region testRename,,1689592523142.2b74521ed4637f75fb35cc5495c946be. 2023-07-17 11:15:23,938 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/data/default/testRename/2b74521ed4637f75fb35cc5495c946be/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-17 11:15:23,940 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed testRename,,1689592523142.2b74521ed4637f75fb35cc5495c946be. 
2023-07-17 11:15:23,940 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 2b74521ed4637f75fb35cc5495c946be: 2023-07-17 11:15:23,940 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 2b74521ed4637f75fb35cc5495c946be move to jenkins-hbase4.apache.org,35719,1689592509057 record at close sequenceid=2 2023-07-17 11:15:23,942 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 2b74521ed4637f75fb35cc5495c946be 2023-07-17 11:15:23,943 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=111 updating hbase:meta row=2b74521ed4637f75fb35cc5495c946be, regionState=CLOSED 2023-07-17 11:15:23,943 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"testRename,,1689592523142.2b74521ed4637f75fb35cc5495c946be.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689592523943"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689592523943"}]},"ts":"1689592523943"} 2023-07-17 11:15:23,948 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=112, resume processing ppid=111 2023-07-17 11:15:23,949 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=112, ppid=111, state=SUCCESS; CloseRegionProcedure 2b74521ed4637f75fb35cc5495c946be, server=jenkins-hbase4.apache.org,39617,1689592505673 in 173 msec 2023-07-17 11:15:23,949 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=111, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=testRename, region=2b74521ed4637f75fb35cc5495c946be, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,35719,1689592509057; forceNewPlan=false, retain=false 2023-07-17 11:15:24,100 INFO [jenkins-hbase4:38451] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 2023-07-17 11:15:24,100 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=111 updating hbase:meta row=2b74521ed4637f75fb35cc5495c946be, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,35719,1689592509057 2023-07-17 11:15:24,100 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"testRename,,1689592523142.2b74521ed4637f75fb35cc5495c946be.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689592524100"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689592524100"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689592524100"}]},"ts":"1689592524100"} 2023-07-17 11:15:24,107 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=113, ppid=111, state=RUNNABLE; OpenRegionProcedure 2b74521ed4637f75fb35cc5495c946be, server=jenkins-hbase4.apache.org,35719,1689592509057}] 2023-07-17 11:15:24,264 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open testRename,,1689592523142.2b74521ed4637f75fb35cc5495c946be. 
2023-07-17 11:15:24,265 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 2b74521ed4637f75fb35cc5495c946be, NAME => 'testRename,,1689592523142.2b74521ed4637f75fb35cc5495c946be.', STARTKEY => '', ENDKEY => ''} 2023-07-17 11:15:24,265 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table testRename 2b74521ed4637f75fb35cc5495c946be 2023-07-17 11:15:24,265 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated testRename,,1689592523142.2b74521ed4637f75fb35cc5495c946be.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-17 11:15:24,265 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 2b74521ed4637f75fb35cc5495c946be 2023-07-17 11:15:24,265 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 2b74521ed4637f75fb35cc5495c946be 2023-07-17 11:15:24,268 INFO [StoreOpener-2b74521ed4637f75fb35cc5495c946be-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family tr of region 2b74521ed4637f75fb35cc5495c946be 2023-07-17 11:15:24,270 DEBUG [StoreOpener-2b74521ed4637f75fb35cc5495c946be-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/data/default/testRename/2b74521ed4637f75fb35cc5495c946be/tr 2023-07-17 11:15:24,270 DEBUG [StoreOpener-2b74521ed4637f75fb35cc5495c946be-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/data/default/testRename/2b74521ed4637f75fb35cc5495c946be/tr 2023-07-17 11:15:24,271 INFO [StoreOpener-2b74521ed4637f75fb35cc5495c946be-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 2b74521ed4637f75fb35cc5495c946be columnFamilyName tr 2023-07-17 11:15:24,272 INFO [StoreOpener-2b74521ed4637f75fb35cc5495c946be-1] regionserver.HStore(310): Store=2b74521ed4637f75fb35cc5495c946be/tr, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-17 11:15:24,273 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/data/default/testRename/2b74521ed4637f75fb35cc5495c946be 2023-07-17 11:15:24,275 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under 
hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/data/default/testRename/2b74521ed4637f75fb35cc5495c946be 2023-07-17 11:15:24,279 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 2b74521ed4637f75fb35cc5495c946be 2023-07-17 11:15:24,280 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 2b74521ed4637f75fb35cc5495c946be; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10794633440, jitterRate=0.005328580737113953}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-17 11:15:24,281 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 2b74521ed4637f75fb35cc5495c946be: 2023-07-17 11:15:24,281 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for testRename,,1689592523142.2b74521ed4637f75fb35cc5495c946be., pid=113, masterSystemTime=1689592524259 2023-07-17 11:15:24,283 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for testRename,,1689592523142.2b74521ed4637f75fb35cc5495c946be. 2023-07-17 11:15:24,283 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened testRename,,1689592523142.2b74521ed4637f75fb35cc5495c946be. 2023-07-17 11:15:24,284 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=111 updating hbase:meta row=2b74521ed4637f75fb35cc5495c946be, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,35719,1689592509057 2023-07-17 11:15:24,284 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"testRename,,1689592523142.2b74521ed4637f75fb35cc5495c946be.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689592524284"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689592524284"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689592524284"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689592524284"}]},"ts":"1689592524284"} 2023-07-17 11:15:24,289 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=113, resume processing ppid=111 2023-07-17 11:15:24,289 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=113, ppid=111, state=SUCCESS; OpenRegionProcedure 2b74521ed4637f75fb35cc5495c946be, server=jenkins-hbase4.apache.org,35719,1689592509057 in 179 msec 2023-07-17 11:15:24,291 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=111, state=SUCCESS; TransitRegionStateProcedure table=testRename, region=2b74521ed4637f75fb35cc5495c946be, REOPEN/MOVE in 519 msec 2023-07-17 11:15:24,771 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] procedure.ProcedureSyncWait(216): waitFor pid=111 2023-07-17 11:15:24,771 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminServer(369): All regions from table(s) [testRename] moved to target group oldgroup. 
2023-07-17 11:15:24,771 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-17 11:15:24,775 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-17 11:15:24,775 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-17 11:15:24,777 INFO [Listener at localhost/45539] hbase.Waiter(180): Waiting up to [1,000] milli-secs(wait.for.ratio=[1]) 2023-07-17 11:15:24,778 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=testRename 2023-07-17 11:15:24,778 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-17 11:15:24,779 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=oldgroup 2023-07-17 11:15:24,779 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-17 11:15:24,780 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=testRename 2023-07-17 11:15:24,780 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-17 11:15:24,781 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-17 11:15:24,781 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-17 11:15:24,782 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup normal 2023-07-17 11:15:24,784 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-17 11:15:24,785 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-17 11:15:24,786 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-17 11:15:24,787 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-17 
11:15:24,787 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-17 11:15:24,788 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-17 11:15:24,791 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-17 11:15:24,791 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-17 11:15:24,794 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:39617] to rsgroup normal 2023-07-17 11:15:24,796 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-17 11:15:24,796 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-17 11:15:24,797 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-17 11:15:24,797 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-17 11:15:24,797 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-17 11:15:24,799 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group default, current retry=0 2023-07-17 11:15:24,799 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,39617,1689592505673] are moved back to default 2023-07-17 11:15:24,799 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminServer(438): Move servers done: default => normal 2023-07-17 11:15:24,799 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-17 11:15:24,802 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-17 11:15:24,802 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-17 11:15:24,804 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=normal 2023-07-17 11:15:24,804 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 
2023-07-17 11:15:24,806 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'unmovedTable', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'ut', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-17 11:15:24,806 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] procedure2.ProcedureExecutor(1029): Stored pid=114, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=unmovedTable 2023-07-17 11:15:24,808 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=114, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=unmovedTable execute state=CREATE_TABLE_PRE_OPERATION 2023-07-17 11:15:24,808 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "unmovedTable" procId is: 114 2023-07-17 11:15:24,809 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(1230): Checking to see if procedure is done pid=114 2023-07-17 11:15:24,810 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-17 11:15:24,811 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-17 11:15:24,811 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-17 11:15:24,812 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-17 11:15:24,812 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-17 11:15:24,818 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=114, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=unmovedTable execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-17 11:15:24,820 DEBUG [HFileArchiver-8] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/.tmp/data/default/unmovedTable/21527a315e64c88028dc354e9a834764 2023-07-17 11:15:24,821 DEBUG [HFileArchiver-8] backup.HFileArchiver(153): Directory hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/.tmp/data/default/unmovedTable/21527a315e64c88028dc354e9a834764 empty. 
2023-07-17 11:15:24,822 DEBUG [HFileArchiver-8] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/.tmp/data/default/unmovedTable/21527a315e64c88028dc354e9a834764 2023-07-17 11:15:24,822 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived unmovedTable regions 2023-07-17 11:15:24,843 DEBUG [PEWorker-2] util.FSTableDescriptors(570): Wrote into hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/.tmp/data/default/unmovedTable/.tabledesc/.tableinfo.0000000001 2023-07-17 11:15:24,845 INFO [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(7675): creating {ENCODED => 21527a315e64c88028dc354e9a834764, NAME => 'unmovedTable,,1689592524805.21527a315e64c88028dc354e9a834764.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='unmovedTable', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'ut', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/.tmp 2023-07-17 11:15:24,886 DEBUG [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(866): Instantiated unmovedTable,,1689592524805.21527a315e64c88028dc354e9a834764.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-17 11:15:24,887 DEBUG [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1604): Closing 21527a315e64c88028dc354e9a834764, disabling compactions & flushes 2023-07-17 11:15:24,887 INFO [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1626): Closing region unmovedTable,,1689592524805.21527a315e64c88028dc354e9a834764. 2023-07-17 11:15:24,887 DEBUG [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on unmovedTable,,1689592524805.21527a315e64c88028dc354e9a834764. 2023-07-17 11:15:24,887 DEBUG [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1714): Acquired close lock on unmovedTable,,1689592524805.21527a315e64c88028dc354e9a834764. after waiting 0 ms 2023-07-17 11:15:24,887 DEBUG [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1724): Updates disabled for region unmovedTable,,1689592524805.21527a315e64c88028dc354e9a834764. 2023-07-17 11:15:24,887 INFO [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1838): Closed unmovedTable,,1689592524805.21527a315e64c88028dc354e9a834764. 2023-07-17 11:15:24,887 DEBUG [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1558): Region close journal for 21527a315e64c88028dc354e9a834764: 2023-07-17 11:15:24,889 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=114, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=unmovedTable execute state=CREATE_TABLE_ADD_TO_META 2023-07-17 11:15:24,890 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"unmovedTable,,1689592524805.21527a315e64c88028dc354e9a834764.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689592524890"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689592524890"}]},"ts":"1689592524890"} 2023-07-17 11:15:24,892 INFO [PEWorker-2] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 
2023-07-17 11:15:24,892 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=114, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=unmovedTable execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-17 11:15:24,892 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"unmovedTable","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689592524892"}]},"ts":"1689592524892"} 2023-07-17 11:15:24,893 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=unmovedTable, state=ENABLING in hbase:meta 2023-07-17 11:15:24,897 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=115, ppid=114, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=unmovedTable, region=21527a315e64c88028dc354e9a834764, ASSIGN}] 2023-07-17 11:15:24,899 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=115, ppid=114, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=unmovedTable, region=21527a315e64c88028dc354e9a834764, ASSIGN 2023-07-17 11:15:24,899 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=115, ppid=114, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=unmovedTable, region=21527a315e64c88028dc354e9a834764, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,40489,1689592505619; forceNewPlan=false, retain=false 2023-07-17 11:15:24,910 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(1230): Checking to see if procedure is done pid=114 2023-07-17 11:15:25,051 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=115 updating hbase:meta row=21527a315e64c88028dc354e9a834764, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,40489,1689592505619 2023-07-17 11:15:25,052 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"unmovedTable,,1689592524805.21527a315e64c88028dc354e9a834764.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689592525051"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689592525051"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689592525051"}]},"ts":"1689592525051"} 2023-07-17 11:15:25,054 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=116, ppid=115, state=RUNNABLE; OpenRegionProcedure 21527a315e64c88028dc354e9a834764, server=jenkins-hbase4.apache.org,40489,1689592505619}] 2023-07-17 11:15:25,111 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(1230): Checking to see if procedure is done pid=114 2023-07-17 11:15:25,217 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open unmovedTable,,1689592524805.21527a315e64c88028dc354e9a834764. 
2023-07-17 11:15:25,218 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 21527a315e64c88028dc354e9a834764, NAME => 'unmovedTable,,1689592524805.21527a315e64c88028dc354e9a834764.', STARTKEY => '', ENDKEY => ''} 2023-07-17 11:15:25,218 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table unmovedTable 21527a315e64c88028dc354e9a834764 2023-07-17 11:15:25,218 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated unmovedTable,,1689592524805.21527a315e64c88028dc354e9a834764.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-17 11:15:25,218 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 21527a315e64c88028dc354e9a834764 2023-07-17 11:15:25,218 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 21527a315e64c88028dc354e9a834764 2023-07-17 11:15:25,220 INFO [StoreOpener-21527a315e64c88028dc354e9a834764-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family ut of region 21527a315e64c88028dc354e9a834764 2023-07-17 11:15:25,222 DEBUG [StoreOpener-21527a315e64c88028dc354e9a834764-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/data/default/unmovedTable/21527a315e64c88028dc354e9a834764/ut 2023-07-17 11:15:25,222 DEBUG [StoreOpener-21527a315e64c88028dc354e9a834764-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/data/default/unmovedTable/21527a315e64c88028dc354e9a834764/ut 2023-07-17 11:15:25,223 INFO [StoreOpener-21527a315e64c88028dc354e9a834764-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 21527a315e64c88028dc354e9a834764 columnFamilyName ut 2023-07-17 11:15:25,224 INFO [StoreOpener-21527a315e64c88028dc354e9a834764-1] regionserver.HStore(310): Store=21527a315e64c88028dc354e9a834764/ut, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-17 11:15:25,228 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/data/default/unmovedTable/21527a315e64c88028dc354e9a834764 2023-07-17 11:15:25,228 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under 
hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/data/default/unmovedTable/21527a315e64c88028dc354e9a834764 2023-07-17 11:15:25,245 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 21527a315e64c88028dc354e9a834764 2023-07-17 11:15:25,248 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/data/default/unmovedTable/21527a315e64c88028dc354e9a834764/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-17 11:15:25,249 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 21527a315e64c88028dc354e9a834764; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11067011040, jitterRate=0.03069572150707245}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-17 11:15:25,249 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 21527a315e64c88028dc354e9a834764: 2023-07-17 11:15:25,250 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for unmovedTable,,1689592524805.21527a315e64c88028dc354e9a834764., pid=116, masterSystemTime=1689592525209 2023-07-17 11:15:25,252 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for unmovedTable,,1689592524805.21527a315e64c88028dc354e9a834764. 2023-07-17 11:15:25,252 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened unmovedTable,,1689592524805.21527a315e64c88028dc354e9a834764. 
2023-07-17 11:15:25,253 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=115 updating hbase:meta row=21527a315e64c88028dc354e9a834764, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,40489,1689592505619 2023-07-17 11:15:25,253 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"unmovedTable,,1689592524805.21527a315e64c88028dc354e9a834764.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689592525253"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689592525253"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689592525253"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689592525253"}]},"ts":"1689592525253"} 2023-07-17 11:15:25,258 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=116, resume processing ppid=115 2023-07-17 11:15:25,258 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=116, ppid=115, state=SUCCESS; OpenRegionProcedure 21527a315e64c88028dc354e9a834764, server=jenkins-hbase4.apache.org,40489,1689592505619 in 201 msec 2023-07-17 11:15:25,260 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=115, resume processing ppid=114 2023-07-17 11:15:25,260 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=115, ppid=114, state=SUCCESS; TransitRegionStateProcedure table=unmovedTable, region=21527a315e64c88028dc354e9a834764, ASSIGN in 361 msec 2023-07-17 11:15:25,261 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=114, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=unmovedTable execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-17 11:15:25,261 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"unmovedTable","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689592525261"}]},"ts":"1689592525261"} 2023-07-17 11:15:25,266 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=unmovedTable, state=ENABLED in hbase:meta 2023-07-17 11:15:25,269 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=114, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=unmovedTable execute state=CREATE_TABLE_POST_OPERATION 2023-07-17 11:15:25,271 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=114, state=SUCCESS; CreateTableProcedure table=unmovedTable in 463 msec 2023-07-17 11:15:25,413 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(1230): Checking to see if procedure is done pid=114 2023-07-17 11:15:25,413 INFO [Listener at localhost/45539] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:unmovedTable, procId: 114 completed 2023-07-17 11:15:25,413 DEBUG [Listener at localhost/45539] hbase.HBaseTestingUtility(3430): Waiting until all regions of table unmovedTable get assigned. Timeout = 60000ms 2023-07-17 11:15:25,413 INFO [Listener at localhost/45539] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-17 11:15:25,416 INFO [Listener at localhost/45539] hbase.HBaseTestingUtility(3484): All regions for table unmovedTable assigned to meta. Checking AM states. 2023-07-17 11:15:25,416 INFO [Listener at localhost/45539] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-17 11:15:25,416 INFO [Listener at localhost/45539] hbase.HBaseTestingUtility(3504): All regions for table unmovedTable assigned. 
2023-07-17 11:15:25,418 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [unmovedTable] to rsgroup normal 2023-07-17 11:15:25,420 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-17 11:15:25,421 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-17 11:15:25,421 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-17 11:15:25,421 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-17 11:15:25,421 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-17 11:15:25,424 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminServer(339): Moving region(s) for table unmovedTable to RSGroup normal 2023-07-17 11:15:25,424 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminServer(345): Moving region 21527a315e64c88028dc354e9a834764 to RSGroup normal 2023-07-17 11:15:25,425 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] procedure2.ProcedureExecutor(1029): Stored pid=117, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=unmovedTable, region=21527a315e64c88028dc354e9a834764, REOPEN/MOVE 2023-07-17 11:15:25,425 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group normal, current retry=0 2023-07-17 11:15:25,425 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=117, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=unmovedTable, region=21527a315e64c88028dc354e9a834764, REOPEN/MOVE 2023-07-17 11:15:25,426 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=117 updating hbase:meta row=21527a315e64c88028dc354e9a834764, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,40489,1689592505619 2023-07-17 11:15:25,426 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"unmovedTable,,1689592524805.21527a315e64c88028dc354e9a834764.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689592525426"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689592525426"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689592525426"}]},"ts":"1689592525426"} 2023-07-17 11:15:25,427 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=118, ppid=117, state=RUNNABLE; CloseRegionProcedure 21527a315e64c88028dc354e9a834764, server=jenkins-hbase4.apache.org,40489,1689592505619}] 2023-07-17 11:15:25,521 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'unmovedTable' 2023-07-17 11:15:25,580 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 21527a315e64c88028dc354e9a834764 2023-07-17 11:15:25,581 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 21527a315e64c88028dc354e9a834764, disabling compactions & flushes 2023-07-17 11:15:25,581 INFO 
[RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region unmovedTable,,1689592524805.21527a315e64c88028dc354e9a834764. 2023-07-17 11:15:25,581 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on unmovedTable,,1689592524805.21527a315e64c88028dc354e9a834764. 2023-07-17 11:15:25,581 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on unmovedTable,,1689592524805.21527a315e64c88028dc354e9a834764. after waiting 0 ms 2023-07-17 11:15:25,582 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region unmovedTable,,1689592524805.21527a315e64c88028dc354e9a834764. 2023-07-17 11:15:25,586 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/data/default/unmovedTable/21527a315e64c88028dc354e9a834764/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-17 11:15:25,587 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed unmovedTable,,1689592524805.21527a315e64c88028dc354e9a834764. 2023-07-17 11:15:25,587 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 21527a315e64c88028dc354e9a834764: 2023-07-17 11:15:25,587 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 21527a315e64c88028dc354e9a834764 move to jenkins-hbase4.apache.org,39617,1689592505673 record at close sequenceid=2 2023-07-17 11:15:25,588 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 21527a315e64c88028dc354e9a834764 2023-07-17 11:15:25,589 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=117 updating hbase:meta row=21527a315e64c88028dc354e9a834764, regionState=CLOSED 2023-07-17 11:15:25,589 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"unmovedTable,,1689592524805.21527a315e64c88028dc354e9a834764.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689592525589"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689592525589"}]},"ts":"1689592525589"} 2023-07-17 11:15:25,592 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=118, resume processing ppid=117 2023-07-17 11:15:25,592 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=118, ppid=117, state=SUCCESS; CloseRegionProcedure 21527a315e64c88028dc354e9a834764, server=jenkins-hbase4.apache.org,40489,1689592505619 in 163 msec 2023-07-17 11:15:25,592 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=117, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=unmovedTable, region=21527a315e64c88028dc354e9a834764, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,39617,1689592505673; forceNewPlan=false, retain=false 2023-07-17 11:15:25,743 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=117 updating hbase:meta row=21527a315e64c88028dc354e9a834764, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,39617,1689592505673 2023-07-17 11:15:25,743 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"unmovedTable,,1689592524805.21527a315e64c88028dc354e9a834764.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689592525743"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689592525743"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689592525743"}]},"ts":"1689592525743"} 2023-07-17 11:15:25,745 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=119, ppid=117, state=RUNNABLE; OpenRegionProcedure 21527a315e64c88028dc354e9a834764, server=jenkins-hbase4.apache.org,39617,1689592505673}] 2023-07-17 11:15:25,901 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open unmovedTable,,1689592524805.21527a315e64c88028dc354e9a834764. 2023-07-17 11:15:25,901 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 21527a315e64c88028dc354e9a834764, NAME => 'unmovedTable,,1689592524805.21527a315e64c88028dc354e9a834764.', STARTKEY => '', ENDKEY => ''} 2023-07-17 11:15:25,901 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table unmovedTable 21527a315e64c88028dc354e9a834764 2023-07-17 11:15:25,901 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated unmovedTable,,1689592524805.21527a315e64c88028dc354e9a834764.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-17 11:15:25,901 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 21527a315e64c88028dc354e9a834764 2023-07-17 11:15:25,901 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 21527a315e64c88028dc354e9a834764 2023-07-17 11:15:25,903 INFO [StoreOpener-21527a315e64c88028dc354e9a834764-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family ut of region 21527a315e64c88028dc354e9a834764 2023-07-17 11:15:25,904 DEBUG [StoreOpener-21527a315e64c88028dc354e9a834764-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/data/default/unmovedTable/21527a315e64c88028dc354e9a834764/ut 2023-07-17 11:15:25,904 DEBUG [StoreOpener-21527a315e64c88028dc354e9a834764-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/data/default/unmovedTable/21527a315e64c88028dc354e9a834764/ut 2023-07-17 11:15:25,904 INFO [StoreOpener-21527a315e64c88028dc354e9a834764-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 
21527a315e64c88028dc354e9a834764 columnFamilyName ut 2023-07-17 11:15:25,905 INFO [StoreOpener-21527a315e64c88028dc354e9a834764-1] regionserver.HStore(310): Store=21527a315e64c88028dc354e9a834764/ut, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-17 11:15:25,905 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/data/default/unmovedTable/21527a315e64c88028dc354e9a834764 2023-07-17 11:15:25,907 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/data/default/unmovedTable/21527a315e64c88028dc354e9a834764 2023-07-17 11:15:25,909 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 21527a315e64c88028dc354e9a834764 2023-07-17 11:15:25,910 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 21527a315e64c88028dc354e9a834764; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10706167520, jitterRate=-0.0029104501008987427}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-17 11:15:25,910 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 21527a315e64c88028dc354e9a834764: 2023-07-17 11:15:25,911 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for unmovedTable,,1689592524805.21527a315e64c88028dc354e9a834764., pid=119, masterSystemTime=1689592525897 2023-07-17 11:15:25,912 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for unmovedTable,,1689592524805.21527a315e64c88028dc354e9a834764. 2023-07-17 11:15:25,912 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened unmovedTable,,1689592524805.21527a315e64c88028dc354e9a834764. 
2023-07-17 11:15:25,913 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=117 updating hbase:meta row=21527a315e64c88028dc354e9a834764, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,39617,1689592505673 2023-07-17 11:15:25,913 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"unmovedTable,,1689592524805.21527a315e64c88028dc354e9a834764.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689592525913"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689592525913"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689592525913"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689592525913"}]},"ts":"1689592525913"} 2023-07-17 11:15:25,915 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=119, resume processing ppid=117 2023-07-17 11:15:25,915 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=119, ppid=117, state=SUCCESS; OpenRegionProcedure 21527a315e64c88028dc354e9a834764, server=jenkins-hbase4.apache.org,39617,1689592505673 in 169 msec 2023-07-17 11:15:25,916 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=117, state=SUCCESS; TransitRegionStateProcedure table=unmovedTable, region=21527a315e64c88028dc354e9a834764, REOPEN/MOVE in 491 msec 2023-07-17 11:15:26,425 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] procedure.ProcedureSyncWait(216): waitFor pid=117 2023-07-17 11:15:26,425 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminServer(369): All regions from table(s) [unmovedTable] moved to target group normal. 2023-07-17 11:15:26,425 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-17 11:15:26,429 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-17 11:15:26,429 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-17 11:15:26,431 INFO [Listener at localhost/45539] hbase.Waiter(180): Waiting up to [1,000] milli-secs(wait.for.ratio=[1]) 2023-07-17 11:15:26,432 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=unmovedTable 2023-07-17 11:15:26,432 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-17 11:15:26,433 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=normal 2023-07-17 11:15:26,433 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-17 11:15:26,434 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] 
rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=unmovedTable 2023-07-17 11:15:26,434 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-17 11:15:26,435 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(408): Client=jenkins//172.31.14.131 rename rsgroup from oldgroup to newgroup 2023-07-17 11:15:26,437 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-17 11:15:26,437 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-17 11:15:26,438 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-17 11:15:26,438 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-17 11:15:26,439 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 9 2023-07-17 11:15:26,441 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RenameRSGroup 2023-07-17 11:15:26,444 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-17 11:15:26,444 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-17 11:15:26,446 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=newgroup 2023-07-17 11:15:26,446 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-17 11:15:26,447 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=testRename 2023-07-17 11:15:26,447 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-17 11:15:26,448 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=unmovedTable 2023-07-17 11:15:26,448 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-17 11:15:26,451 INFO 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-17 11:15:26,451 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-17 11:15:26,453 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [unmovedTable] to rsgroup default 2023-07-17 11:15:26,455 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-17 11:15:26,455 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-17 11:15:26,455 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-17 11:15:26,455 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-17 11:15:26,456 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-17 11:15:26,457 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminServer(339): Moving region(s) for table unmovedTable to RSGroup default 2023-07-17 11:15:26,457 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminServer(345): Moving region 21527a315e64c88028dc354e9a834764 to RSGroup default 2023-07-17 11:15:26,458 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] procedure2.ProcedureExecutor(1029): Stored pid=120, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=unmovedTable, region=21527a315e64c88028dc354e9a834764, REOPEN/MOVE 2023-07-17 11:15:26,458 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group default, current retry=0 2023-07-17 11:15:26,458 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=120, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=unmovedTable, region=21527a315e64c88028dc354e9a834764, REOPEN/MOVE 2023-07-17 11:15:26,459 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=120 updating hbase:meta row=21527a315e64c88028dc354e9a834764, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,39617,1689592505673 2023-07-17 11:15:26,459 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"unmovedTable,,1689592524805.21527a315e64c88028dc354e9a834764.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689592526459"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689592526459"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689592526459"}]},"ts":"1689592526459"} 2023-07-17 11:15:26,460 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=121, ppid=120, state=RUNNABLE; CloseRegionProcedure 21527a315e64c88028dc354e9a834764, server=jenkins-hbase4.apache.org,39617,1689592505673}] 2023-07-17 11:15:26,613 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 
21527a315e64c88028dc354e9a834764 2023-07-17 11:15:26,614 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 21527a315e64c88028dc354e9a834764, disabling compactions & flushes 2023-07-17 11:15:26,614 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region unmovedTable,,1689592524805.21527a315e64c88028dc354e9a834764. 2023-07-17 11:15:26,614 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on unmovedTable,,1689592524805.21527a315e64c88028dc354e9a834764. 2023-07-17 11:15:26,614 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on unmovedTable,,1689592524805.21527a315e64c88028dc354e9a834764. after waiting 0 ms 2023-07-17 11:15:26,614 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region unmovedTable,,1689592524805.21527a315e64c88028dc354e9a834764. 2023-07-17 11:15:26,618 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/data/default/unmovedTable/21527a315e64c88028dc354e9a834764/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-17 11:15:26,619 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed unmovedTable,,1689592524805.21527a315e64c88028dc354e9a834764. 2023-07-17 11:15:26,619 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 21527a315e64c88028dc354e9a834764: 2023-07-17 11:15:26,619 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 21527a315e64c88028dc354e9a834764 move to jenkins-hbase4.apache.org,40489,1689592505619 record at close sequenceid=5 2023-07-17 11:15:26,620 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 21527a315e64c88028dc354e9a834764 2023-07-17 11:15:26,621 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=120 updating hbase:meta row=21527a315e64c88028dc354e9a834764, regionState=CLOSED 2023-07-17 11:15:26,621 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"unmovedTable,,1689592524805.21527a315e64c88028dc354e9a834764.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689592526621"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689592526621"}]},"ts":"1689592526621"} 2023-07-17 11:15:26,623 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=121, resume processing ppid=120 2023-07-17 11:15:26,623 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=121, ppid=120, state=SUCCESS; CloseRegionProcedure 21527a315e64c88028dc354e9a834764, server=jenkins-hbase4.apache.org,39617,1689592505673 in 162 msec 2023-07-17 11:15:26,624 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=120, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=unmovedTable, region=21527a315e64c88028dc354e9a834764, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,40489,1689592505619; forceNewPlan=false, retain=false 2023-07-17 11:15:26,775 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=120 updating hbase:meta row=21527a315e64c88028dc354e9a834764, regionState=OPENING, 
regionLocation=jenkins-hbase4.apache.org,40489,1689592505619 2023-07-17 11:15:26,775 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"unmovedTable,,1689592524805.21527a315e64c88028dc354e9a834764.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689592526774"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689592526774"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689592526774"}]},"ts":"1689592526774"} 2023-07-17 11:15:26,776 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=122, ppid=120, state=RUNNABLE; OpenRegionProcedure 21527a315e64c88028dc354e9a834764, server=jenkins-hbase4.apache.org,40489,1689592505619}] 2023-07-17 11:15:26,933 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open unmovedTable,,1689592524805.21527a315e64c88028dc354e9a834764. 2023-07-17 11:15:26,933 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 21527a315e64c88028dc354e9a834764, NAME => 'unmovedTable,,1689592524805.21527a315e64c88028dc354e9a834764.', STARTKEY => '', ENDKEY => ''} 2023-07-17 11:15:26,933 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table unmovedTable 21527a315e64c88028dc354e9a834764 2023-07-17 11:15:26,934 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated unmovedTable,,1689592524805.21527a315e64c88028dc354e9a834764.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-17 11:15:26,934 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 21527a315e64c88028dc354e9a834764 2023-07-17 11:15:26,934 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 21527a315e64c88028dc354e9a834764 2023-07-17 11:15:26,940 INFO [StoreOpener-21527a315e64c88028dc354e9a834764-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family ut of region 21527a315e64c88028dc354e9a834764 2023-07-17 11:15:26,941 DEBUG [StoreOpener-21527a315e64c88028dc354e9a834764-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/data/default/unmovedTable/21527a315e64c88028dc354e9a834764/ut 2023-07-17 11:15:26,941 DEBUG [StoreOpener-21527a315e64c88028dc354e9a834764-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/data/default/unmovedTable/21527a315e64c88028dc354e9a834764/ut 2023-07-17 11:15:26,941 INFO [StoreOpener-21527a315e64c88028dc354e9a834764-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single 
output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 21527a315e64c88028dc354e9a834764 columnFamilyName ut 2023-07-17 11:15:26,942 INFO [StoreOpener-21527a315e64c88028dc354e9a834764-1] regionserver.HStore(310): Store=21527a315e64c88028dc354e9a834764/ut, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-17 11:15:26,943 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/data/default/unmovedTable/21527a315e64c88028dc354e9a834764 2023-07-17 11:15:26,945 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/data/default/unmovedTable/21527a315e64c88028dc354e9a834764 2023-07-17 11:15:26,948 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 21527a315e64c88028dc354e9a834764 2023-07-17 11:15:26,949 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 21527a315e64c88028dc354e9a834764; next sequenceid=8; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9721735200, jitterRate=-0.0945928543806076}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-17 11:15:26,949 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 21527a315e64c88028dc354e9a834764: 2023-07-17 11:15:26,950 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for unmovedTable,,1689592524805.21527a315e64c88028dc354e9a834764., pid=122, masterSystemTime=1689592526928 2023-07-17 11:15:26,951 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for unmovedTable,,1689592524805.21527a315e64c88028dc354e9a834764. 2023-07-17 11:15:26,951 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened unmovedTable,,1689592524805.21527a315e64c88028dc354e9a834764. 
2023-07-17 11:15:26,952 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=120 updating hbase:meta row=21527a315e64c88028dc354e9a834764, regionState=OPEN, openSeqNum=8, regionLocation=jenkins-hbase4.apache.org,40489,1689592505619 2023-07-17 11:15:26,952 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"unmovedTable,,1689592524805.21527a315e64c88028dc354e9a834764.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689592526952"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689592526952"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689592526952"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689592526952"}]},"ts":"1689592526952"} 2023-07-17 11:15:26,958 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=122, resume processing ppid=120 2023-07-17 11:15:26,958 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=122, ppid=120, state=SUCCESS; OpenRegionProcedure 21527a315e64c88028dc354e9a834764, server=jenkins-hbase4.apache.org,40489,1689592505619 in 177 msec 2023-07-17 11:15:26,960 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=120, state=SUCCESS; TransitRegionStateProcedure table=unmovedTable, region=21527a315e64c88028dc354e9a834764, REOPEN/MOVE in 501 msec 2023-07-17 11:15:27,156 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'testRename' 2023-07-17 11:15:27,458 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] procedure.ProcedureSyncWait(216): waitFor pid=120 2023-07-17 11:15:27,458 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminServer(369): All regions from table(s) [unmovedTable] moved to target group default. 
2023-07-17 11:15:27,459 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-17 11:15:27,460 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:39617] to rsgroup default 2023-07-17 11:15:27,463 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-17 11:15:27,463 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-17 11:15:27,464 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-17 11:15:27,464 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-17 11:15:27,464 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-17 11:15:27,470 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group normal, current retry=0 2023-07-17 11:15:27,470 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,39617,1689592505673] are moved back to normal 2023-07-17 11:15:27,470 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminServer(438): Move servers done: normal => default 2023-07-17 11:15:27,471 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-17 11:15:27,472 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup normal 2023-07-17 11:15:27,484 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-17 11:15:27,484 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-17 11:15:27,484 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-17 11:15:27,485 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 7 2023-07-17 11:15:27,486 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-17 11:15:27,488 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-17 11:15:27,488 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-17 11:15:27,488 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-17 11:15:27,488 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-17 11:15:27,489 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-17 11:15:27,489 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-17 11:15:27,493 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-17 11:15:27,493 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-17 11:15:27,494 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-17 11:15:27,496 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-17 11:15:27,498 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [testRename] to rsgroup default 2023-07-17 11:15:27,500 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-17 11:15:27,501 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-17 11:15:27,501 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-17 11:15:27,502 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminServer(339): Moving region(s) for table testRename to RSGroup default 2023-07-17 11:15:27,502 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminServer(345): Moving region 2b74521ed4637f75fb35cc5495c946be to RSGroup default 2023-07-17 11:15:27,503 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] procedure2.ProcedureExecutor(1029): Stored pid=123, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=testRename, region=2b74521ed4637f75fb35cc5495c946be, REOPEN/MOVE 2023-07-17 11:15:27,503 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group default, current retry=0 2023-07-17 11:15:27,503 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=123, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=testRename, region=2b74521ed4637f75fb35cc5495c946be, REOPEN/MOVE 2023-07-17 11:15:27,504 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=123 updating hbase:meta row=2b74521ed4637f75fb35cc5495c946be, 
regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,35719,1689592509057 2023-07-17 11:15:27,504 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"testRename,,1689592523142.2b74521ed4637f75fb35cc5495c946be.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689592527504"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689592527504"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689592527504"}]},"ts":"1689592527504"} 2023-07-17 11:15:27,505 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=124, ppid=123, state=RUNNABLE; CloseRegionProcedure 2b74521ed4637f75fb35cc5495c946be, server=jenkins-hbase4.apache.org,35719,1689592509057}] 2023-07-17 11:15:27,658 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 2b74521ed4637f75fb35cc5495c946be 2023-07-17 11:15:27,659 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 2b74521ed4637f75fb35cc5495c946be, disabling compactions & flushes 2023-07-17 11:15:27,659 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region testRename,,1689592523142.2b74521ed4637f75fb35cc5495c946be. 2023-07-17 11:15:27,659 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on testRename,,1689592523142.2b74521ed4637f75fb35cc5495c946be. 2023-07-17 11:15:27,659 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on testRename,,1689592523142.2b74521ed4637f75fb35cc5495c946be. after waiting 0 ms 2023-07-17 11:15:27,659 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region testRename,,1689592523142.2b74521ed4637f75fb35cc5495c946be. 2023-07-17 11:15:27,663 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/data/default/testRename/2b74521ed4637f75fb35cc5495c946be/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-17 11:15:27,664 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed testRename,,1689592523142.2b74521ed4637f75fb35cc5495c946be. 
2023-07-17 11:15:27,664 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 2b74521ed4637f75fb35cc5495c946be: 2023-07-17 11:15:27,664 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 2b74521ed4637f75fb35cc5495c946be move to jenkins-hbase4.apache.org,39617,1689592505673 record at close sequenceid=5 2023-07-17 11:15:27,665 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 2b74521ed4637f75fb35cc5495c946be 2023-07-17 11:15:27,666 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=123 updating hbase:meta row=2b74521ed4637f75fb35cc5495c946be, regionState=CLOSED 2023-07-17 11:15:27,666 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"testRename,,1689592523142.2b74521ed4637f75fb35cc5495c946be.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689592527666"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689592527666"}]},"ts":"1689592527666"} 2023-07-17 11:15:27,668 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=124, resume processing ppid=123 2023-07-17 11:15:27,669 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=124, ppid=123, state=SUCCESS; CloseRegionProcedure 2b74521ed4637f75fb35cc5495c946be, server=jenkins-hbase4.apache.org,35719,1689592509057 in 162 msec 2023-07-17 11:15:27,669 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=123, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=testRename, region=2b74521ed4637f75fb35cc5495c946be, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,39617,1689592505673; forceNewPlan=false, retain=false 2023-07-17 11:15:27,819 INFO [jenkins-hbase4:38451] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 2023-07-17 11:15:27,820 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=123 updating hbase:meta row=2b74521ed4637f75fb35cc5495c946be, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,39617,1689592505673 2023-07-17 11:15:27,820 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"testRename,,1689592523142.2b74521ed4637f75fb35cc5495c946be.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689592527820"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689592527820"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689592527820"}]},"ts":"1689592527820"} 2023-07-17 11:15:27,822 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=125, ppid=123, state=RUNNABLE; OpenRegionProcedure 2b74521ed4637f75fb35cc5495c946be, server=jenkins-hbase4.apache.org,39617,1689592505673}] 2023-07-17 11:15:27,977 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open testRename,,1689592523142.2b74521ed4637f75fb35cc5495c946be. 
2023-07-17 11:15:27,978 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 2b74521ed4637f75fb35cc5495c946be, NAME => 'testRename,,1689592523142.2b74521ed4637f75fb35cc5495c946be.', STARTKEY => '', ENDKEY => ''} 2023-07-17 11:15:27,978 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table testRename 2b74521ed4637f75fb35cc5495c946be 2023-07-17 11:15:27,978 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated testRename,,1689592523142.2b74521ed4637f75fb35cc5495c946be.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-17 11:15:27,978 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 2b74521ed4637f75fb35cc5495c946be 2023-07-17 11:15:27,978 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 2b74521ed4637f75fb35cc5495c946be 2023-07-17 11:15:27,980 INFO [StoreOpener-2b74521ed4637f75fb35cc5495c946be-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family tr of region 2b74521ed4637f75fb35cc5495c946be 2023-07-17 11:15:27,981 DEBUG [StoreOpener-2b74521ed4637f75fb35cc5495c946be-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/data/default/testRename/2b74521ed4637f75fb35cc5495c946be/tr 2023-07-17 11:15:27,981 DEBUG [StoreOpener-2b74521ed4637f75fb35cc5495c946be-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/data/default/testRename/2b74521ed4637f75fb35cc5495c946be/tr 2023-07-17 11:15:27,982 INFO [StoreOpener-2b74521ed4637f75fb35cc5495c946be-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 2b74521ed4637f75fb35cc5495c946be columnFamilyName tr 2023-07-17 11:15:27,982 INFO [StoreOpener-2b74521ed4637f75fb35cc5495c946be-1] regionserver.HStore(310): Store=2b74521ed4637f75fb35cc5495c946be/tr, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-17 11:15:27,983 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/data/default/testRename/2b74521ed4637f75fb35cc5495c946be 2023-07-17 11:15:27,985 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under 
hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/data/default/testRename/2b74521ed4637f75fb35cc5495c946be 2023-07-17 11:15:27,988 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 2b74521ed4637f75fb35cc5495c946be 2023-07-17 11:15:27,989 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 2b74521ed4637f75fb35cc5495c946be; next sequenceid=8; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10369706880, jitterRate=-0.03424578905105591}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-17 11:15:27,989 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 2b74521ed4637f75fb35cc5495c946be: 2023-07-17 11:15:27,993 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for testRename,,1689592523142.2b74521ed4637f75fb35cc5495c946be., pid=125, masterSystemTime=1689592527973 2023-07-17 11:15:27,996 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for testRename,,1689592523142.2b74521ed4637f75fb35cc5495c946be. 2023-07-17 11:15:27,996 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened testRename,,1689592523142.2b74521ed4637f75fb35cc5495c946be. 2023-07-17 11:15:28,000 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=123 updating hbase:meta row=2b74521ed4637f75fb35cc5495c946be, regionState=OPEN, openSeqNum=8, regionLocation=jenkins-hbase4.apache.org,39617,1689592505673 2023-07-17 11:15:28,000 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"testRename,,1689592523142.2b74521ed4637f75fb35cc5495c946be.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689592528000"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689592528000"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689592528000"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689592528000"}]},"ts":"1689592528000"} 2023-07-17 11:15:28,003 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=125, resume processing ppid=123 2023-07-17 11:15:28,003 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=125, ppid=123, state=SUCCESS; OpenRegionProcedure 2b74521ed4637f75fb35cc5495c946be, server=jenkins-hbase4.apache.org,39617,1689592505673 in 179 msec 2023-07-17 11:15:28,004 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=123, state=SUCCESS; TransitRegionStateProcedure table=testRename, region=2b74521ed4637f75fb35cc5495c946be, REOPEN/MOVE in 500 msec 2023-07-17 11:15:28,503 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] procedure.ProcedureSyncWait(216): waitFor pid=123 2023-07-17 11:15:28,504 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminServer(369): All regions from table(s) [testRename] moved to target group default. 
2023-07-17 11:15:28,504 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-17 11:15:28,505 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:37409, jenkins-hbase4.apache.org:35719] to rsgroup default 2023-07-17 11:15:28,507 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-17 11:15:28,507 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-17 11:15:28,511 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-17 11:15:28,514 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group newgroup, current retry=0 2023-07-17 11:15:28,514 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,35719,1689592509057, jenkins-hbase4.apache.org,37409,1689592505527] are moved back to newgroup 2023-07-17 11:15:28,514 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminServer(438): Move servers done: newgroup => default 2023-07-17 11:15:28,514 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-17 11:15:28,515 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup newgroup 2023-07-17 11:15:28,519 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-17 11:15:28,520 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-17 11:15:28,521 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-17 11:15:28,524 INFO [Listener at localhost/45539] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-17 11:15:28,525 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-17 11:15:28,526 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-17 11:15:28,527 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-17 11:15:28,528 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-17 11:15:28,538 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: 
/172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-17 11:15:28,541 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-17 11:15:28,541 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-17 11:15:28,543 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:38451] to rsgroup master 2023-07-17 11:15:28,543 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:38451 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-17 11:15:28,543 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] ipc.CallRunner(144): callId: 759 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:36004 deadline: 1689593728543, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:38451 is either offline or it does not exist. 2023-07-17 11:15:28,544 WARN [Listener at localhost/45539] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:38451 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at 
org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:38451 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-17 11:15:28,545 INFO [Listener at localhost/45539] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-17 11:15:28,546 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-17 11:15:28,546 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-17 11:15:28,546 INFO [Listener at localhost/45539] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:35719, jenkins-hbase4.apache.org:37409, jenkins-hbase4.apache.org:39617, jenkins-hbase4.apache.org:40489], Tables:[hbase:meta, hbase:namespace, unmovedTable, hbase:rsgroup, testRename], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-17 11:15:28,547 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-17 11:15:28,547 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-17 11:15:28,566 INFO [Listener at localhost/45539] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testRenameRSGroup Thread=497 (was 502), OpenFileDescriptor=743 (was 749), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=484 (was 509), ProcessCount=172 (was 172), AvailableMemoryMB=3046 (was 3096) 2023-07-17 11:15:28,586 INFO [Listener at localhost/45539] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testBogusArgs Thread=497, OpenFileDescriptor=743, MaxFileDescriptor=60000, SystemLoadAverage=484, ProcessCount=172, AvailableMemoryMB=3045 2023-07-17 11:15:28,587 INFO [Listener at localhost/45539] rsgroup.TestRSGroupsBase(132): testBogusArgs 2023-07-17 11:15:28,595 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-17 11:15:28,596 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-17 11:15:28,597 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-17 11:15:28,597 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-17 11:15:28,597 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-17 11:15:28,597 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-17 11:15:28,597 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-17 11:15:28,598 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-17 11:15:28,602 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-17 11:15:28,602 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-17 11:15:28,604 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-17 11:15:28,607 INFO [Listener at localhost/45539] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-17 11:15:28,607 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-17 11:15:28,609 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-17 11:15:28,609 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-17 11:15:28,611 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-17 11:15:28,614 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-17 11:15:28,616 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-17 11:15:28,616 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-17 11:15:28,618 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:38451] to rsgroup master 2023-07-17 11:15:28,618 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:38451 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-17 11:15:28,618 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] ipc.CallRunner(144): callId: 787 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:36004 deadline: 1689593728618, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:38451 is either offline or it does not exist. 2023-07-17 11:15:28,619 WARN [Listener at localhost/45539] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:38451 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:38451 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-17 11:15:28,620 INFO [Listener at localhost/45539] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-17 11:15:28,621 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-17 11:15:28,621 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-17 11:15:28,621 INFO [Listener at localhost/45539] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:35719, jenkins-hbase4.apache.org:37409, jenkins-hbase4.apache.org:39617, jenkins-hbase4.apache.org:40489], Tables:[hbase:meta, hbase:namespace, unmovedTable, hbase:rsgroup, testRename], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-17 11:15:28,622 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-17 11:15:28,622 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-17 11:15:28,622 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=nonexistent 2023-07-17 11:15:28,623 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-17 11:15:28,628 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(334): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, server=bogus:123 2023-07-17 11:15:28,628 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfServer 2023-07-17 11:15:28,628 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] 
rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=bogus 2023-07-17 11:15:28,629 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-17 11:15:28,629 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup bogus 2023-07-17 11:15:28,629 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup bogus does not exist at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.removeRSGroup(RSGroupAdminServer.java:486) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.removeRSGroup(RSGroupAdminEndpoint.java:278) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16208) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-17 11:15:28,629 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] ipc.CallRunner(144): callId: 799 service: MasterService methodName: ExecMasterService size: 87 connection: 172.31.14.131:36004 deadline: 1689593728629, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup bogus does not exist 2023-07-17 11:15:28,631 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [bogus:123] to rsgroup bogus 2023-07-17 11:15:28,631 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup does not exist: bogus at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.getAndCheckRSGroupInfo(RSGroupAdminServer.java:115) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:398) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-17 11:15:28,631 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] ipc.CallRunner(144): callId: 802 service: MasterService methodName: 
ExecMasterService size: 96 connection: 172.31.14.131:36004 deadline: 1689593728631, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup does not exist: bogus 2023-07-17 11:15:28,633 DEBUG [Listener at localhost/45539-EventThread] zookeeper.ZKWatcher(600): master:38451-0x10172fe1c5e0000, quorum=127.0.0.1:49750, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/balancer 2023-07-17 11:15:28,633 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(492): Client=jenkins//172.31.14.131 set balanceSwitch=true 2023-07-17 11:15:28,638 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(292): Client=jenkins//172.31.14.131 balance rsgroup, group=bogus 2023-07-17 11:15:28,638 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup does not exist: bogus at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.balanceRSGroup(RSGroupAdminServer.java:523) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.balanceRSGroup(RSGroupAdminEndpoint.java:299) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16213) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-17 11:15:28,638 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] ipc.CallRunner(144): callId: 806 service: MasterService methodName: ExecMasterService size: 88 connection: 172.31.14.131:36004 deadline: 1689593728637, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup does not exist: bogus 2023-07-17 11:15:28,641 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-17 11:15:28,641 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-17 11:15:28,642 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-17 11:15:28,642 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
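The rejected requests recorded above (GetRSGroupInfo for group "bogus", GetRSGroupInfoOfTable for "nonexistent", GetRSGroupInfoOfServer for bogus:123, and RemoveRSGroup/MoveServers/BalanceRSGroup against "bogus") are the probes issued by testBogusArgs. As orientation, a minimal sketch of client calls with that shape follows, assuming the RSGroupAdminClient API named in the stack traces; this is an illustrative reconstruction written for this note, not code taken from TestRSGroupsAdmin1, and the class and variable names outside the org.apache.hadoop.hbase types are made up.

import java.util.Collections;

import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.constraint.ConstraintException;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

public class BogusArgsSketch {
  // Probes a cluster with nonexistent group/server names, mirroring the
  // rejections recorded in the log above.
  static void probeBogusArgs(Connection conn) throws Exception {
    RSGroupAdminClient groups = new RSGroupAdminClient(conn);

    // Pure lookups of nonexistent entities come back empty (no exception is
    // logged for the GetRSGroupInfo* calls above).
    System.out.println(groups.getRSGroupInfo("bogus"));                             // expected null
    System.out.println(groups.getRSGroupOfServer(Address.fromParts("bogus", 123))); // expected null

    // Mutating calls against nonexistent entities are rejected by the master
    // with a ConstraintException, exactly as the DEBUG records show.
    try {
      groups.removeRSGroup("bogus");
    } catch (ConstraintException expected) {
      // "RSGroup bogus does not exist"
    }
    try {
      groups.moveServers(Collections.singleton(Address.fromParts("bogus", 123)), "bogus");
    } catch (ConstraintException expected) {
      // "RSGroup does not exist: bogus"
    }
    try {
      groups.balanceRSGroup("bogus");
    } catch (ConstraintException expected) {
      // "RSGroup does not exist: bogus"
    }
  }
}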
2023-07-17 11:15:28,642 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-17 11:15:28,642 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-17 11:15:28,642 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-17 11:15:28,643 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-17 11:15:28,646 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-17 11:15:28,646 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-17 11:15:28,648 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-17 11:15:28,650 INFO [Listener at localhost/45539] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-17 11:15:28,651 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-17 11:15:28,652 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-17 11:15:28,653 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-17 11:15:28,654 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-17 11:15:28,657 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-17 11:15:28,661 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-17 11:15:28,662 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-17 11:15:28,663 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:38451] to rsgroup master 2023-07-17 11:15:28,666 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:38451 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-17 11:15:28,666 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] ipc.CallRunner(144): callId: 830 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:36004 deadline: 1689593728663, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:38451 is either offline or it does not exist. 2023-07-17 11:15:28,666 WARN [Listener at localhost/45539] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:38451 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:38451 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-17 11:15:28,668 INFO [Listener at localhost/45539] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-17 11:15:28,668 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-17 11:15:28,668 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-17 11:15:28,668 INFO [Listener at localhost/45539] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:35719, jenkins-hbase4.apache.org:37409, jenkins-hbase4.apache.org:39617, jenkins-hbase4.apache.org:40489], Tables:[hbase:meta, hbase:namespace, unmovedTable, hbase:rsgroup, testRename], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-17 11:15:28,669 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-17 11:15:28,669 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-17 11:15:28,685 INFO [Listener at localhost/45539] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testBogusArgs Thread=501 (was 497) Potentially hanging thread: hconnection-0x70181e65-shared-pool-24 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x70181e65-shared-pool-25 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x62be270e-shared-pool-24 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x62be270e-shared-pool-23 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=743 (was 743), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=484 (was 484), ProcessCount=172 (was 172), AvailableMemoryMB=3044 (was 3045) 2023-07-17 11:15:28,686 WARN [Listener at localhost/45539] hbase.ResourceChecker(130): Thread=501 is superior to 500 2023-07-17 11:15:28,704 INFO [Listener at localhost/45539] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testDisabledTableMove Thread=501, OpenFileDescriptor=743, MaxFileDescriptor=60000, SystemLoadAverage=484, ProcessCount=172, AvailableMemoryMB=3043 2023-07-17 11:15:28,704 WARN [Listener at localhost/45539] hbase.ResourceChecker(130): Thread=501 is superior to 500 2023-07-17 11:15:28,704 INFO [Listener at localhost/45539] rsgroup.TestRSGroupsBase(132): testDisabledTableMove 2023-07-17 11:15:28,708 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-17 11:15:28,708 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-17 11:15:28,709 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-17 11:15:28,709 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
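The records above close out testBogusArgs (the ResourceChecker before/after pair) and open testDisabledTableMove. Each method boundary re-runs the same group-restore cycle that repeats throughout this log: move tables and servers back to "default" (empty sets are ignored server-side), recreate the "master" group, tolerate the "is either offline or it does not exist" rejection when parking the master's address in it ("Got this on setup, FYI"), then wait up to 60 s for the expected layout ("Waiting for cleanup to finish"). A minimal sketch of that cycle follows, assuming the RSGroupAdminClient and HBaseTestingUtility APIs visible in the log; it is a reconstruction for orientation, not the TestRSGroupsBase source.

import java.util.Collections;

import org.apache.hadoop.hbase.HBaseTestingUtility;
import org.apache.hadoop.hbase.Waiter;
import org.apache.hadoop.hbase.constraint.ConstraintException;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

public class RestoreGroupsSketch {
  static void restoreDefaultLayout(HBaseTestingUtility util, RSGroupAdminClient groups,
      Address masterAddress) throws Exception {
    // Everything back into the default group; empty sets are ignored by the
    // master ("moveTables()/moveServers() passed an empty set"), as logged.
    groups.moveTables(Collections.emptySet(), RSGroupInfo.DEFAULT_GROUP);
    groups.moveServers(Collections.emptySet(), RSGroupInfo.DEFAULT_GROUP);

    // Recreate a dedicated "master" group and try to park the master's address in it.
    groups.removeRSGroup("master");
    groups.addRSGroup("master");
    try {
      groups.moveServers(Collections.singleton(masterAddress), "master");
    } catch (ConstraintException e) {
      // The master is not a region server, so the move is rejected:
      // "Server ... is either offline or it does not exist." -- the test only
      // logs this ("Got this on setup, FYI") and carries on.
    }

    // Wait (up to the 60,000 ms seen in hbase.Waiter) until only the expected
    // groups remain; the real test compares the full group layout, this
    // predicate is a simplification.
    Waiter.Predicate<Exception> cleanedUp = () -> groups.listRSGroups().size() == 2;
    util.waitFor(60000, cleanedUp);
  }
}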
2023-07-17 11:15:28,709 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-17 11:15:28,710 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-17 11:15:28,710 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-17 11:15:28,710 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-17 11:15:28,713 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-17 11:15:28,714 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-17 11:15:28,715 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-17 11:15:28,718 INFO [Listener at localhost/45539] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-17 11:15:28,718 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-17 11:15:28,720 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-17 11:15:28,720 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-17 11:15:28,722 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-17 11:15:28,723 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-17 11:15:28,726 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-17 11:15:28,726 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-17 11:15:28,728 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:38451] to rsgroup master 2023-07-17 11:15:28,728 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:38451 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-17 11:15:28,728 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] ipc.CallRunner(144): callId: 858 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:36004 deadline: 1689593728728, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:38451 is either offline or it does not exist. 2023-07-17 11:15:28,728 WARN [Listener at localhost/45539] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:38451 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:38451 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-17 11:15:28,730 INFO [Listener at localhost/45539] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-17 11:15:28,730 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-17 11:15:28,731 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-17 11:15:28,731 INFO [Listener at localhost/45539] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:35719, jenkins-hbase4.apache.org:37409, jenkins-hbase4.apache.org:39617, jenkins-hbase4.apache.org:40489], Tables:[hbase:meta, hbase:namespace, unmovedTable, hbase:rsgroup, testRename], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-17 11:15:28,732 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-17 11:15:28,732 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-17 11:15:28,733 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-17 11:15:28,733 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-17 11:15:28,734 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup Group_testDisabledTableMove_38972067 2023-07-17 11:15:28,735 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testDisabledTableMove_38972067 2023-07-17 11:15:28,738 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-17 
11:15:28,738 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-17 11:15:28,739 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-17 11:15:28,743 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-17 11:15:28,746 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-17 11:15:28,746 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-17 11:15:28,750 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:37409, jenkins-hbase4.apache.org:35719] to rsgroup Group_testDisabledTableMove_38972067 2023-07-17 11:15:28,752 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testDisabledTableMove_38972067 2023-07-17 11:15:28,752 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-17 11:15:28,753 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-17 11:15:28,753 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-17 11:15:28,755 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group default, current retry=0 2023-07-17 11:15:28,755 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,35719,1689592509057, jenkins-hbase4.apache.org,37409,1689592505527] are moved back to default 2023-07-17 11:15:28,755 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminServer(438): Move servers done: default => Group_testDisabledTableMove_38972067 2023-07-17 11:15:28,755 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-17 11:15:28,758 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-17 11:15:28,758 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-17 11:15:28,760 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=Group_testDisabledTableMove_38972067 2023-07-17 11:15:28,760 INFO 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-17 11:15:28,761 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'Group_testDisabledTableMove', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-17 11:15:28,762 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] procedure2.ProcedureExecutor(1029): Stored pid=126, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=Group_testDisabledTableMove 2023-07-17 11:15:28,764 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=126, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=Group_testDisabledTableMove execute state=CREATE_TABLE_PRE_OPERATION 2023-07-17 11:15:28,764 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "Group_testDisabledTableMove" procId is: 126 2023-07-17 11:15:28,765 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(1230): Checking to see if procedure is done pid=126 2023-07-17 11:15:28,766 DEBUG [PEWorker-1] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testDisabledTableMove_38972067 2023-07-17 11:15:28,767 DEBUG [PEWorker-1] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-17 11:15:28,767 DEBUG [PEWorker-1] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-17 11:15:28,767 DEBUG [PEWorker-1] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-17 11:15:28,770 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=126, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=Group_testDisabledTableMove execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-17 11:15:28,774 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/.tmp/data/default/Group_testDisabledTableMove/8cf4bf7273740a80332d73b8051471eb 2023-07-17 11:15:28,774 DEBUG [HFileArchiver-6] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/.tmp/data/default/Group_testDisabledTableMove/02d5263e5d9e0b092b7e5800d7ceb3de 2023-07-17 11:15:28,774 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/.tmp/data/default/Group_testDisabledTableMove/b3d81b6b9bca1199bf9aba1491c57100 2023-07-17 11:15:28,774 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/.tmp/data/default/Group_testDisabledTableMove/4d99053af68b38d1be4d57c4204835cd 2023-07-17 11:15:28,774 DEBUG [HFileArchiver-7] backup.HFileArchiver(131): ARCHIVING 
hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/.tmp/data/default/Group_testDisabledTableMove/15921b7ea0c5590ea84f52463bdee0cd 2023-07-17 11:15:28,775 DEBUG [HFileArchiver-1] backup.HFileArchiver(153): Directory hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/.tmp/data/default/Group_testDisabledTableMove/8cf4bf7273740a80332d73b8051471eb empty. 2023-07-17 11:15:28,775 DEBUG [HFileArchiver-6] backup.HFileArchiver(153): Directory hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/.tmp/data/default/Group_testDisabledTableMove/02d5263e5d9e0b092b7e5800d7ceb3de empty. 2023-07-17 11:15:28,775 DEBUG [HFileArchiver-5] backup.HFileArchiver(153): Directory hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/.tmp/data/default/Group_testDisabledTableMove/4d99053af68b38d1be4d57c4204835cd empty. 2023-07-17 11:15:28,775 DEBUG [HFileArchiver-6] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/.tmp/data/default/Group_testDisabledTableMove/02d5263e5d9e0b092b7e5800d7ceb3de 2023-07-17 11:15:28,776 DEBUG [HFileArchiver-2] backup.HFileArchiver(153): Directory hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/.tmp/data/default/Group_testDisabledTableMove/b3d81b6b9bca1199bf9aba1491c57100 empty. 2023-07-17 11:15:28,776 DEBUG [HFileArchiver-5] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/.tmp/data/default/Group_testDisabledTableMove/4d99053af68b38d1be4d57c4204835cd 2023-07-17 11:15:28,776 DEBUG [HFileArchiver-1] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/.tmp/data/default/Group_testDisabledTableMove/8cf4bf7273740a80332d73b8051471eb 2023-07-17 11:15:28,776 DEBUG [HFileArchiver-7] backup.HFileArchiver(153): Directory hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/.tmp/data/default/Group_testDisabledTableMove/15921b7ea0c5590ea84f52463bdee0cd empty. 
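The RSGroupAdminService requests logged above (AddRSGroup, MoveServers, ListRSGroupInfos, GetRSGroupInfo) are what the test driver issues through the rsgroup admin client before creating the table. A minimal sketch of that client-side sequence, assuming the branch-2.4 hbase-rsgroup client class RSGroupAdminClient and reusing the group name and server addresses from the log; this is illustrative, not the test's actual helper code:

```java
import java.util.HashSet;
import java.util.Set;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

public class RsGroupMoveSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf)) {
      // Assumed: RSGroupAdminClient wraps the master's RSGroupAdminService endpoint.
      RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);

      // AddRSGroup: create the target group seen in the log.
      rsGroupAdmin.addRSGroup("Group_testDisabledTableMove_38972067");

      // MoveServers: move the two region servers out of 'default' into the new group.
      Set<Address> servers = new HashSet<>();
      servers.add(Address.fromParts("jenkins-hbase4.apache.org", 37409));
      servers.add(Address.fromParts("jenkins-hbase4.apache.org", 35719));
      rsGroupAdmin.moveServers(servers, "Group_testDisabledTableMove_38972067");

      // GetRSGroupInfo: read the group back, as the GetRSGroupInfo request above does.
      RSGroupInfo info = rsGroupAdmin.getRSGroupInfo("Group_testDisabledTableMove_38972067");
      System.out.println("group servers: " + info.getServers());
    }
  }
}
```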
2023-07-17 11:15:28,777 DEBUG [HFileArchiver-2] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/.tmp/data/default/Group_testDisabledTableMove/b3d81b6b9bca1199bf9aba1491c57100 2023-07-17 11:15:28,777 DEBUG [HFileArchiver-7] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/.tmp/data/default/Group_testDisabledTableMove/15921b7ea0c5590ea84f52463bdee0cd 2023-07-17 11:15:28,777 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(328): Archived Group_testDisabledTableMove regions 2023-07-17 11:15:28,816 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/.tmp/data/default/Group_testDisabledTableMove/.tabledesc/.tableinfo.0000000001 2023-07-17 11:15:28,818 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(7675): creating {ENCODED => 02d5263e5d9e0b092b7e5800d7ceb3de, NAME => 'Group_testDisabledTableMove,i\xBF\x14i\xBE,1689592528761.02d5263e5d9e0b092b7e5800d7ceb3de.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'}, tableDescriptor='Group_testDisabledTableMove', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/.tmp 2023-07-17 11:15:28,818 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(7675): creating {ENCODED => 8cf4bf7273740a80332d73b8051471eb, NAME => 'Group_testDisabledTableMove,,1689592528761.8cf4bf7273740a80332d73b8051471eb.', STARTKEY => '', ENDKEY => 'aaaaa'}, tableDescriptor='Group_testDisabledTableMove', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/.tmp 2023-07-17 11:15:28,818 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(7675): creating {ENCODED => 15921b7ea0c5590ea84f52463bdee0cd, NAME => 'Group_testDisabledTableMove,aaaaa,1689592528761.15921b7ea0c5590ea84f52463bdee0cd.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'}, tableDescriptor='Group_testDisabledTableMove', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/.tmp 2023-07-17 11:15:28,868 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(1230): Checking to see if procedure is done pid=126 2023-07-17 11:15:28,869 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(866): Instantiated 
Group_testDisabledTableMove,,1689592528761.8cf4bf7273740a80332d73b8051471eb.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-17 11:15:28,870 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1604): Closing 8cf4bf7273740a80332d73b8051471eb, disabling compactions & flushes 2023-07-17 11:15:28,870 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,,1689592528761.8cf4bf7273740a80332d73b8051471eb. 2023-07-17 11:15:28,870 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,,1689592528761.8cf4bf7273740a80332d73b8051471eb. 2023-07-17 11:15:28,870 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,,1689592528761.8cf4bf7273740a80332d73b8051471eb. after waiting 0 ms 2023-07-17 11:15:28,870 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,,1689592528761.8cf4bf7273740a80332d73b8051471eb. 2023-07-17 11:15:28,870 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,,1689592528761.8cf4bf7273740a80332d73b8051471eb. 2023-07-17 11:15:28,870 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1558): Region close journal for 8cf4bf7273740a80332d73b8051471eb: 2023-07-17 11:15:28,870 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(7675): creating {ENCODED => 4d99053af68b38d1be4d57c4204835cd, NAME => 'Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689592528761.4d99053af68b38d1be4d57c4204835cd.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'}, tableDescriptor='Group_testDisabledTableMove', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/.tmp 2023-07-17 11:15:28,877 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,i\xBF\x14i\xBE,1689592528761.02d5263e5d9e0b092b7e5800d7ceb3de.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-17 11:15:28,877 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1604): Closing 02d5263e5d9e0b092b7e5800d7ceb3de, disabling compactions & flushes 2023-07-17 11:15:28,877 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,i\xBF\x14i\xBE,1689592528761.02d5263e5d9e0b092b7e5800d7ceb3de. 2023-07-17 11:15:28,877 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,i\xBF\x14i\xBE,1689592528761.02d5263e5d9e0b092b7e5800d7ceb3de. 
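The regions being instantiated and immediately closed here are the CREATE_TABLE_WRITE_FS_LAYOUT step of pid=126; the table shape comes from the create request logged earlier (single family 'f', REGION_REPLICATION 1, four split keys giving five regions). A minimal sketch of an equivalent client-side create call, assuming the standard 2.x Admin API rather than the test's own helpers, with the split-key bytes taken from the region boundaries printed in the log:

```java
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.TableDescriptor;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
import org.apache.hadoop.hbase.util.Bytes;

// Hypothetical helper, not the test's code; 'conn' is an already-open Connection.
class CreateTableSketch {
  static void createDisabledTableMoveTable(Connection conn) throws Exception {
    TableDescriptor desc = TableDescriptorBuilder
        .newBuilder(TableName.valueOf("Group_testDisabledTableMove"))
        .setRegionReplication(1)
        .setColumnFamily(ColumnFamilyDescriptorBuilder.of("f"))
        .build();

    // Four split keys -> the five regions seen in the log:
    // (,aaaaa) [aaaaa,i\xBF\x14i\xBE) [i\xBF\x14i\xBE,r\x1C\xC7r\x1B) [r\x1C\xC7r\x1B,zzzzz) [zzzzz,)
    byte[][] splitKeys = new byte[][] {
        Bytes.toBytes("aaaaa"),
        Bytes.toBytesBinary("i\\xBF\\x14i\\xBE"),
        Bytes.toBytesBinary("r\\x1C\\xC7r\\x1B"),
        Bytes.toBytes("zzzzz")
    };

    try (Admin admin = conn.getAdmin()) {
      // Blocks while the master polls the CreateTableProcedure, which is what the repeated
      // "Checking to see if procedure is done pid=126" lines correspond to.
      admin.createTable(desc, splitKeys);
    }
  }
}
```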
2023-07-17 11:15:28,877 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,i\xBF\x14i\xBE,1689592528761.02d5263e5d9e0b092b7e5800d7ceb3de. after waiting 0 ms 2023-07-17 11:15:28,877 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,i\xBF\x14i\xBE,1689592528761.02d5263e5d9e0b092b7e5800d7ceb3de. 2023-07-17 11:15:28,878 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,i\xBF\x14i\xBE,1689592528761.02d5263e5d9e0b092b7e5800d7ceb3de. 2023-07-17 11:15:28,878 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1558): Region close journal for 02d5263e5d9e0b092b7e5800d7ceb3de: 2023-07-17 11:15:28,878 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(7675): creating {ENCODED => b3d81b6b9bca1199bf9aba1491c57100, NAME => 'Group_testDisabledTableMove,zzzzz,1689592528761.b3d81b6b9bca1199bf9aba1491c57100.', STARTKEY => 'zzzzz', ENDKEY => ''}, tableDescriptor='Group_testDisabledTableMove', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/.tmp 2023-07-17 11:15:28,880 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,aaaaa,1689592528761.15921b7ea0c5590ea84f52463bdee0cd.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-17 11:15:28,880 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1604): Closing 15921b7ea0c5590ea84f52463bdee0cd, disabling compactions & flushes 2023-07-17 11:15:28,880 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,aaaaa,1689592528761.15921b7ea0c5590ea84f52463bdee0cd. 2023-07-17 11:15:28,881 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,aaaaa,1689592528761.15921b7ea0c5590ea84f52463bdee0cd. 2023-07-17 11:15:28,881 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,aaaaa,1689592528761.15921b7ea0c5590ea84f52463bdee0cd. after waiting 0 ms 2023-07-17 11:15:28,881 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,aaaaa,1689592528761.15921b7ea0c5590ea84f52463bdee0cd. 2023-07-17 11:15:28,881 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,aaaaa,1689592528761.15921b7ea0c5590ea84f52463bdee0cd. 
2023-07-17 11:15:28,881 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1558): Region close journal for 15921b7ea0c5590ea84f52463bdee0cd: 2023-07-17 11:15:28,898 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689592528761.4d99053af68b38d1be4d57c4204835cd.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-17 11:15:28,898 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1604): Closing 4d99053af68b38d1be4d57c4204835cd, disabling compactions & flushes 2023-07-17 11:15:28,899 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689592528761.4d99053af68b38d1be4d57c4204835cd. 2023-07-17 11:15:28,899 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689592528761.4d99053af68b38d1be4d57c4204835cd. 2023-07-17 11:15:28,899 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689592528761.4d99053af68b38d1be4d57c4204835cd. after waiting 0 ms 2023-07-17 11:15:28,899 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689592528761.4d99053af68b38d1be4d57c4204835cd. 2023-07-17 11:15:28,899 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689592528761.4d99053af68b38d1be4d57c4204835cd. 2023-07-17 11:15:28,899 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1558): Region close journal for 4d99053af68b38d1be4d57c4204835cd: 2023-07-17 11:15:28,917 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,zzzzz,1689592528761.b3d81b6b9bca1199bf9aba1491c57100.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-17 11:15:28,918 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1604): Closing b3d81b6b9bca1199bf9aba1491c57100, disabling compactions & flushes 2023-07-17 11:15:28,918 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,zzzzz,1689592528761.b3d81b6b9bca1199bf9aba1491c57100. 2023-07-17 11:15:28,918 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,zzzzz,1689592528761.b3d81b6b9bca1199bf9aba1491c57100. 2023-07-17 11:15:28,918 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,zzzzz,1689592528761.b3d81b6b9bca1199bf9aba1491c57100. after waiting 0 ms 2023-07-17 11:15:28,918 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,zzzzz,1689592528761.b3d81b6b9bca1199bf9aba1491c57100. 
2023-07-17 11:15:28,918 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,zzzzz,1689592528761.b3d81b6b9bca1199bf9aba1491c57100. 2023-07-17 11:15:28,918 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1558): Region close journal for b3d81b6b9bca1199bf9aba1491c57100: 2023-07-17 11:15:28,921 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=126, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=Group_testDisabledTableMove execute state=CREATE_TABLE_ADD_TO_META 2023-07-17 11:15:28,922 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,,1689592528761.8cf4bf7273740a80332d73b8051471eb.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689592528922"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689592528922"}]},"ts":"1689592528922"} 2023-07-17 11:15:28,923 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,i\\xBF\\x14i\\xBE,1689592528761.02d5263e5d9e0b092b7e5800d7ceb3de.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689592528922"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689592528922"}]},"ts":"1689592528922"} 2023-07-17 11:15:28,923 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,aaaaa,1689592528761.15921b7ea0c5590ea84f52463bdee0cd.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689592528922"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689592528922"}]},"ts":"1689592528922"} 2023-07-17 11:15:28,923 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,r\\x1C\\xC7r\\x1B,1689592528761.4d99053af68b38d1be4d57c4204835cd.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689592528922"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689592528922"}]},"ts":"1689592528922"} 2023-07-17 11:15:28,923 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,zzzzz,1689592528761.b3d81b6b9bca1199bf9aba1491c57100.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689592528922"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689592528922"}]},"ts":"1689592528922"} 2023-07-17 11:15:28,926 INFO [PEWorker-1] hbase.MetaTableAccessor(1496): Added 5 regions to meta. 
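At this point CREATE_TABLE_ADD_TO_META has written one regioninfo/state row per region ("Added 5 regions to meta"). For reference, the same five regions can be read back through the public client API; a small sketch assuming an already-open Connection 'conn' (hypothetical helper, not part of the test):

```java
import java.util.List;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.RegionInfo;
import org.apache.hadoop.hbase.util.Bytes;

// Hypothetical helper: prints the regions that were just added to hbase:meta.
class ListRegionsSketch {
  static void printRegions(Connection conn) throws Exception {
    try (Admin admin = conn.getAdmin()) {
      List<RegionInfo> regions =
          admin.getRegions(TableName.valueOf("Group_testDisabledTableMove"));
      for (RegionInfo ri : regions) {
        System.out.println(ri.getEncodedName() + " ["
            + Bytes.toStringBinary(ri.getStartKey()) + ", "
            + Bytes.toStringBinary(ri.getEndKey()) + ")");
      }
    }
  }
}
```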
2023-07-17 11:15:28,926 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=126, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=Group_testDisabledTableMove execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-17 11:15:28,927 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testDisabledTableMove","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689592528927"}]},"ts":"1689592528927"} 2023-07-17 11:15:28,928 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=Group_testDisabledTableMove, state=ENABLING in hbase:meta 2023-07-17 11:15:28,938 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-17 11:15:28,938 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-17 11:15:28,938 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-17 11:15:28,938 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-17 11:15:28,938 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=127, ppid=126, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=8cf4bf7273740a80332d73b8051471eb, ASSIGN}, {pid=128, ppid=126, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=15921b7ea0c5590ea84f52463bdee0cd, ASSIGN}, {pid=129, ppid=126, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=02d5263e5d9e0b092b7e5800d7ceb3de, ASSIGN}, {pid=130, ppid=126, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=4d99053af68b38d1be4d57c4204835cd, ASSIGN}, {pid=131, ppid=126, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=b3d81b6b9bca1199bf9aba1491c57100, ASSIGN}] 2023-07-17 11:15:28,943 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=131, ppid=126, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=b3d81b6b9bca1199bf9aba1491c57100, ASSIGN 2023-07-17 11:15:28,943 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=130, ppid=126, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=4d99053af68b38d1be4d57c4204835cd, ASSIGN 2023-07-17 11:15:28,943 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=129, ppid=126, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=02d5263e5d9e0b092b7e5800d7ceb3de, ASSIGN 2023-07-17 11:15:28,943 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=128, ppid=126, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=15921b7ea0c5590ea84f52463bdee0cd, ASSIGN 2023-07-17 11:15:28,944 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=127, ppid=126, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; 
TransitRegionStateProcedure table=Group_testDisabledTableMove, region=8cf4bf7273740a80332d73b8051471eb, ASSIGN 2023-07-17 11:15:28,944 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=131, ppid=126, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=b3d81b6b9bca1199bf9aba1491c57100, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,40489,1689592505619; forceNewPlan=false, retain=false 2023-07-17 11:15:28,944 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=129, ppid=126, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=02d5263e5d9e0b092b7e5800d7ceb3de, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,39617,1689592505673; forceNewPlan=false, retain=false 2023-07-17 11:15:28,944 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=128, ppid=126, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=15921b7ea0c5590ea84f52463bdee0cd, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,40489,1689592505619; forceNewPlan=false, retain=false 2023-07-17 11:15:28,944 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=130, ppid=126, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=4d99053af68b38d1be4d57c4204835cd, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,39617,1689592505673; forceNewPlan=false, retain=false 2023-07-17 11:15:28,945 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=127, ppid=126, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=8cf4bf7273740a80332d73b8051471eb, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,40489,1689592505619; forceNewPlan=false, retain=false 2023-07-17 11:15:29,035 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-hbase.properties,hadoop-metrics2.properties 2023-07-17 11:15:29,069 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(1230): Checking to see if procedure is done pid=126 2023-07-17 11:15:29,094 INFO [jenkins-hbase4:38451] balancer.BaseLoadBalancer(1545): Reassigned 5 regions. 5 retained the pre-restart assignment. 
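The master has now queued one TransitRegionStateProcedure per region (pids 127-131) and the balancer has produced an assignment plan. On the test side, code typically just blocks until that assignment settles; a hedged sketch using HBaseTestingUtility waiters, where the utility instance (TEST_UTIL in most HBase tests) is assumed to be the one driving this mini-cluster:

```java
import org.apache.hadoop.hbase.HBaseTestingUtility;
import org.apache.hadoop.hbase.TableName;

// Hypothetical helper; 'testUtil' stands in for the test's HBaseTestingUtility instance.
class WaitForAssignmentSketch {
  static void waitForTableAssignment(HBaseTestingUtility testUtil) throws Exception {
    TableName table = TableName.valueOf("Group_testDisabledTableMove");
    // Blocks until every region of the table shows up assigned in hbase:meta.
    testUtil.waitUntilAllRegionsAssigned(table);
    // And until the table is reachable for client operations.
    testUtil.waitTableAvailable(table);
  }
}
```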
2023-07-17 11:15:29,099 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=128 updating hbase:meta row=15921b7ea0c5590ea84f52463bdee0cd, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,40489,1689592505619 2023-07-17 11:15:29,099 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=130 updating hbase:meta row=4d99053af68b38d1be4d57c4204835cd, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,39617,1689592505673 2023-07-17 11:15:29,099 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,r\\x1C\\xC7r\\x1B,1689592528761.4d99053af68b38d1be4d57c4204835cd.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689592529099"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689592529099"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689592529099"}]},"ts":"1689592529099"} 2023-07-17 11:15:29,099 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=127 updating hbase:meta row=8cf4bf7273740a80332d73b8051471eb, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,40489,1689592505619 2023-07-17 11:15:29,099 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=129 updating hbase:meta row=02d5263e5d9e0b092b7e5800d7ceb3de, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,39617,1689592505673 2023-07-17 11:15:29,099 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,,1689592528761.8cf4bf7273740a80332d73b8051471eb.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689592529099"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689592529099"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689592529099"}]},"ts":"1689592529099"} 2023-07-17 11:15:29,100 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,i\\xBF\\x14i\\xBE,1689592528761.02d5263e5d9e0b092b7e5800d7ceb3de.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689592529099"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689592529099"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689592529099"}]},"ts":"1689592529099"} 2023-07-17 11:15:29,099 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,aaaaa,1689592528761.15921b7ea0c5590ea84f52463bdee0cd.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689592529099"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689592529099"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689592529099"}]},"ts":"1689592529099"} 2023-07-17 11:15:29,099 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=131 updating hbase:meta row=b3d81b6b9bca1199bf9aba1491c57100, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,40489,1689592505619 2023-07-17 11:15:29,100 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,zzzzz,1689592528761.b3d81b6b9bca1199bf9aba1491c57100.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689592529099"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689592529099"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689592529099"}]},"ts":"1689592529099"} 2023-07-17 11:15:29,102 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=132, ppid=130, state=RUNNABLE; OpenRegionProcedure 4d99053af68b38d1be4d57c4204835cd, 
server=jenkins-hbase4.apache.org,39617,1689592505673}] 2023-07-17 11:15:29,104 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=133, ppid=127, state=RUNNABLE; OpenRegionProcedure 8cf4bf7273740a80332d73b8051471eb, server=jenkins-hbase4.apache.org,40489,1689592505619}] 2023-07-17 11:15:29,105 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=134, ppid=129, state=RUNNABLE; OpenRegionProcedure 02d5263e5d9e0b092b7e5800d7ceb3de, server=jenkins-hbase4.apache.org,39617,1689592505673}] 2023-07-17 11:15:29,107 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=135, ppid=128, state=RUNNABLE; OpenRegionProcedure 15921b7ea0c5590ea84f52463bdee0cd, server=jenkins-hbase4.apache.org,40489,1689592505619}] 2023-07-17 11:15:29,109 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=136, ppid=131, state=RUNNABLE; OpenRegionProcedure b3d81b6b9bca1199bf9aba1491c57100, server=jenkins-hbase4.apache.org,40489,1689592505619}] 2023-07-17 11:15:29,259 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689592528761.4d99053af68b38d1be4d57c4204835cd. 2023-07-17 11:15:29,259 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 4d99053af68b38d1be4d57c4204835cd, NAME => 'Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689592528761.4d99053af68b38d1be4d57c4204835cd.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'} 2023-07-17 11:15:29,260 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testDisabledTableMove 4d99053af68b38d1be4d57c4204835cd 2023-07-17 11:15:29,260 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689592528761.4d99053af68b38d1be4d57c4204835cd.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-17 11:15:29,260 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 4d99053af68b38d1be4d57c4204835cd 2023-07-17 11:15:29,260 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 4d99053af68b38d1be4d57c4204835cd 2023-07-17 11:15:29,262 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testDisabledTableMove,zzzzz,1689592528761.b3d81b6b9bca1199bf9aba1491c57100. 
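The OpenRegionProcedures dispatched above (pids 132-136) arrive on the two region servers as the AssignRegionHandler "Open ..." events that follow. Once the regions report OPEN, placement can be confirmed from the client side; a short sketch assuming an open Connection 'conn', again illustrative rather than the test's own verification code:

```java
import org.apache.hadoop.hbase.HRegionLocation;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.RegionLocator;

// Hypothetical helper: lists which server hosts each region of the new table.
class RegionPlacementSketch {
  static void printLocations(Connection conn) throws Exception {
    try (RegionLocator locator =
        conn.getRegionLocator(TableName.valueOf("Group_testDisabledTableMove"))) {
      for (HRegionLocation loc : locator.getAllRegionLocations()) {
        System.out.println(loc.getRegion().getEncodedName() + " -> " + loc.getServerName());
      }
    }
  }
}
```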
2023-07-17 11:15:29,262 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => b3d81b6b9bca1199bf9aba1491c57100, NAME => 'Group_testDisabledTableMove,zzzzz,1689592528761.b3d81b6b9bca1199bf9aba1491c57100.', STARTKEY => 'zzzzz', ENDKEY => ''} 2023-07-17 11:15:29,262 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testDisabledTableMove b3d81b6b9bca1199bf9aba1491c57100 2023-07-17 11:15:29,262 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,zzzzz,1689592528761.b3d81b6b9bca1199bf9aba1491c57100.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-17 11:15:29,262 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for b3d81b6b9bca1199bf9aba1491c57100 2023-07-17 11:15:29,263 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for b3d81b6b9bca1199bf9aba1491c57100 2023-07-17 11:15:29,264 INFO [StoreOpener-4d99053af68b38d1be4d57c4204835cd-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 4d99053af68b38d1be4d57c4204835cd 2023-07-17 11:15:29,264 INFO [StoreOpener-b3d81b6b9bca1199bf9aba1491c57100-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region b3d81b6b9bca1199bf9aba1491c57100 2023-07-17 11:15:29,265 DEBUG [StoreOpener-4d99053af68b38d1be4d57c4204835cd-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/data/default/Group_testDisabledTableMove/4d99053af68b38d1be4d57c4204835cd/f 2023-07-17 11:15:29,265 DEBUG [StoreOpener-4d99053af68b38d1be4d57c4204835cd-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/data/default/Group_testDisabledTableMove/4d99053af68b38d1be4d57c4204835cd/f 2023-07-17 11:15:29,266 DEBUG [StoreOpener-b3d81b6b9bca1199bf9aba1491c57100-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/data/default/Group_testDisabledTableMove/b3d81b6b9bca1199bf9aba1491c57100/f 2023-07-17 11:15:29,266 DEBUG [StoreOpener-b3d81b6b9bca1199bf9aba1491c57100-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/data/default/Group_testDisabledTableMove/b3d81b6b9bca1199bf9aba1491c57100/f 2023-07-17 11:15:29,266 INFO [StoreOpener-4d99053af68b38d1be4d57c4204835cd-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 
6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 4d99053af68b38d1be4d57c4204835cd columnFamilyName f 2023-07-17 11:15:29,266 INFO [StoreOpener-b3d81b6b9bca1199bf9aba1491c57100-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region b3d81b6b9bca1199bf9aba1491c57100 columnFamilyName f 2023-07-17 11:15:29,267 INFO [StoreOpener-4d99053af68b38d1be4d57c4204835cd-1] regionserver.HStore(310): Store=4d99053af68b38d1be4d57c4204835cd/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-17 11:15:29,267 INFO [StoreOpener-b3d81b6b9bca1199bf9aba1491c57100-1] regionserver.HStore(310): Store=b3d81b6b9bca1199bf9aba1491c57100/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-17 11:15:29,267 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/data/default/Group_testDisabledTableMove/4d99053af68b38d1be4d57c4204835cd 2023-07-17 11:15:29,267 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/data/default/Group_testDisabledTableMove/b3d81b6b9bca1199bf9aba1491c57100 2023-07-17 11:15:29,268 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/data/default/Group_testDisabledTableMove/4d99053af68b38d1be4d57c4204835cd 2023-07-17 11:15:29,268 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/data/default/Group_testDisabledTableMove/b3d81b6b9bca1199bf9aba1491c57100 2023-07-17 11:15:29,271 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for b3d81b6b9bca1199bf9aba1491c57100 2023-07-17 11:15:29,271 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 4d99053af68b38d1be4d57c4204835cd 2023-07-17 11:15:29,275 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/data/default/Group_testDisabledTableMove/b3d81b6b9bca1199bf9aba1491c57100/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-17 11:15:29,275 DEBUG 
[RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/data/default/Group_testDisabledTableMove/4d99053af68b38d1be4d57c4204835cd/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-17 11:15:29,275 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened b3d81b6b9bca1199bf9aba1491c57100; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10040027360, jitterRate=-0.06494958698749542}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-17 11:15:29,276 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for b3d81b6b9bca1199bf9aba1491c57100: 2023-07-17 11:15:29,276 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 4d99053af68b38d1be4d57c4204835cd; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11431474400, jitterRate=0.06463901698589325}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-17 11:15:29,276 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 4d99053af68b38d1be4d57c4204835cd: 2023-07-17 11:15:29,276 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testDisabledTableMove,zzzzz,1689592528761.b3d81b6b9bca1199bf9aba1491c57100., pid=136, masterSystemTime=1689592529259 2023-07-17 11:15:29,276 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689592528761.4d99053af68b38d1be4d57c4204835cd., pid=132, masterSystemTime=1689592529256 2023-07-17 11:15:29,278 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testDisabledTableMove,zzzzz,1689592528761.b3d81b6b9bca1199bf9aba1491c57100. 2023-07-17 11:15:29,278 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testDisabledTableMove,zzzzz,1689592528761.b3d81b6b9bca1199bf9aba1491c57100. 2023-07-17 11:15:29,278 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testDisabledTableMove,,1689592528761.8cf4bf7273740a80332d73b8051471eb. 
2023-07-17 11:15:29,278 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 8cf4bf7273740a80332d73b8051471eb, NAME => 'Group_testDisabledTableMove,,1689592528761.8cf4bf7273740a80332d73b8051471eb.', STARTKEY => '', ENDKEY => 'aaaaa'} 2023-07-17 11:15:29,278 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=131 updating hbase:meta row=b3d81b6b9bca1199bf9aba1491c57100, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,40489,1689592505619 2023-07-17 11:15:29,278 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testDisabledTableMove 8cf4bf7273740a80332d73b8051471eb 2023-07-17 11:15:29,278 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testDisabledTableMove,zzzzz,1689592528761.b3d81b6b9bca1199bf9aba1491c57100.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689592529278"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689592529278"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689592529278"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689592529278"}]},"ts":"1689592529278"} 2023-07-17 11:15:29,278 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689592528761.4d99053af68b38d1be4d57c4204835cd. 2023-07-17 11:15:29,278 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689592528761.4d99053af68b38d1be4d57c4204835cd. 2023-07-17 11:15:29,279 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testDisabledTableMove,i\xBF\x14i\xBE,1689592528761.02d5263e5d9e0b092b7e5800d7ceb3de. 
2023-07-17 11:15:29,279 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 02d5263e5d9e0b092b7e5800d7ceb3de, NAME => 'Group_testDisabledTableMove,i\xBF\x14i\xBE,1689592528761.02d5263e5d9e0b092b7e5800d7ceb3de.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'} 2023-07-17 11:15:29,279 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=130 updating hbase:meta row=4d99053af68b38d1be4d57c4204835cd, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,39617,1689592505673 2023-07-17 11:15:29,279 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testDisabledTableMove 02d5263e5d9e0b092b7e5800d7ceb3de 2023-07-17 11:15:29,279 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testDisabledTableMove,r\\x1C\\xC7r\\x1B,1689592528761.4d99053af68b38d1be4d57c4204835cd.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689592529279"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689592529279"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689592529279"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689592529279"}]},"ts":"1689592529279"} 2023-07-17 11:15:29,278 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,,1689592528761.8cf4bf7273740a80332d73b8051471eb.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-17 11:15:29,279 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 8cf4bf7273740a80332d73b8051471eb 2023-07-17 11:15:29,279 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 8cf4bf7273740a80332d73b8051471eb 2023-07-17 11:15:29,279 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,i\xBF\x14i\xBE,1689592528761.02d5263e5d9e0b092b7e5800d7ceb3de.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-17 11:15:29,279 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 02d5263e5d9e0b092b7e5800d7ceb3de 2023-07-17 11:15:29,279 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 02d5263e5d9e0b092b7e5800d7ceb3de 2023-07-17 11:15:29,281 INFO [StoreOpener-02d5263e5d9e0b092b7e5800d7ceb3de-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 02d5263e5d9e0b092b7e5800d7ceb3de 2023-07-17 11:15:29,281 INFO [StoreOpener-8cf4bf7273740a80332d73b8051471eb-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 8cf4bf7273740a80332d73b8051471eb 2023-07-17 11:15:29,281 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=136, resume processing ppid=131 2023-07-17 11:15:29,282 INFO 
[PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=136, ppid=131, state=SUCCESS; OpenRegionProcedure b3d81b6b9bca1199bf9aba1491c57100, server=jenkins-hbase4.apache.org,40489,1689592505619 in 171 msec 2023-07-17 11:15:29,283 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=131, ppid=126, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=b3d81b6b9bca1199bf9aba1491c57100, ASSIGN in 343 msec 2023-07-17 11:15:29,283 DEBUG [StoreOpener-8cf4bf7273740a80332d73b8051471eb-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/data/default/Group_testDisabledTableMove/8cf4bf7273740a80332d73b8051471eb/f 2023-07-17 11:15:29,283 DEBUG [StoreOpener-02d5263e5d9e0b092b7e5800d7ceb3de-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/data/default/Group_testDisabledTableMove/02d5263e5d9e0b092b7e5800d7ceb3de/f 2023-07-17 11:15:29,283 DEBUG [StoreOpener-02d5263e5d9e0b092b7e5800d7ceb3de-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/data/default/Group_testDisabledTableMove/02d5263e5d9e0b092b7e5800d7ceb3de/f 2023-07-17 11:15:29,283 DEBUG [StoreOpener-8cf4bf7273740a80332d73b8051471eb-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/data/default/Group_testDisabledTableMove/8cf4bf7273740a80332d73b8051471eb/f 2023-07-17 11:15:29,284 INFO [StoreOpener-02d5263e5d9e0b092b7e5800d7ceb3de-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 02d5263e5d9e0b092b7e5800d7ceb3de columnFamilyName f 2023-07-17 11:15:29,284 INFO [StoreOpener-8cf4bf7273740a80332d73b8051471eb-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 8cf4bf7273740a80332d73b8051471eb columnFamilyName f 2023-07-17 11:15:29,284 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=132, resume processing ppid=130 2023-07-17 11:15:29,284 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=132, ppid=130, state=SUCCESS; OpenRegionProcedure 4d99053af68b38d1be4d57c4204835cd, 
server=jenkins-hbase4.apache.org,39617,1689592505673 in 180 msec 2023-07-17 11:15:29,285 INFO [StoreOpener-8cf4bf7273740a80332d73b8051471eb-1] regionserver.HStore(310): Store=8cf4bf7273740a80332d73b8051471eb/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-17 11:15:29,285 INFO [StoreOpener-02d5263e5d9e0b092b7e5800d7ceb3de-1] regionserver.HStore(310): Store=02d5263e5d9e0b092b7e5800d7ceb3de/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-17 11:15:29,286 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/data/default/Group_testDisabledTableMove/02d5263e5d9e0b092b7e5800d7ceb3de 2023-07-17 11:15:29,286 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=130, ppid=126, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=4d99053af68b38d1be4d57c4204835cd, ASSIGN in 346 msec 2023-07-17 11:15:29,286 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/data/default/Group_testDisabledTableMove/8cf4bf7273740a80332d73b8051471eb 2023-07-17 11:15:29,286 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/data/default/Group_testDisabledTableMove/02d5263e5d9e0b092b7e5800d7ceb3de 2023-07-17 11:15:29,286 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/data/default/Group_testDisabledTableMove/8cf4bf7273740a80332d73b8051471eb 2023-07-17 11:15:29,290 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 02d5263e5d9e0b092b7e5800d7ceb3de 2023-07-17 11:15:29,290 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 8cf4bf7273740a80332d73b8051471eb 2023-07-17 11:15:29,292 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/data/default/Group_testDisabledTableMove/02d5263e5d9e0b092b7e5800d7ceb3de/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-17 11:15:29,292 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 02d5263e5d9e0b092b7e5800d7ceb3de; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11100596640, jitterRate=0.03382362425327301}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-17 11:15:29,293 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 02d5263e5d9e0b092b7e5800d7ceb3de: 2023-07-17 11:15:29,293 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote 
file=hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/data/default/Group_testDisabledTableMove/8cf4bf7273740a80332d73b8051471eb/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-17 11:15:29,293 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testDisabledTableMove,i\xBF\x14i\xBE,1689592528761.02d5263e5d9e0b092b7e5800d7ceb3de., pid=134, masterSystemTime=1689592529256 2023-07-17 11:15:29,293 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 8cf4bf7273740a80332d73b8051471eb; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10778314080, jitterRate=0.003808721899986267}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-17 11:15:29,293 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 8cf4bf7273740a80332d73b8051471eb: 2023-07-17 11:15:29,294 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testDisabledTableMove,,1689592528761.8cf4bf7273740a80332d73b8051471eb., pid=133, masterSystemTime=1689592529259 2023-07-17 11:15:29,295 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testDisabledTableMove,i\xBF\x14i\xBE,1689592528761.02d5263e5d9e0b092b7e5800d7ceb3de. 2023-07-17 11:15:29,295 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testDisabledTableMove,i\xBF\x14i\xBE,1689592528761.02d5263e5d9e0b092b7e5800d7ceb3de. 2023-07-17 11:15:29,295 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=129 updating hbase:meta row=02d5263e5d9e0b092b7e5800d7ceb3de, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,39617,1689592505673 2023-07-17 11:15:29,295 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testDisabledTableMove,i\\xBF\\x14i\\xBE,1689592528761.02d5263e5d9e0b092b7e5800d7ceb3de.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689592529295"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689592529295"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689592529295"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689592529295"}]},"ts":"1689592529295"} 2023-07-17 11:15:29,295 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testDisabledTableMove,,1689592528761.8cf4bf7273740a80332d73b8051471eb. 2023-07-17 11:15:29,295 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testDisabledTableMove,,1689592528761.8cf4bf7273740a80332d73b8051471eb. 2023-07-17 11:15:29,295 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testDisabledTableMove,aaaaa,1689592528761.15921b7ea0c5590ea84f52463bdee0cd. 
2023-07-17 11:15:29,296 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 15921b7ea0c5590ea84f52463bdee0cd, NAME => 'Group_testDisabledTableMove,aaaaa,1689592528761.15921b7ea0c5590ea84f52463bdee0cd.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'} 2023-07-17 11:15:29,296 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testDisabledTableMove 15921b7ea0c5590ea84f52463bdee0cd 2023-07-17 11:15:29,296 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,aaaaa,1689592528761.15921b7ea0c5590ea84f52463bdee0cd.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-17 11:15:29,296 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 15921b7ea0c5590ea84f52463bdee0cd 2023-07-17 11:15:29,296 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 15921b7ea0c5590ea84f52463bdee0cd 2023-07-17 11:15:29,297 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=127 updating hbase:meta row=8cf4bf7273740a80332d73b8051471eb, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,40489,1689592505619 2023-07-17 11:15:29,297 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testDisabledTableMove,,1689592528761.8cf4bf7273740a80332d73b8051471eb.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689592529296"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689592529296"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689592529296"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689592529296"}]},"ts":"1689592529296"} 2023-07-17 11:15:29,297 INFO [StoreOpener-15921b7ea0c5590ea84f52463bdee0cd-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 15921b7ea0c5590ea84f52463bdee0cd 2023-07-17 11:15:29,298 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=134, resume processing ppid=129 2023-07-17 11:15:29,298 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=134, ppid=129, state=SUCCESS; OpenRegionProcedure 02d5263e5d9e0b092b7e5800d7ceb3de, server=jenkins-hbase4.apache.org,39617,1689592505673 in 191 msec 2023-07-17 11:15:29,299 DEBUG [StoreOpener-15921b7ea0c5590ea84f52463bdee0cd-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/data/default/Group_testDisabledTableMove/15921b7ea0c5590ea84f52463bdee0cd/f 2023-07-17 11:15:29,299 DEBUG [StoreOpener-15921b7ea0c5590ea84f52463bdee0cd-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/data/default/Group_testDisabledTableMove/15921b7ea0c5590ea84f52463bdee0cd/f 2023-07-17 11:15:29,299 INFO [StoreOpener-15921b7ea0c5590ea84f52463bdee0cd-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 
1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 15921b7ea0c5590ea84f52463bdee0cd columnFamilyName f 2023-07-17 11:15:29,300 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=129, ppid=126, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=02d5263e5d9e0b092b7e5800d7ceb3de, ASSIGN in 360 msec 2023-07-17 11:15:29,300 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=133, resume processing ppid=127 2023-07-17 11:15:29,300 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=133, ppid=127, state=SUCCESS; OpenRegionProcedure 8cf4bf7273740a80332d73b8051471eb, server=jenkins-hbase4.apache.org,40489,1689592505619 in 194 msec 2023-07-17 11:15:29,300 INFO [StoreOpener-15921b7ea0c5590ea84f52463bdee0cd-1] regionserver.HStore(310): Store=15921b7ea0c5590ea84f52463bdee0cd/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-17 11:15:29,301 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/data/default/Group_testDisabledTableMove/15921b7ea0c5590ea84f52463bdee0cd 2023-07-17 11:15:29,301 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=127, ppid=126, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=8cf4bf7273740a80332d73b8051471eb, ASSIGN in 362 msec 2023-07-17 11:15:29,302 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/data/default/Group_testDisabledTableMove/15921b7ea0c5590ea84f52463bdee0cd 2023-07-17 11:15:29,304 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 15921b7ea0c5590ea84f52463bdee0cd 2023-07-17 11:15:29,306 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/data/default/Group_testDisabledTableMove/15921b7ea0c5590ea84f52463bdee0cd/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-17 11:15:29,307 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 15921b7ea0c5590ea84f52463bdee0cd; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10555498400, jitterRate=-0.01694260537624359}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-17 11:15:29,307 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 15921b7ea0c5590ea84f52463bdee0cd: 2023-07-17 11:15:29,307 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testDisabledTableMove,aaaaa,1689592528761.15921b7ea0c5590ea84f52463bdee0cd., pid=135, 
masterSystemTime=1689592529259 2023-07-17 11:15:29,308 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testDisabledTableMove,aaaaa,1689592528761.15921b7ea0c5590ea84f52463bdee0cd. 2023-07-17 11:15:29,309 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testDisabledTableMove,aaaaa,1689592528761.15921b7ea0c5590ea84f52463bdee0cd. 2023-07-17 11:15:29,310 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=128 updating hbase:meta row=15921b7ea0c5590ea84f52463bdee0cd, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,40489,1689592505619 2023-07-17 11:15:29,311 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testDisabledTableMove,aaaaa,1689592528761.15921b7ea0c5590ea84f52463bdee0cd.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689592529310"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689592529310"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689592529310"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689592529310"}]},"ts":"1689592529310"} 2023-07-17 11:15:29,314 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=135, resume processing ppid=128 2023-07-17 11:15:29,314 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=135, ppid=128, state=SUCCESS; OpenRegionProcedure 15921b7ea0c5590ea84f52463bdee0cd, server=jenkins-hbase4.apache.org,40489,1689592505619 in 206 msec 2023-07-17 11:15:29,317 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=128, resume processing ppid=126 2023-07-17 11:15:29,317 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=128, ppid=126, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=15921b7ea0c5590ea84f52463bdee0cd, ASSIGN in 376 msec 2023-07-17 11:15:29,318 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=126, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=Group_testDisabledTableMove execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-17 11:15:29,318 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testDisabledTableMove","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689592529318"}]},"ts":"1689592529318"} 2023-07-17 11:15:29,320 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=Group_testDisabledTableMove, state=ENABLED in hbase:meta 2023-07-17 11:15:29,323 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=126, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=Group_testDisabledTableMove execute state=CREATE_TABLE_POST_OPERATION 2023-07-17 11:15:29,324 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=126, state=SUCCESS; CreateTableProcedure table=Group_testDisabledTableMove in 561 msec 2023-07-17 11:15:29,371 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(1230): Checking to see if procedure is done pid=126 2023-07-17 11:15:29,371 INFO [Listener at localhost/45539] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:Group_testDisabledTableMove, procId: 126 completed 2023-07-17 11:15:29,371 DEBUG [Listener at localhost/45539] hbase.HBaseTestingUtility(3430): Waiting until all regions of table Group_testDisabledTableMove 
get assigned. Timeout = 60000ms 2023-07-17 11:15:29,372 INFO [Listener at localhost/45539] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-17 11:15:29,375 INFO [Listener at localhost/45539] hbase.HBaseTestingUtility(3484): All regions for table Group_testDisabledTableMove assigned to meta. Checking AM states. 2023-07-17 11:15:29,375 INFO [Listener at localhost/45539] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-17 11:15:29,375 INFO [Listener at localhost/45539] hbase.HBaseTestingUtility(3504): All regions for table Group_testDisabledTableMove assigned. 2023-07-17 11:15:29,376 INFO [Listener at localhost/45539] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-17 11:15:29,382 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=Group_testDisabledTableMove 2023-07-17 11:15:29,382 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-17 11:15:29,383 INFO [Listener at localhost/45539] client.HBaseAdmin$15(890): Started disable of Group_testDisabledTableMove 2023-07-17 11:15:29,383 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable Group_testDisabledTableMove 2023-07-17 11:15:29,384 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] procedure2.ProcedureExecutor(1029): Stored pid=137, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=Group_testDisabledTableMove 2023-07-17 11:15:29,387 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(1230): Checking to see if procedure is done pid=137 2023-07-17 11:15:29,388 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testDisabledTableMove","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689592529388"}]},"ts":"1689592529388"} 2023-07-17 11:15:29,390 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=Group_testDisabledTableMove, state=DISABLING in hbase:meta 2023-07-17 11:15:29,391 INFO [PEWorker-4] procedure.DisableTableProcedure(293): Set Group_testDisabledTableMove to state=DISABLING 2023-07-17 11:15:29,392 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=138, ppid=137, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=8cf4bf7273740a80332d73b8051471eb, UNASSIGN}, {pid=139, ppid=137, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=15921b7ea0c5590ea84f52463bdee0cd, UNASSIGN}, {pid=140, ppid=137, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=02d5263e5d9e0b092b7e5800d7ceb3de, UNASSIGN}, {pid=141, ppid=137, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=4d99053af68b38d1be4d57c4204835cd, UNASSIGN}, {pid=142, ppid=137, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=b3d81b6b9bca1199bf9aba1491c57100, UNASSIGN}] 2023-07-17 11:15:29,394 INFO [PEWorker-5] 
procedure.MasterProcedureScheduler(727): Took xlock for pid=139, ppid=137, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=15921b7ea0c5590ea84f52463bdee0cd, UNASSIGN 2023-07-17 11:15:29,395 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=138, ppid=137, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=8cf4bf7273740a80332d73b8051471eb, UNASSIGN 2023-07-17 11:15:29,395 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=141, ppid=137, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=4d99053af68b38d1be4d57c4204835cd, UNASSIGN 2023-07-17 11:15:29,395 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=142, ppid=137, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=b3d81b6b9bca1199bf9aba1491c57100, UNASSIGN 2023-07-17 11:15:29,395 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=140, ppid=137, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=02d5263e5d9e0b092b7e5800d7ceb3de, UNASSIGN 2023-07-17 11:15:29,398 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=139 updating hbase:meta row=15921b7ea0c5590ea84f52463bdee0cd, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,40489,1689592505619 2023-07-17 11:15:29,398 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=138 updating hbase:meta row=8cf4bf7273740a80332d73b8051471eb, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,40489,1689592505619 2023-07-17 11:15:29,398 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,aaaaa,1689592528761.15921b7ea0c5590ea84f52463bdee0cd.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689592529398"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689592529398"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689592529398"}]},"ts":"1689592529398"} 2023-07-17 11:15:29,398 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=142 updating hbase:meta row=b3d81b6b9bca1199bf9aba1491c57100, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,40489,1689592505619 2023-07-17 11:15:29,398 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=140 updating hbase:meta row=02d5263e5d9e0b092b7e5800d7ceb3de, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,39617,1689592505673 2023-07-17 11:15:29,398 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=141 updating hbase:meta row=4d99053af68b38d1be4d57c4204835cd, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,39617,1689592505673 2023-07-17 11:15:29,398 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,i\\xBF\\x14i\\xBE,1689592528761.02d5263e5d9e0b092b7e5800d7ceb3de.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689592529398"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689592529398"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689592529398"}]},"ts":"1689592529398"} 2023-07-17 11:15:29,398 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"Group_testDisabledTableMove,r\\x1C\\xC7r\\x1B,1689592528761.4d99053af68b38d1be4d57c4204835cd.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689592529398"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689592529398"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689592529398"}]},"ts":"1689592529398"} 2023-07-17 11:15:29,398 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,zzzzz,1689592528761.b3d81b6b9bca1199bf9aba1491c57100.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689592529398"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689592529398"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689592529398"}]},"ts":"1689592529398"} 2023-07-17 11:15:29,398 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,,1689592528761.8cf4bf7273740a80332d73b8051471eb.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689592529398"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689592529398"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689592529398"}]},"ts":"1689592529398"} 2023-07-17 11:15:29,400 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=143, ppid=139, state=RUNNABLE; CloseRegionProcedure 15921b7ea0c5590ea84f52463bdee0cd, server=jenkins-hbase4.apache.org,40489,1689592505619}] 2023-07-17 11:15:29,400 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=144, ppid=140, state=RUNNABLE; CloseRegionProcedure 02d5263e5d9e0b092b7e5800d7ceb3de, server=jenkins-hbase4.apache.org,39617,1689592505673}] 2023-07-17 11:15:29,401 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=145, ppid=141, state=RUNNABLE; CloseRegionProcedure 4d99053af68b38d1be4d57c4204835cd, server=jenkins-hbase4.apache.org,39617,1689592505673}] 2023-07-17 11:15:29,402 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=146, ppid=142, state=RUNNABLE; CloseRegionProcedure b3d81b6b9bca1199bf9aba1491c57100, server=jenkins-hbase4.apache.org,40489,1689592505619}] 2023-07-17 11:15:29,403 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=147, ppid=138, state=RUNNABLE; CloseRegionProcedure 8cf4bf7273740a80332d73b8051471eb, server=jenkins-hbase4.apache.org,40489,1689592505619}] 2023-07-17 11:15:29,488 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(1230): Checking to see if procedure is done pid=137 2023-07-17 11:15:29,552 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 02d5263e5d9e0b092b7e5800d7ceb3de 2023-07-17 11:15:29,552 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close b3d81b6b9bca1199bf9aba1491c57100 2023-07-17 11:15:29,553 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 02d5263e5d9e0b092b7e5800d7ceb3de, disabling compactions & flushes 2023-07-17 11:15:29,554 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing b3d81b6b9bca1199bf9aba1491c57100, disabling compactions & flushes 2023-07-17 11:15:29,554 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,i\xBF\x14i\xBE,1689592528761.02d5263e5d9e0b092b7e5800d7ceb3de. 
2023-07-17 11:15:29,554 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,zzzzz,1689592528761.b3d81b6b9bca1199bf9aba1491c57100. 2023-07-17 11:15:29,554 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,zzzzz,1689592528761.b3d81b6b9bca1199bf9aba1491c57100. 2023-07-17 11:15:29,554 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,i\xBF\x14i\xBE,1689592528761.02d5263e5d9e0b092b7e5800d7ceb3de. 2023-07-17 11:15:29,554 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,zzzzz,1689592528761.b3d81b6b9bca1199bf9aba1491c57100. after waiting 0 ms 2023-07-17 11:15:29,554 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,zzzzz,1689592528761.b3d81b6b9bca1199bf9aba1491c57100. 2023-07-17 11:15:29,554 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,i\xBF\x14i\xBE,1689592528761.02d5263e5d9e0b092b7e5800d7ceb3de. after waiting 0 ms 2023-07-17 11:15:29,554 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,i\xBF\x14i\xBE,1689592528761.02d5263e5d9e0b092b7e5800d7ceb3de. 2023-07-17 11:15:29,558 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/data/default/Group_testDisabledTableMove/b3d81b6b9bca1199bf9aba1491c57100/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-17 11:15:29,558 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/data/default/Group_testDisabledTableMove/02d5263e5d9e0b092b7e5800d7ceb3de/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-17 11:15:29,559 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,zzzzz,1689592528761.b3d81b6b9bca1199bf9aba1491c57100. 2023-07-17 11:15:29,559 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,i\xBF\x14i\xBE,1689592528761.02d5263e5d9e0b092b7e5800d7ceb3de. 
2023-07-17 11:15:29,559 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for b3d81b6b9bca1199bf9aba1491c57100: 2023-07-17 11:15:29,559 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 02d5263e5d9e0b092b7e5800d7ceb3de: 2023-07-17 11:15:29,560 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed b3d81b6b9bca1199bf9aba1491c57100 2023-07-17 11:15:29,560 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 15921b7ea0c5590ea84f52463bdee0cd 2023-07-17 11:15:29,561 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 15921b7ea0c5590ea84f52463bdee0cd, disabling compactions & flushes 2023-07-17 11:15:29,561 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,aaaaa,1689592528761.15921b7ea0c5590ea84f52463bdee0cd. 2023-07-17 11:15:29,561 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,aaaaa,1689592528761.15921b7ea0c5590ea84f52463bdee0cd. 2023-07-17 11:15:29,561 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,aaaaa,1689592528761.15921b7ea0c5590ea84f52463bdee0cd. after waiting 0 ms 2023-07-17 11:15:29,561 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,aaaaa,1689592528761.15921b7ea0c5590ea84f52463bdee0cd. 2023-07-17 11:15:29,562 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=142 updating hbase:meta row=b3d81b6b9bca1199bf9aba1491c57100, regionState=CLOSED 2023-07-17 11:15:29,562 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,zzzzz,1689592528761.b3d81b6b9bca1199bf9aba1491c57100.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689592529562"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689592529562"}]},"ts":"1689592529562"} 2023-07-17 11:15:29,562 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 02d5263e5d9e0b092b7e5800d7ceb3de 2023-07-17 11:15:29,562 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 4d99053af68b38d1be4d57c4204835cd 2023-07-17 11:15:29,563 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 4d99053af68b38d1be4d57c4204835cd, disabling compactions & flushes 2023-07-17 11:15:29,563 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689592528761.4d99053af68b38d1be4d57c4204835cd. 2023-07-17 11:15:29,563 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689592528761.4d99053af68b38d1be4d57c4204835cd. 2023-07-17 11:15:29,563 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689592528761.4d99053af68b38d1be4d57c4204835cd. 
after waiting 0 ms 2023-07-17 11:15:29,563 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689592528761.4d99053af68b38d1be4d57c4204835cd. 2023-07-17 11:15:29,563 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=140 updating hbase:meta row=02d5263e5d9e0b092b7e5800d7ceb3de, regionState=CLOSED 2023-07-17 11:15:29,564 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,i\\xBF\\x14i\\xBE,1689592528761.02d5263e5d9e0b092b7e5800d7ceb3de.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689592529563"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689592529563"}]},"ts":"1689592529563"} 2023-07-17 11:15:29,567 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=146, resume processing ppid=142 2023-07-17 11:15:29,567 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=146, ppid=142, state=SUCCESS; CloseRegionProcedure b3d81b6b9bca1199bf9aba1491c57100, server=jenkins-hbase4.apache.org,40489,1689592505619 in 163 msec 2023-07-17 11:15:29,567 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=144, resume processing ppid=140 2023-07-17 11:15:29,567 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=144, ppid=140, state=SUCCESS; CloseRegionProcedure 02d5263e5d9e0b092b7e5800d7ceb3de, server=jenkins-hbase4.apache.org,39617,1689592505673 in 165 msec 2023-07-17 11:15:29,568 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/data/default/Group_testDisabledTableMove/15921b7ea0c5590ea84f52463bdee0cd/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-17 11:15:29,568 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/data/default/Group_testDisabledTableMove/4d99053af68b38d1be4d57c4204835cd/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-17 11:15:29,568 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,aaaaa,1689592528761.15921b7ea0c5590ea84f52463bdee0cd. 2023-07-17 11:15:29,568 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 15921b7ea0c5590ea84f52463bdee0cd: 2023-07-17 11:15:29,569 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=142, ppid=137, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=b3d81b6b9bca1199bf9aba1491c57100, UNASSIGN in 175 msec 2023-07-17 11:15:29,569 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689592528761.4d99053af68b38d1be4d57c4204835cd. 
2023-07-17 11:15:29,569 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=140, ppid=137, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=02d5263e5d9e0b092b7e5800d7ceb3de, UNASSIGN in 175 msec 2023-07-17 11:15:29,569 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 4d99053af68b38d1be4d57c4204835cd: 2023-07-17 11:15:29,570 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 15921b7ea0c5590ea84f52463bdee0cd 2023-07-17 11:15:29,570 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 8cf4bf7273740a80332d73b8051471eb 2023-07-17 11:15:29,571 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 8cf4bf7273740a80332d73b8051471eb, disabling compactions & flushes 2023-07-17 11:15:29,571 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,,1689592528761.8cf4bf7273740a80332d73b8051471eb. 2023-07-17 11:15:29,571 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=139 updating hbase:meta row=15921b7ea0c5590ea84f52463bdee0cd, regionState=CLOSED 2023-07-17 11:15:29,571 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,,1689592528761.8cf4bf7273740a80332d73b8051471eb. 2023-07-17 11:15:29,571 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,,1689592528761.8cf4bf7273740a80332d73b8051471eb. after waiting 0 ms 2023-07-17 11:15:29,571 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,,1689592528761.8cf4bf7273740a80332d73b8051471eb. 
2023-07-17 11:15:29,571 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,aaaaa,1689592528761.15921b7ea0c5590ea84f52463bdee0cd.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689592529571"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689592529571"}]},"ts":"1689592529571"} 2023-07-17 11:15:29,571 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 4d99053af68b38d1be4d57c4204835cd 2023-07-17 11:15:29,572 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=141 updating hbase:meta row=4d99053af68b38d1be4d57c4204835cd, regionState=CLOSED 2023-07-17 11:15:29,572 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,r\\x1C\\xC7r\\x1B,1689592528761.4d99053af68b38d1be4d57c4204835cd.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689592529572"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689592529572"}]},"ts":"1689592529572"} 2023-07-17 11:15:29,575 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=143, resume processing ppid=139 2023-07-17 11:15:29,575 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=143, ppid=139, state=SUCCESS; CloseRegionProcedure 15921b7ea0c5590ea84f52463bdee0cd, server=jenkins-hbase4.apache.org,40489,1689592505619 in 173 msec 2023-07-17 11:15:29,575 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/data/default/Group_testDisabledTableMove/8cf4bf7273740a80332d73b8051471eb/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-17 11:15:29,575 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=145, resume processing ppid=141 2023-07-17 11:15:29,575 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=145, ppid=141, state=SUCCESS; CloseRegionProcedure 4d99053af68b38d1be4d57c4204835cd, server=jenkins-hbase4.apache.org,39617,1689592505673 in 172 msec 2023-07-17 11:15:29,576 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,,1689592528761.8cf4bf7273740a80332d73b8051471eb. 
2023-07-17 11:15:29,576 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=139, ppid=137, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=15921b7ea0c5590ea84f52463bdee0cd, UNASSIGN in 183 msec 2023-07-17 11:15:29,576 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 8cf4bf7273740a80332d73b8051471eb: 2023-07-17 11:15:29,576 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=141, ppid=137, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=4d99053af68b38d1be4d57c4204835cd, UNASSIGN in 183 msec 2023-07-17 11:15:29,577 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 8cf4bf7273740a80332d73b8051471eb 2023-07-17 11:15:29,577 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=138 updating hbase:meta row=8cf4bf7273740a80332d73b8051471eb, regionState=CLOSED 2023-07-17 11:15:29,577 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,,1689592528761.8cf4bf7273740a80332d73b8051471eb.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689592529577"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689592529577"}]},"ts":"1689592529577"} 2023-07-17 11:15:29,580 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=147, resume processing ppid=138 2023-07-17 11:15:29,580 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=147, ppid=138, state=SUCCESS; CloseRegionProcedure 8cf4bf7273740a80332d73b8051471eb, server=jenkins-hbase4.apache.org,40489,1689592505619 in 175 msec 2023-07-17 11:15:29,581 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=138, resume processing ppid=137 2023-07-17 11:15:29,581 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=138, ppid=137, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=8cf4bf7273740a80332d73b8051471eb, UNASSIGN in 188 msec 2023-07-17 11:15:29,582 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testDisabledTableMove","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689592529582"}]},"ts":"1689592529582"} 2023-07-17 11:15:29,584 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=Group_testDisabledTableMove, state=DISABLED in hbase:meta 2023-07-17 11:15:29,586 INFO [PEWorker-5] procedure.DisableTableProcedure(305): Set Group_testDisabledTableMove to state=DISABLED 2023-07-17 11:15:29,589 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=137, state=SUCCESS; DisableTableProcedure table=Group_testDisabledTableMove in 204 msec 2023-07-17 11:15:29,689 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(1230): Checking to see if procedure is done pid=137 2023-07-17 11:15:29,689 INFO [Listener at localhost/45539] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:Group_testDisabledTableMove, procId: 137 completed 2023-07-17 11:15:29,690 INFO [Listener at localhost/45539] rsgroup.TestRSGroupsAdmin1(370): Moving table Group_testDisabledTableMove to Group_testDisabledTableMove_38972067 2023-07-17 11:15:29,692 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [Group_testDisabledTableMove] to 
rsgroup Group_testDisabledTableMove_38972067 2023-07-17 11:15:29,694 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testDisabledTableMove_38972067 2023-07-17 11:15:29,694 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-17 11:15:29,694 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-17 11:15:29,695 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-17 11:15:29,697 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminServer(336): Skipping move regions because the table Group_testDisabledTableMove is disabled 2023-07-17 11:15:29,697 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group Group_testDisabledTableMove_38972067, current retry=0 2023-07-17 11:15:29,697 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminServer(369): All regions from table(s) [Group_testDisabledTableMove] moved to target group Group_testDisabledTableMove_38972067. 2023-07-17 11:15:29,697 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-17 11:15:29,700 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-17 11:15:29,700 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-17 11:15:29,702 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=Group_testDisabledTableMove 2023-07-17 11:15:29,702 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-17 11:15:29,704 INFO [Listener at localhost/45539] client.HBaseAdmin$15(890): Started disable of Group_testDisabledTableMove 2023-07-17 11:15:29,704 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable Group_testDisabledTableMove 2023-07-17 11:15:29,704 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.TableNotEnabledException: Group_testDisabledTableMove at org.apache.hadoop.hbase.master.procedure.AbstractStateMachineTableProcedure.preflightChecks(AbstractStateMachineTableProcedure.java:163) at org.apache.hadoop.hbase.master.procedure.DisableTableProcedure.(DisableTableProcedure.java:78) at org.apache.hadoop.hbase.master.HMaster$11.run(HMaster.java:2429) at org.apache.hadoop.hbase.master.procedure.MasterProcedureUtil.submitProcedure(MasterProcedureUtil.java:132) at org.apache.hadoop.hbase.master.HMaster.disableTable(HMaster.java:2413) at 
org.apache.hadoop.hbase.master.MasterRpcServices.disableTable(MasterRpcServices.java:787) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-17 11:15:29,705 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] ipc.CallRunner(144): callId: 918 service: MasterService methodName: DisableTable size: 89 connection: 172.31.14.131:36004 deadline: 1689592589704, exception=org.apache.hadoop.hbase.TableNotEnabledException: Group_testDisabledTableMove 2023-07-17 11:15:29,705 DEBUG [Listener at localhost/45539] hbase.HBaseTestingUtility(1826): Table: Group_testDisabledTableMove already disabled, so just deleting it. 2023-07-17 11:15:29,705 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.HMaster$5(2228): Client=jenkins//172.31.14.131 delete Group_testDisabledTableMove 2023-07-17 11:15:29,706 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] procedure2.ProcedureExecutor(1029): Stored pid=149, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=Group_testDisabledTableMove 2023-07-17 11:15:29,708 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=149, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=Group_testDisabledTableMove 2023-07-17 11:15:29,708 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'Group_testDisabledTableMove' from rsgroup 'Group_testDisabledTableMove_38972067' 2023-07-17 11:15:29,709 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=149, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=Group_testDisabledTableMove 2023-07-17 11:15:29,710 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testDisabledTableMove_38972067 2023-07-17 11:15:29,710 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-17 11:15:29,711 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-17 11:15:29,711 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-17 11:15:29,715 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/.tmp/data/default/Group_testDisabledTableMove/8cf4bf7273740a80332d73b8051471eb 2023-07-17 11:15:29,715 DEBUG [HFileArchiver-8] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/.tmp/data/default/Group_testDisabledTableMove/02d5263e5d9e0b092b7e5800d7ceb3de 2023-07-17 11:15:29,715 DEBUG [HFileArchiver-6] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/.tmp/data/default/Group_testDisabledTableMove/4d99053af68b38d1be4d57c4204835cd 2023-07-17 11:15:29,715 DEBUG 
[HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/.tmp/data/default/Group_testDisabledTableMove/b3d81b6b9bca1199bf9aba1491c57100 2023-07-17 11:15:29,715 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/.tmp/data/default/Group_testDisabledTableMove/15921b7ea0c5590ea84f52463bdee0cd 2023-07-17 11:15:29,717 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(1230): Checking to see if procedure is done pid=149 2023-07-17 11:15:29,717 DEBUG [HFileArchiver-3] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/.tmp/data/default/Group_testDisabledTableMove/8cf4bf7273740a80332d73b8051471eb/f, FileablePath, hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/.tmp/data/default/Group_testDisabledTableMove/8cf4bf7273740a80332d73b8051471eb/recovered.edits] 2023-07-17 11:15:29,718 DEBUG [HFileArchiver-6] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/.tmp/data/default/Group_testDisabledTableMove/4d99053af68b38d1be4d57c4204835cd/f, FileablePath, hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/.tmp/data/default/Group_testDisabledTableMove/4d99053af68b38d1be4d57c4204835cd/recovered.edits] 2023-07-17 11:15:29,718 DEBUG [HFileArchiver-8] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/.tmp/data/default/Group_testDisabledTableMove/02d5263e5d9e0b092b7e5800d7ceb3de/f, FileablePath, hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/.tmp/data/default/Group_testDisabledTableMove/02d5263e5d9e0b092b7e5800d7ceb3de/recovered.edits] 2023-07-17 11:15:29,718 DEBUG [HFileArchiver-4] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/.tmp/data/default/Group_testDisabledTableMove/15921b7ea0c5590ea84f52463bdee0cd/f, FileablePath, hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/.tmp/data/default/Group_testDisabledTableMove/15921b7ea0c5590ea84f52463bdee0cd/recovered.edits] 2023-07-17 11:15:29,718 DEBUG [HFileArchiver-5] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/.tmp/data/default/Group_testDisabledTableMove/b3d81b6b9bca1199bf9aba1491c57100/f, FileablePath, hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/.tmp/data/default/Group_testDisabledTableMove/b3d81b6b9bca1199bf9aba1491c57100/recovered.edits] 2023-07-17 11:15:29,726 DEBUG [HFileArchiver-3] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/.tmp/data/default/Group_testDisabledTableMove/8cf4bf7273740a80332d73b8051471eb/recovered.edits/4.seqid to hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/archive/data/default/Group_testDisabledTableMove/8cf4bf7273740a80332d73b8051471eb/recovered.edits/4.seqid 2023-07-17 11:15:29,726 DEBUG [HFileArchiver-6] backup.HFileArchiver(582): Archived from FileablePath, 
hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/.tmp/data/default/Group_testDisabledTableMove/4d99053af68b38d1be4d57c4204835cd/recovered.edits/4.seqid to hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/archive/data/default/Group_testDisabledTableMove/4d99053af68b38d1be4d57c4204835cd/recovered.edits/4.seqid 2023-07-17 11:15:29,727 DEBUG [HFileArchiver-4] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/.tmp/data/default/Group_testDisabledTableMove/15921b7ea0c5590ea84f52463bdee0cd/recovered.edits/4.seqid to hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/archive/data/default/Group_testDisabledTableMove/15921b7ea0c5590ea84f52463bdee0cd/recovered.edits/4.seqid 2023-07-17 11:15:29,727 DEBUG [HFileArchiver-3] backup.HFileArchiver(596): Deleted hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/.tmp/data/default/Group_testDisabledTableMove/8cf4bf7273740a80332d73b8051471eb 2023-07-17 11:15:29,727 DEBUG [HFileArchiver-8] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/.tmp/data/default/Group_testDisabledTableMove/02d5263e5d9e0b092b7e5800d7ceb3de/recovered.edits/4.seqid to hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/archive/data/default/Group_testDisabledTableMove/02d5263e5d9e0b092b7e5800d7ceb3de/recovered.edits/4.seqid 2023-07-17 11:15:29,727 DEBUG [HFileArchiver-6] backup.HFileArchiver(596): Deleted hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/.tmp/data/default/Group_testDisabledTableMove/4d99053af68b38d1be4d57c4204835cd 2023-07-17 11:15:29,727 DEBUG [HFileArchiver-4] backup.HFileArchiver(596): Deleted hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/.tmp/data/default/Group_testDisabledTableMove/15921b7ea0c5590ea84f52463bdee0cd 2023-07-17 11:15:29,728 DEBUG [HFileArchiver-5] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/.tmp/data/default/Group_testDisabledTableMove/b3d81b6b9bca1199bf9aba1491c57100/recovered.edits/4.seqid to hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/archive/data/default/Group_testDisabledTableMove/b3d81b6b9bca1199bf9aba1491c57100/recovered.edits/4.seqid 2023-07-17 11:15:29,728 DEBUG [HFileArchiver-8] backup.HFileArchiver(596): Deleted hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/.tmp/data/default/Group_testDisabledTableMove/02d5263e5d9e0b092b7e5800d7ceb3de 2023-07-17 11:15:29,728 DEBUG [HFileArchiver-5] backup.HFileArchiver(596): Deleted hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/.tmp/data/default/Group_testDisabledTableMove/b3d81b6b9bca1199bf9aba1491c57100 2023-07-17 11:15:29,728 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived Group_testDisabledTableMove regions 2023-07-17 11:15:29,731 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=149, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=Group_testDisabledTableMove 2023-07-17 11:15:29,732 WARN [PEWorker-3] procedure.DeleteTableProcedure(384): Deleting some vestigial 5 rows of Group_testDisabledTableMove from 
hbase:meta 2023-07-17 11:15:29,737 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(421): Removing 'Group_testDisabledTableMove' descriptor. 2023-07-17 11:15:29,738 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=149, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=Group_testDisabledTableMove 2023-07-17 11:15:29,738 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(411): Removing 'Group_testDisabledTableMove' from region states. 2023-07-17 11:15:29,738 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testDisabledTableMove,,1689592528761.8cf4bf7273740a80332d73b8051471eb.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689592529738"}]},"ts":"9223372036854775807"} 2023-07-17 11:15:29,738 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testDisabledTableMove,aaaaa,1689592528761.15921b7ea0c5590ea84f52463bdee0cd.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689592529738"}]},"ts":"9223372036854775807"} 2023-07-17 11:15:29,738 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testDisabledTableMove,i\\xBF\\x14i\\xBE,1689592528761.02d5263e5d9e0b092b7e5800d7ceb3de.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689592529738"}]},"ts":"9223372036854775807"} 2023-07-17 11:15:29,738 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testDisabledTableMove,r\\x1C\\xC7r\\x1B,1689592528761.4d99053af68b38d1be4d57c4204835cd.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689592529738"}]},"ts":"9223372036854775807"} 2023-07-17 11:15:29,738 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testDisabledTableMove,zzzzz,1689592528761.b3d81b6b9bca1199bf9aba1491c57100.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689592529738"}]},"ts":"9223372036854775807"} 2023-07-17 11:15:29,739 INFO [PEWorker-3] hbase.MetaTableAccessor(1788): Deleted 5 regions from META 2023-07-17 11:15:29,740 DEBUG [PEWorker-3] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => 8cf4bf7273740a80332d73b8051471eb, NAME => 'Group_testDisabledTableMove,,1689592528761.8cf4bf7273740a80332d73b8051471eb.', STARTKEY => '', ENDKEY => 'aaaaa'}, {ENCODED => 15921b7ea0c5590ea84f52463bdee0cd, NAME => 'Group_testDisabledTableMove,aaaaa,1689592528761.15921b7ea0c5590ea84f52463bdee0cd.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'}, {ENCODED => 02d5263e5d9e0b092b7e5800d7ceb3de, NAME => 'Group_testDisabledTableMove,i\xBF\x14i\xBE,1689592528761.02d5263e5d9e0b092b7e5800d7ceb3de.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'}, {ENCODED => 4d99053af68b38d1be4d57c4204835cd, NAME => 'Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689592528761.4d99053af68b38d1be4d57c4204835cd.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'}, {ENCODED => b3d81b6b9bca1199bf9aba1491c57100, NAME => 'Group_testDisabledTableMove,zzzzz,1689592528761.b3d81b6b9bca1199bf9aba1491c57100.', STARTKEY => 'zzzzz', ENDKEY => ''}] 2023-07-17 11:15:29,740 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(415): Marking 'Group_testDisabledTableMove' as deleted. 
2023-07-17 11:15:29,740 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testDisabledTableMove","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689592529740"}]},"ts":"9223372036854775807"} 2023-07-17 11:15:29,741 INFO [PEWorker-3] hbase.MetaTableAccessor(1658): Deleted table Group_testDisabledTableMove state from META 2023-07-17 11:15:29,743 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(130): Finished pid=149, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=Group_testDisabledTableMove 2023-07-17 11:15:29,744 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=149, state=SUCCESS; DeleteTableProcedure table=Group_testDisabledTableMove in 37 msec 2023-07-17 11:15:29,818 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(1230): Checking to see if procedure is done pid=149 2023-07-17 11:15:29,818 INFO [Listener at localhost/45539] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: default:Group_testDisabledTableMove, procId: 149 completed 2023-07-17 11:15:29,821 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-17 11:15:29,821 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-17 11:15:29,822 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-17 11:15:29,822 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-17 11:15:29,822 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-17 11:15:29,823 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:37409, jenkins-hbase4.apache.org:35719] to rsgroup default 2023-07-17 11:15:29,825 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testDisabledTableMove_38972067 2023-07-17 11:15:29,825 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-17 11:15:29,826 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-17 11:15:29,826 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-17 11:15:29,827 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group Group_testDisabledTableMove_38972067, current retry=0 2023-07-17 11:15:29,828 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,35719,1689592509057, jenkins-hbase4.apache.org,37409,1689592505527] are moved back to Group_testDisabledTableMove_38972067 2023-07-17 11:15:29,828 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminServer(438): Move servers done: Group_testDisabledTableMove_38972067 => default 2023-07-17 11:15:29,828 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-17 11:15:29,828 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup Group_testDisabledTableMove_38972067 2023-07-17 11:15:29,831 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-17 11:15:29,831 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-17 11:15:29,832 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-17 11:15:29,834 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-17 11:15:29,834 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-17 11:15:29,835 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-17 11:15:29,835 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-17 11:15:29,835 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-17 11:15:29,835 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-17 11:15:29,836 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-17 11:15:29,838 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-17 11:15:29,839 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-17 11:15:29,840 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-17 11:15:29,842 INFO [Listener at localhost/45539] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-17 11:15:29,843 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-17 11:15:29,844 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-17 11:15:29,844 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-17 11:15:29,846 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-17 11:15:29,847 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-17 11:15:29,849 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-17 11:15:29,849 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-17 11:15:29,851 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:38451] to rsgroup master 2023-07-17 11:15:29,851 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:38451 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-17 11:15:29,851 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] ipc.CallRunner(144): callId: 952 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:36004 deadline: 1689593729851, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:38451 is either offline or it does not exist. 2023-07-17 11:15:29,852 WARN [Listener at localhost/45539] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:38451 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:38451 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-17 11:15:29,853 INFO [Listener at localhost/45539] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-17 11:15:29,854 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-17 11:15:29,854 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-17 11:15:29,854 INFO [Listener at localhost/45539] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:35719, jenkins-hbase4.apache.org:37409, jenkins-hbase4.apache.org:39617, jenkins-hbase4.apache.org:40489], Tables:[hbase:meta, hbase:namespace, unmovedTable, hbase:rsgroup, testRename], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-17 11:15:29,854 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-17 11:15:29,855 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-17 11:15:29,874 INFO [Listener at localhost/45539] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testDisabledTableMove Thread=504 (was 501) Potentially hanging thread: hconnection-0x2c378da6-shared-pool-7 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_84750054_17 at /127.0.0.1:51792 [Waiting for operation #7] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) 
sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x62be270e-shared-pool-25 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=770 (was 743) - OpenFileDescriptor LEAK? -, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=484 (was 484), ProcessCount=172 (was 172), AvailableMemoryMB=2939 (was 3043) 2023-07-17 11:15:29,874 WARN [Listener at localhost/45539] hbase.ResourceChecker(130): Thread=504 is superior to 500 2023-07-17 11:15:29,891 INFO [Listener at localhost/45539] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testRSGroupListDoesNotContainFailedTableCreation Thread=504, OpenFileDescriptor=770, MaxFileDescriptor=60000, SystemLoadAverage=484, ProcessCount=172, AvailableMemoryMB=2938 2023-07-17 11:15:29,891 WARN [Listener at localhost/45539] hbase.ResourceChecker(130): Thread=504 is superior to 500 2023-07-17 11:15:29,891 INFO [Listener at localhost/45539] rsgroup.TestRSGroupsBase(132): testRSGroupListDoesNotContainFailedTableCreation 2023-07-17 11:15:29,894 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-17 11:15:29,895 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-17 11:15:29,895 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-17 11:15:29,895 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
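The ConstraintException above ("Server jenkins-hbase4.apache.org:38451 is either offline or it does not exist") arises because the cleanup also tries to move the master's address (port 38451) into the "master" group, and the master is not a region server the group manager tracks; the test only logs it at WARN ("Got this on setup, FYI") and carries on. A small sketch of that tolerant call follows; the class, method, and logger names are illustrative, not the test's actual code.

    import java.util.Collections;
    import org.apache.hadoop.hbase.net.Address;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdmin;
    import org.slf4j.Logger;
    import org.slf4j.LoggerFactory;

    public class MoveMasterSketch {
      private static final Logger LOG = LoggerFactory.getLogger(MoveMasterSketch.class);

      // Try to park the master's address in the "master" group; tolerate the expected failure.
      static void tryMoveMaster(RSGroupAdmin groups, Address master) {
        try {
          groups.moveServers(Collections.singleton(master), "master");
        } catch (Exception e) {
          LOG.warn("Got this on setup, FYI", e);   // same wording as TestRSGroupsBase(163) above
        }
      }
    }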
2023-07-17 11:15:29,895 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-17 11:15:29,896 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-17 11:15:29,896 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-17 11:15:29,897 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-17 11:15:29,900 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-17 11:15:29,900 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-17 11:15:29,902 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-17 11:15:29,905 INFO [Listener at localhost/45539] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-17 11:15:29,906 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-17 11:15:29,907 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-17 11:15:29,907 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-17 11:15:29,909 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-17 11:15:29,910 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-17 11:15:29,912 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-17 11:15:29,912 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-17 11:15:29,914 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:38451] to rsgroup master 2023-07-17 11:15:29,914 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:38451 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-17 11:15:29,914 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] ipc.CallRunner(144): callId: 980 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:36004 deadline: 1689593729914, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:38451 is either offline or it does not exist. 2023-07-17 11:15:29,915 WARN [Listener at localhost/45539] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:38451 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:38451 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-17 11:15:29,916 INFO [Listener at localhost/45539] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-17 11:15:29,916 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-17 11:15:29,917 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-17 11:15:29,917 INFO [Listener at localhost/45539] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:35719, jenkins-hbase4.apache.org:37409, jenkins-hbase4.apache.org:39617, jenkins-hbase4.apache.org:40489], Tables:[hbase:meta, hbase:namespace, unmovedTable, hbase:rsgroup, testRename], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-17 11:15:29,917 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-17 11:15:29,917 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38451] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-17 11:15:29,918 INFO [Listener at localhost/45539] hbase.HBaseTestingUtility(1286): Shutting down minicluster 2023-07-17 11:15:29,918 INFO [Listener at localhost/45539] client.ConnectionImplementation(1979): Closing master protocol: MasterService 2023-07-17 11:15:29,918 DEBUG [Listener at localhost/45539] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x62c69654 to 127.0.0.1:49750 2023-07-17 11:15:29,918 DEBUG [Listener at localhost/45539] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-17 11:15:29,920 DEBUG [Listener at localhost/45539] util.JVMClusterUtil(237): Shutting down HBase Cluster 2023-07-17 11:15:29,920 DEBUG [Listener at localhost/45539] util.JVMClusterUtil(257): Found active master hash=80805066, stopped=false 2023-07-17 11:15:29,920 DEBUG [Listener at localhost/45539] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint 2023-07-17 11:15:29,920 DEBUG [Listener at localhost/45539] coprocessor.CoprocessorHost(310): Stop coprocessor 
org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver 2023-07-17 11:15:29,920 INFO [Listener at localhost/45539] master.ServerManager(901): Cluster shutdown requested of master=jenkins-hbase4.apache.org,38451,1689592503576 2023-07-17 11:15:29,923 DEBUG [Listener at localhost/45539-EventThread] zookeeper.ZKWatcher(600): regionserver:37409-0x10172fe1c5e0001, quorum=127.0.0.1:49750, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-17 11:15:29,923 DEBUG [Listener at localhost/45539-EventThread] zookeeper.ZKWatcher(600): regionserver:35719-0x10172fe1c5e000b, quorum=127.0.0.1:49750, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-17 11:15:29,923 INFO [Listener at localhost/45539] procedure2.ProcedureExecutor(629): Stopping 2023-07-17 11:15:29,923 DEBUG [Listener at localhost/45539-EventThread] zookeeper.ZKWatcher(600): master:38451-0x10172fe1c5e0000, quorum=127.0.0.1:49750, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-17 11:15:29,923 DEBUG [Listener at localhost/45539-EventThread] zookeeper.ZKWatcher(600): regionserver:40489-0x10172fe1c5e0002, quorum=127.0.0.1:49750, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-17 11:15:29,923 DEBUG [Listener at localhost/45539-EventThread] zookeeper.ZKWatcher(600): regionserver:39617-0x10172fe1c5e0003, quorum=127.0.0.1:49750, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-17 11:15:29,923 DEBUG [Listener at localhost/45539-EventThread] zookeeper.ZKWatcher(600): master:38451-0x10172fe1c5e0000, quorum=127.0.0.1:49750, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-17 11:15:29,923 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:37409-0x10172fe1c5e0001, quorum=127.0.0.1:49750, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-17 11:15:29,923 DEBUG [Listener at localhost/45539] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x3db4d6ad to 127.0.0.1:49750 2023-07-17 11:15:29,923 DEBUG [Listener at localhost/45539] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-17 11:15:29,924 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:35719-0x10172fe1c5e000b, quorum=127.0.0.1:49750, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-17 11:15:29,924 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:39617-0x10172fe1c5e0003, quorum=127.0.0.1:49750, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-17 11:15:29,924 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:38451-0x10172fe1c5e0000, quorum=127.0.0.1:49750, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-17 11:15:29,924 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:40489-0x10172fe1c5e0002, quorum=127.0.0.1:49750, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-17 11:15:29,924 INFO [Listener at localhost/45539] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,37409,1689592505527' ***** 2023-07-17 11:15:29,924 INFO [Listener at localhost/45539] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 
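From "Shutting down minicluster" onward the listener thread is running the class-level teardown: it closes its client connection, asks the master for a cluster shutdown, and the deletion of the /hbase/running znode is what each region server's ZKWatcher reports above before entering its STOPPING sequence. A minimal sketch of that teardown hook, assuming TEST_UTIL is the shared HBaseTestingUtility that started the cluster:

    import org.apache.hadoop.hbase.HBaseTestingUtility;
    import org.junit.AfterClass;

    public class ShutdownSketch {
      // Assumed to be created and started in a @BeforeClass method elsewhere.
      static HBaseTestingUtility TEST_UTIL;

      @AfterClass
      public static void tearDownAfterClass() throws Exception {
        // Stops the master and region servers, then DFS/ZK; region servers notice the
        // removal of /hbase/running and log "***** STOPPING region server ... *****".
        TEST_UTIL.shutdownMiniCluster();
      }
    }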
2023-07-17 11:15:29,924 INFO [Listener at localhost/45539] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,40489,1689592505619' ***** 2023-07-17 11:15:29,924 INFO [Listener at localhost/45539] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-17 11:15:29,924 INFO [RS:0;jenkins-hbase4:37409] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-17 11:15:29,925 INFO [RS:1;jenkins-hbase4:40489] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-17 11:15:29,925 INFO [Listener at localhost/45539] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,39617,1689592505673' ***** 2023-07-17 11:15:29,925 INFO [Listener at localhost/45539] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-17 11:15:29,925 INFO [Listener at localhost/45539] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,35719,1689592509057' ***** 2023-07-17 11:15:29,925 INFO [Listener at localhost/45539] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-17 11:15:29,925 INFO [RS:3;jenkins-hbase4:35719] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-17 11:15:29,925 INFO [RS:2;jenkins-hbase4:39617] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-17 11:15:29,939 INFO [RS:2;jenkins-hbase4:39617] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@470fdab8{regionserver,/,null,STOPPED}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver} 2023-07-17 11:15:29,939 INFO [RS:3;jenkins-hbase4:35719] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@526688c5{regionserver,/,null,STOPPED}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver} 2023-07-17 11:15:29,939 INFO [RS:1;jenkins-hbase4:40489] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@66df3ef2{regionserver,/,null,STOPPED}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver} 2023-07-17 11:15:29,939 INFO [RS:0;jenkins-hbase4:37409] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@2e98cdce{regionserver,/,null,STOPPED}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver} 2023-07-17 11:15:29,944 INFO [RS:2;jenkins-hbase4:39617] server.AbstractConnector(383): Stopped ServerConnector@78d2fac4{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-17 11:15:29,944 INFO [RS:0;jenkins-hbase4:37409] server.AbstractConnector(383): Stopped ServerConnector@6d447d66{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-17 11:15:29,944 INFO [RS:2;jenkins-hbase4:39617] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-17 11:15:29,944 INFO [RS:1;jenkins-hbase4:40489] server.AbstractConnector(383): Stopped ServerConnector@44d8fb02{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-17 11:15:29,944 INFO [RS:3;jenkins-hbase4:35719] server.AbstractConnector(383): Stopped ServerConnector@55286062{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-17 11:15:29,945 INFO [RS:2;jenkins-hbase4:39617] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@7c1f78c1{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,STOPPED} 2023-07-17 11:15:29,944 INFO 
[RS:1;jenkins-hbase4:40489] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-17 11:15:29,944 INFO [RS:0;jenkins-hbase4:37409] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-17 11:15:29,945 INFO [RS:2;jenkins-hbase4:39617] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@7605c194{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/b3748a71-8fb9-11d9-80d5-27b001236be5/hadoop.log.dir/,STOPPED} 2023-07-17 11:15:29,945 INFO [RS:3;jenkins-hbase4:35719] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-17 11:15:29,947 INFO [RS:0;jenkins-hbase4:37409] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@6ce589e{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,STOPPED} 2023-07-17 11:15:29,946 INFO [RS:1;jenkins-hbase4:40489] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@35b16dd4{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,STOPPED} 2023-07-17 11:15:29,948 INFO [RS:3;jenkins-hbase4:35719] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@e79409d{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,STOPPED} 2023-07-17 11:15:29,948 INFO [RS:1;jenkins-hbase4:40489] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@5ee296f1{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/b3748a71-8fb9-11d9-80d5-27b001236be5/hadoop.log.dir/,STOPPED} 2023-07-17 11:15:29,948 INFO [RS:0;jenkins-hbase4:37409] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@6afee7fb{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/b3748a71-8fb9-11d9-80d5-27b001236be5/hadoop.log.dir/,STOPPED} 2023-07-17 11:15:29,949 INFO [RS:3;jenkins-hbase4:35719] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@39ac2a37{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/b3748a71-8fb9-11d9-80d5-27b001236be5/hadoop.log.dir/,STOPPED} 2023-07-17 11:15:29,951 INFO [RS:0;jenkins-hbase4:37409] regionserver.HeapMemoryManager(220): Stopping 2023-07-17 11:15:29,951 INFO [RS:2;jenkins-hbase4:39617] regionserver.HeapMemoryManager(220): Stopping 2023-07-17 11:15:29,951 INFO [RS:1;jenkins-hbase4:40489] regionserver.HeapMemoryManager(220): Stopping 2023-07-17 11:15:29,951 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-17 11:15:29,951 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-17 11:15:29,952 INFO [RS:1;jenkins-hbase4:40489] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 
2023-07-17 11:15:29,952 INFO [RS:3;jenkins-hbase4:35719] regionserver.HeapMemoryManager(220): Stopping 2023-07-17 11:15:29,952 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-17 11:15:29,952 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-17 11:15:29,952 INFO [RS:1;jenkins-hbase4:40489] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-17 11:15:29,952 INFO [RS:0;jenkins-hbase4:37409] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-17 11:15:29,952 INFO [RS:1;jenkins-hbase4:40489] regionserver.HRegionServer(3305): Received CLOSE for 6a4a58dee597d7e2caeeea613b990689 2023-07-17 11:15:29,952 INFO [RS:2;jenkins-hbase4:39617] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-17 11:15:29,952 INFO [RS:3;jenkins-hbase4:35719] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-17 11:15:29,952 INFO [RS:2;jenkins-hbase4:39617] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-17 11:15:29,952 INFO [RS:0;jenkins-hbase4:37409] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-17 11:15:29,953 INFO [RS:2;jenkins-hbase4:39617] regionserver.HRegionServer(3305): Received CLOSE for 2b74521ed4637f75fb35cc5495c946be 2023-07-17 11:15:29,953 INFO [RS:0;jenkins-hbase4:37409] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,37409,1689592505527 2023-07-17 11:15:29,953 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 6a4a58dee597d7e2caeeea613b990689, disabling compactions & flushes 2023-07-17 11:15:29,953 DEBUG [RS:0;jenkins-hbase4:37409] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x014dbef2 to 127.0.0.1:49750 2023-07-17 11:15:29,953 INFO [RS:1;jenkins-hbase4:40489] regionserver.HRegionServer(3305): Received CLOSE for d5111e6d7162bf03312675d4d0d3f80c 2023-07-17 11:15:29,952 INFO [RS:3;jenkins-hbase4:35719] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-17 11:15:29,954 INFO [RS:1;jenkins-hbase4:40489] regionserver.HRegionServer(3305): Received CLOSE for 21527a315e64c88028dc354e9a834764 2023-07-17 11:15:29,954 INFO [RS:3;jenkins-hbase4:35719] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,35719,1689592509057 2023-07-17 11:15:29,953 DEBUG [RS:0;jenkins-hbase4:37409] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-17 11:15:29,953 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 2b74521ed4637f75fb35cc5495c946be, disabling compactions & flushes 2023-07-17 11:15:29,954 INFO [RS:0;jenkins-hbase4:37409] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,37409,1689592505527; all regions closed. 2023-07-17 11:15:29,953 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1689592508087.6a4a58dee597d7e2caeeea613b990689. 
2023-07-17 11:15:29,953 INFO [RS:2;jenkins-hbase4:39617] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,39617,1689592505673 2023-07-17 11:15:29,954 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1689592508087.6a4a58dee597d7e2caeeea613b990689. 2023-07-17 11:15:29,954 DEBUG [RS:2;jenkins-hbase4:39617] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x4a766cb7 to 127.0.0.1:49750 2023-07-17 11:15:29,954 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region testRename,,1689592523142.2b74521ed4637f75fb35cc5495c946be. 2023-07-17 11:15:29,954 DEBUG [RS:3;jenkins-hbase4:35719] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x0eff2867 to 127.0.0.1:49750 2023-07-17 11:15:29,954 DEBUG [RS:3;jenkins-hbase4:35719] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-17 11:15:29,954 INFO [RS:3;jenkins-hbase4:35719] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,35719,1689592509057; all regions closed. 2023-07-17 11:15:29,954 INFO [RS:1;jenkins-hbase4:40489] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,40489,1689592505619 2023-07-17 11:15:29,954 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on testRename,,1689592523142.2b74521ed4637f75fb35cc5495c946be. 2023-07-17 11:15:29,954 DEBUG [RS:2;jenkins-hbase4:39617] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-17 11:15:29,954 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1689592508087.6a4a58dee597d7e2caeeea613b990689. after waiting 0 ms 2023-07-17 11:15:29,955 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on testRename,,1689592523142.2b74521ed4637f75fb35cc5495c946be. after waiting 0 ms 2023-07-17 11:15:29,955 DEBUG [RS:1;jenkins-hbase4:40489] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x3ee96208 to 127.0.0.1:49750 2023-07-17 11:15:29,955 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region testRename,,1689592523142.2b74521ed4637f75fb35cc5495c946be. 2023-07-17 11:15:29,955 INFO [RS:2;jenkins-hbase4:39617] regionserver.HRegionServer(1474): Waiting on 1 regions to close 2023-07-17 11:15:29,955 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1689592508087.6a4a58dee597d7e2caeeea613b990689. 2023-07-17 11:15:29,955 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 6a4a58dee597d7e2caeeea613b990689 1/1 column families, dataSize=78 B heapSize=488 B 2023-07-17 11:15:29,955 DEBUG [RS:2;jenkins-hbase4:39617] regionserver.HRegionServer(1478): Online Regions={2b74521ed4637f75fb35cc5495c946be=testRename,,1689592523142.2b74521ed4637f75fb35cc5495c946be.} 2023-07-17 11:15:29,955 DEBUG [RS:1;jenkins-hbase4:40489] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-17 11:15:29,956 INFO [RS:1;jenkins-hbase4:40489] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-17 11:15:29,956 INFO [RS:1;jenkins-hbase4:40489] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 
2023-07-17 11:15:29,956 INFO [RS:1;jenkins-hbase4:40489] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-17 11:15:29,956 INFO [RS:1;jenkins-hbase4:40489] regionserver.HRegionServer(3305): Received CLOSE for 1588230740 2023-07-17 11:15:29,956 DEBUG [RS:2;jenkins-hbase4:39617] regionserver.HRegionServer(1504): Waiting on 2b74521ed4637f75fb35cc5495c946be 2023-07-17 11:15:29,956 INFO [RS:1;jenkins-hbase4:40489] regionserver.HRegionServer(1474): Waiting on 4 regions to close 2023-07-17 11:15:29,957 DEBUG [RS:1;jenkins-hbase4:40489] regionserver.HRegionServer(1478): Online Regions={6a4a58dee597d7e2caeeea613b990689=hbase:namespace,,1689592508087.6a4a58dee597d7e2caeeea613b990689., 1588230740=hbase:meta,,1.1588230740, d5111e6d7162bf03312675d4d0d3f80c=hbase:rsgroup,,1689592508323.d5111e6d7162bf03312675d4d0d3f80c., 21527a315e64c88028dc354e9a834764=unmovedTable,,1689592524805.21527a315e64c88028dc354e9a834764.} 2023-07-17 11:15:29,957 DEBUG [RS:1;jenkins-hbase4:40489] regionserver.HRegionServer(1504): Waiting on 1588230740, 21527a315e64c88028dc354e9a834764, 6a4a58dee597d7e2caeeea613b990689, d5111e6d7162bf03312675d4d0d3f80c 2023-07-17 11:15:29,957 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-17 11:15:29,957 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-17 11:15:29,958 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-17 11:15:29,958 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-17 11:15:29,958 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-17 11:15:29,958 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=77.90 KB heapSize=122.84 KB 2023-07-17 11:15:29,965 WARN [Close-WAL-Writer-0] asyncfs.FanOutOneBlockAsyncDFSOutputHelper(641): complete file /user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/WALs/jenkins-hbase4.apache.org,35719,1689592509057/jenkins-hbase4.apache.org%2C35719%2C1689592509057.1689592509306 not finished, retry = 0 2023-07-17 11:15:29,974 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/data/default/testRename/2b74521ed4637f75fb35cc5495c946be/recovered.edits/10.seqid, newMaxSeqId=10, maxSeqId=7 2023-07-17 11:15:29,980 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed testRename,,1689592523142.2b74521ed4637f75fb35cc5495c946be. 2023-07-17 11:15:29,980 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 2b74521ed4637f75fb35cc5495c946be: 2023-07-17 11:15:29,981 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed testRename,,1689592523142.2b74521ed4637f75fb35cc5495c946be. 
2023-07-17 11:15:29,981 DEBUG [RS:0;jenkins-hbase4:37409] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/oldWALs 2023-07-17 11:15:29,981 INFO [RS:0;jenkins-hbase4:37409] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C37409%2C1689592505527:(num 1689592507657) 2023-07-17 11:15:29,981 DEBUG [RS:0;jenkins-hbase4:37409] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-17 11:15:29,982 INFO [RS:0;jenkins-hbase4:37409] regionserver.LeaseManager(133): Closed leases 2023-07-17 11:15:29,982 INFO [RS:0;jenkins-hbase4:37409] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-07-17 11:15:29,986 INFO [RS:0;jenkins-hbase4:37409] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-17 11:15:29,986 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-17 11:15:29,987 INFO [RS:0;jenkins-hbase4:37409] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-17 11:15:29,987 INFO [RS:0;jenkins-hbase4:37409] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-17 11:15:29,988 INFO [RS:0;jenkins-hbase4:37409] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:37409 2023-07-17 11:15:29,999 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=78 B at sequenceid=6 (bloomFilter=true), to=hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/data/hbase/namespace/6a4a58dee597d7e2caeeea613b990689/.tmp/info/2d6ba2d4d926425ea4b47a9a6de608af 2023-07-17 11:15:30,010 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-17 11:15:30,015 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=71.92 KB at sequenceid=200 (bloomFilter=false), to=hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/data/hbase/meta/1588230740/.tmp/info/352beadcab6c4328b641d72236d3cf44 2023-07-17 11:15:30,018 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-17 11:15:30,018 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-17 11:15:30,019 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-17 11:15:30,025 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/data/hbase/namespace/6a4a58dee597d7e2caeeea613b990689/.tmp/info/2d6ba2d4d926425ea4b47a9a6de608af as hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/data/hbase/namespace/6a4a58dee597d7e2caeeea613b990689/info/2d6ba2d4d926425ea4b47a9a6de608af 2023-07-17 11:15:30,029 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 352beadcab6c4328b641d72236d3cf44 2023-07-17 11:15:30,032 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added 
hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/data/hbase/namespace/6a4a58dee597d7e2caeeea613b990689/info/2d6ba2d4d926425ea4b47a9a6de608af, entries=2, sequenceid=6, filesize=4.8 K 2023-07-17 11:15:30,034 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~78 B/78, heapSize ~472 B/472, currentSize=0 B/0 for 6a4a58dee597d7e2caeeea613b990689 in 79ms, sequenceid=6, compaction requested=false 2023-07-17 11:15:30,042 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/data/hbase/namespace/6a4a58dee597d7e2caeeea613b990689/recovered.edits/9.seqid, newMaxSeqId=9, maxSeqId=1 2023-07-17 11:15:30,044 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:namespace,,1689592508087.6a4a58dee597d7e2caeeea613b990689. 2023-07-17 11:15:30,044 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 6a4a58dee597d7e2caeeea613b990689: 2023-07-17 11:15:30,044 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:namespace,,1689592508087.6a4a58dee597d7e2caeeea613b990689. 2023-07-17 11:15:30,044 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing d5111e6d7162bf03312675d4d0d3f80c, disabling compactions & flushes 2023-07-17 11:15:30,044 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1689592508323.d5111e6d7162bf03312675d4d0d3f80c. 2023-07-17 11:15:30,044 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1689592508323.d5111e6d7162bf03312675d4d0d3f80c. 2023-07-17 11:15:30,044 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1689592508323.d5111e6d7162bf03312675d4d0d3f80c. after waiting 0 ms 2023-07-17 11:15:30,044 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1689592508323.d5111e6d7162bf03312675d4d0d3f80c. 
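[Annotation] The "Committing .../.tmp/info/2d6ba2d4d926425ea4b47a9a6de608af as .../info/2d6ba2d4d926425ea4b47a9a6de608af" entry above, followed by "Added ..., entries=2, sequenceid=6, filesize=4.8 K" and "Finished flush ...", is the publish step of a memstore flush: the new HFile is first written under the region's .tmp directory and then moved into the column family directory. The snippet below is only a plain HDFS-level sketch of that move-into-place pattern (paths copied from the log for illustration); it is not the actual HRegionFileSystem code.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class CommitStoreFileSketch {
      public static void main(String[] args) throws Exception {
        FileSystem fs = FileSystem.get(new Configuration());
        // HFile written and closed under the region's .tmp directory first.
        Path tmp = new Path("data/hbase/namespace/6a4a58dee597d7e2caeeea613b990689/.tmp/info/2d6ba2d4d926425ea4b47a9a6de608af");
        // Final location inside the 'info' column family directory.
        Path dst = new Path("data/hbase/namespace/6a4a58dee597d7e2caeeea613b990689/info/2d6ba2d4d926425ea4b47a9a6de608af");
        // A single rename makes the finished file visible; readers never see a partial HFile.
        if (!fs.rename(tmp, dst)) {
          throw new java.io.IOException("rename failed: " + tmp + " -> " + dst);
        }
      }
    }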
2023-07-17 11:15:30,044 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing d5111e6d7162bf03312675d4d0d3f80c 1/1 column families, dataSize=22.06 KB heapSize=36.52 KB 2023-07-17 11:15:30,046 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=2 KB at sequenceid=200 (bloomFilter=false), to=hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/data/hbase/meta/1588230740/.tmp/rep_barrier/f7f7badc591b4322bf3a4ea035a552d2 2023-07-17 11:15:30,053 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for f7f7badc591b4322bf3a4ea035a552d2 2023-07-17 11:15:30,065 DEBUG [Listener at localhost/45539-EventThread] zookeeper.ZKWatcher(600): regionserver:35719-0x10172fe1c5e000b, quorum=127.0.0.1:49750, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,37409,1689592505527 2023-07-17 11:15:30,065 DEBUG [Listener at localhost/45539-EventThread] zookeeper.ZKWatcher(600): regionserver:40489-0x10172fe1c5e0002, quorum=127.0.0.1:49750, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,37409,1689592505527 2023-07-17 11:15:30,065 DEBUG [Listener at localhost/45539-EventThread] zookeeper.ZKWatcher(600): regionserver:37409-0x10172fe1c5e0001, quorum=127.0.0.1:49750, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,37409,1689592505527 2023-07-17 11:15:30,065 DEBUG [Listener at localhost/45539-EventThread] zookeeper.ZKWatcher(600): regionserver:37409-0x10172fe1c5e0001, quorum=127.0.0.1:49750, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-17 11:15:30,065 DEBUG [Listener at localhost/45539-EventThread] zookeeper.ZKWatcher(600): regionserver:40489-0x10172fe1c5e0002, quorum=127.0.0.1:49750, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-17 11:15:30,065 DEBUG [Listener at localhost/45539-EventThread] zookeeper.ZKWatcher(600): regionserver:35719-0x10172fe1c5e000b, quorum=127.0.0.1:49750, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-17 11:15:30,065 DEBUG [Listener at localhost/45539-EventThread] zookeeper.ZKWatcher(600): regionserver:39617-0x10172fe1c5e0003, quorum=127.0.0.1:49750, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,37409,1689592505527 2023-07-17 11:15:30,065 DEBUG [Listener at localhost/45539-EventThread] zookeeper.ZKWatcher(600): master:38451-0x10172fe1c5e0000, quorum=127.0.0.1:49750, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-17 11:15:30,065 DEBUG [Listener at localhost/45539-EventThread] zookeeper.ZKWatcher(600): regionserver:39617-0x10172fe1c5e0003, quorum=127.0.0.1:49750, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-17 11:15:30,067 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,37409,1689592505527] 2023-07-17 11:15:30,067 DEBUG [RegionServerTracker-0] 
master.DeadServer(103): Processing jenkins-hbase4.apache.org,37409,1689592505527; numProcessing=1 2023-07-17 11:15:30,069 DEBUG [RS:3;jenkins-hbase4:35719] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/oldWALs 2023-07-17 11:15:30,069 INFO [RS:3;jenkins-hbase4:35719] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C35719%2C1689592509057:(num 1689592509306) 2023-07-17 11:15:30,069 DEBUG [RS:3;jenkins-hbase4:35719] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-17 11:15:30,069 INFO [RS:3;jenkins-hbase4:35719] regionserver.LeaseManager(133): Closed leases 2023-07-17 11:15:30,070 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,37409,1689592505527 already deleted, retry=false 2023-07-17 11:15:30,070 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,37409,1689592505527 expired; onlineServers=3 2023-07-17 11:15:30,071 INFO [RS:3;jenkins-hbase4:35719] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-07-17 11:15:30,072 INFO [RS:3;jenkins-hbase4:35719] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-17 11:15:30,072 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-17 11:15:30,072 INFO [RS:3;jenkins-hbase4:35719] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-17 11:15:30,072 INFO [RS:3;jenkins-hbase4:35719] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 
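[Annotation] The ZKWatcher entries above (NodeDeleted on /hbase/rs/&lt;server&gt; followed by NodeChildrenChanged on /hbase/rs, then RegionServerTracker "processing expiration" and DeadServer handling) show the ephemeral-znode pattern used to detect a region server going away: each live server holds an ephemeral child of /hbase/rs, and its deletion fires watches on every interested client. Below is a generic ZooKeeper-client sketch of that pattern, not HBase's RegionServerTracker code; the quorum address and znode path come from the log, everything else is assumed.

    import java.util.List;
    import org.apache.zookeeper.Watcher;
    import org.apache.zookeeper.ZooKeeper;

    public class RsMembershipSketch {
      public static void main(String[] args) throws Exception {
        // Connect to the same quorum the log shows (127.0.0.1:49750).
        ZooKeeper zk = new ZooKeeper("127.0.0.1:49750", 30_000, event -> {
          // Fired when a server's ephemeral znode disappears, as in the
          // NodeChildrenChanged events logged above.
          if (event.getType() == Watcher.Event.EventType.NodeChildrenChanged) {
            System.out.println("membership changed under " + event.getPath());
          }
        });
        // Read current membership and re-arm the child watch.
        List<String> live = zk.getChildren("/hbase/rs", true);
        System.out.println("online servers: " + live);
      }
    }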
2023-07-17 11:15:30,073 INFO [RS:3;jenkins-hbase4:35719] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:35719 2023-07-17 11:15:30,073 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=22.06 KB at sequenceid=101 (bloomFilter=true), to=hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/data/hbase/rsgroup/d5111e6d7162bf03312675d4d0d3f80c/.tmp/m/b1611edff4534f51b9cfb1382dd0d4e3 2023-07-17 11:15:30,080 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for b1611edff4534f51b9cfb1382dd0d4e3 2023-07-17 11:15:30,081 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/data/hbase/rsgroup/d5111e6d7162bf03312675d4d0d3f80c/.tmp/m/b1611edff4534f51b9cfb1382dd0d4e3 as hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/data/hbase/rsgroup/d5111e6d7162bf03312675d4d0d3f80c/m/b1611edff4534f51b9cfb1382dd0d4e3 2023-07-17 11:15:30,083 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=3.99 KB at sequenceid=200 (bloomFilter=false), to=hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/data/hbase/meta/1588230740/.tmp/table/9cbd0b3f681b41b59f258d26dc53a615 2023-07-17 11:15:30,088 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for b1611edff4534f51b9cfb1382dd0d4e3 2023-07-17 11:15:30,088 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/data/hbase/rsgroup/d5111e6d7162bf03312675d4d0d3f80c/m/b1611edff4534f51b9cfb1382dd0d4e3, entries=22, sequenceid=101, filesize=5.9 K 2023-07-17 11:15:30,089 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 9cbd0b3f681b41b59f258d26dc53a615 2023-07-17 11:15:30,089 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~22.06 KB/22588, heapSize ~36.51 KB/37384, currentSize=0 B/0 for d5111e6d7162bf03312675d4d0d3f80c in 45ms, sequenceid=101, compaction requested=false 2023-07-17 11:15:30,089 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:rsgroup' 2023-07-17 11:15:30,090 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/data/hbase/meta/1588230740/.tmp/info/352beadcab6c4328b641d72236d3cf44 as hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/data/hbase/meta/1588230740/info/352beadcab6c4328b641d72236d3cf44 2023-07-17 11:15:30,102 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/data/hbase/rsgroup/d5111e6d7162bf03312675d4d0d3f80c/recovered.edits/104.seqid, newMaxSeqId=104, maxSeqId=29 2023-07-17 11:15:30,103 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] 
coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-17 11:15:30,103 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1689592508323.d5111e6d7162bf03312675d4d0d3f80c. 2023-07-17 11:15:30,104 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for d5111e6d7162bf03312675d4d0d3f80c: 2023-07-17 11:15:30,104 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:rsgroup,,1689592508323.d5111e6d7162bf03312675d4d0d3f80c. 2023-07-17 11:15:30,104 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 21527a315e64c88028dc354e9a834764, disabling compactions & flushes 2023-07-17 11:15:30,104 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region unmovedTable,,1689592524805.21527a315e64c88028dc354e9a834764. 2023-07-17 11:15:30,104 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on unmovedTable,,1689592524805.21527a315e64c88028dc354e9a834764. 2023-07-17 11:15:30,104 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on unmovedTable,,1689592524805.21527a315e64c88028dc354e9a834764. after waiting 0 ms 2023-07-17 11:15:30,104 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region unmovedTable,,1689592524805.21527a315e64c88028dc354e9a834764. 2023-07-17 11:15:30,106 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 352beadcab6c4328b641d72236d3cf44 2023-07-17 11:15:30,107 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/data/hbase/meta/1588230740/info/352beadcab6c4328b641d72236d3cf44, entries=97, sequenceid=200, filesize=15.9 K 2023-07-17 11:15:30,107 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/data/hbase/meta/1588230740/.tmp/rep_barrier/f7f7badc591b4322bf3a4ea035a552d2 as hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/data/hbase/meta/1588230740/rep_barrier/f7f7badc591b4322bf3a4ea035a552d2 2023-07-17 11:15:30,108 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/data/default/unmovedTable/21527a315e64c88028dc354e9a834764/recovered.edits/10.seqid, newMaxSeqId=10, maxSeqId=7 2023-07-17 11:15:30,109 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed unmovedTable,,1689592524805.21527a315e64c88028dc354e9a834764. 2023-07-17 11:15:30,109 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 21527a315e64c88028dc354e9a834764: 2023-07-17 11:15:30,109 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed unmovedTable,,1689592524805.21527a315e64c88028dc354e9a834764. 
2023-07-17 11:15:30,115 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for f7f7badc591b4322bf3a4ea035a552d2 2023-07-17 11:15:30,115 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/data/hbase/meta/1588230740/rep_barrier/f7f7badc591b4322bf3a4ea035a552d2, entries=18, sequenceid=200, filesize=6.9 K 2023-07-17 11:15:30,116 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/data/hbase/meta/1588230740/.tmp/table/9cbd0b3f681b41b59f258d26dc53a615 as hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/data/hbase/meta/1588230740/table/9cbd0b3f681b41b59f258d26dc53a615 2023-07-17 11:15:30,122 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 9cbd0b3f681b41b59f258d26dc53a615 2023-07-17 11:15:30,123 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/data/hbase/meta/1588230740/table/9cbd0b3f681b41b59f258d26dc53a615, entries=31, sequenceid=200, filesize=7.4 K 2023-07-17 11:15:30,123 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~77.90 KB/79773, heapSize ~122.79 KB/125736, currentSize=0 B/0 for 1588230740 in 165ms, sequenceid=200, compaction requested=false 2023-07-17 11:15:30,131 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/data/hbase/meta/1588230740/recovered.edits/203.seqid, newMaxSeqId=203, maxSeqId=1 2023-07-17 11:15:30,131 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-17 11:15:30,132 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-17 11:15:30,132 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-17 11:15:30,132 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:meta,,1.1588230740 2023-07-17 11:15:30,156 INFO [RS:2;jenkins-hbase4:39617] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,39617,1689592505673; all regions closed. 2023-07-17 11:15:30,157 INFO [RS:1;jenkins-hbase4:40489] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,40489,1689592505619; all regions closed. 
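[Annotation] The "Flushing 1588230740 3/3 column families" through "Finished flush of dataSize ~77.90 KB ... for 1588230740" entries above are the close-time flush of hbase:meta (info, rep_barrier and table families each get a new store file before the region closes). The same kind of flush can be requested explicitly through the client Admin API; the snippet below is a minimal sketch of that call, not code from this test.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;

    public class FlushSketch {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        try (Connection conn = ConnectionFactory.createConnection(conf);
             Admin admin = conn.getAdmin()) {
          // Asks the hosting region server to flush the memstore to new HFiles,
          // producing "Flushing ..." / "Finished flush ..." entries like those
          // above (shown here for hbase:meta; any TableName works).
          admin.flush(TableName.META_TABLE_NAME);
        }
      }
    }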
2023-07-17 11:15:30,159 WARN [Close-WAL-Writer-0] asyncfs.FanOutOneBlockAsyncDFSOutputHelper(641): complete file /user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/WALs/jenkins-hbase4.apache.org,39617,1689592505673/jenkins-hbase4.apache.org%2C39617%2C1689592505673.1689592507657 not finished, retry = 0 2023-07-17 11:15:30,165 DEBUG [RS:1;jenkins-hbase4:40489] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/oldWALs 2023-07-17 11:15:30,165 INFO [RS:1;jenkins-hbase4:40489] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C40489%2C1689592505619.meta:.meta(num 1689592507807) 2023-07-17 11:15:30,168 DEBUG [Listener at localhost/45539-EventThread] zookeeper.ZKWatcher(600): regionserver:37409-0x10172fe1c5e0001, quorum=127.0.0.1:49750, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-17 11:15:30,168 INFO [RS:0;jenkins-hbase4:37409] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,37409,1689592505527; zookeeper connection closed. 2023-07-17 11:15:30,168 DEBUG [Listener at localhost/45539-EventThread] zookeeper.ZKWatcher(600): regionserver:37409-0x10172fe1c5e0001, quorum=127.0.0.1:49750, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-17 11:15:30,168 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@19ec9bcf] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@19ec9bcf 2023-07-17 11:15:30,169 DEBUG [Listener at localhost/45539-EventThread] zookeeper.ZKWatcher(600): regionserver:35719-0x10172fe1c5e000b, quorum=127.0.0.1:49750, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,35719,1689592509057 2023-07-17 11:15:30,169 DEBUG [Listener at localhost/45539-EventThread] zookeeper.ZKWatcher(600): regionserver:39617-0x10172fe1c5e0003, quorum=127.0.0.1:49750, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,35719,1689592509057 2023-07-17 11:15:30,169 DEBUG [Listener at localhost/45539-EventThread] zookeeper.ZKWatcher(600): master:38451-0x10172fe1c5e0000, quorum=127.0.0.1:49750, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-17 11:15:30,169 DEBUG [Listener at localhost/45539-EventThread] zookeeper.ZKWatcher(600): regionserver:40489-0x10172fe1c5e0002, quorum=127.0.0.1:49750, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,35719,1689592509057 2023-07-17 11:15:30,170 DEBUG [RS:1;jenkins-hbase4:40489] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/oldWALs 2023-07-17 11:15:30,170 INFO [RS:1;jenkins-hbase4:40489] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C40489%2C1689592505619:(num 1689592507657) 2023-07-17 11:15:30,170 DEBUG [RS:1;jenkins-hbase4:40489] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-17 11:15:30,170 INFO [RS:1;jenkins-hbase4:40489] regionserver.LeaseManager(133): Closed leases 2023-07-17 11:15:30,171 INFO [RS:1;jenkins-hbase4:40489] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, 
unit=MILLISECONDS] on shutdown 2023-07-17 11:15:30,171 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,35719,1689592509057] 2023-07-17 11:15:30,171 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-17 11:15:30,171 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,35719,1689592509057; numProcessing=2 2023-07-17 11:15:30,171 INFO [RS:1;jenkins-hbase4:40489] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:40489 2023-07-17 11:15:30,173 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,35719,1689592509057 already deleted, retry=false 2023-07-17 11:15:30,173 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,35719,1689592509057 expired; onlineServers=2 2023-07-17 11:15:30,177 DEBUG [Listener at localhost/45539-EventThread] zookeeper.ZKWatcher(600): regionserver:40489-0x10172fe1c5e0002, quorum=127.0.0.1:49750, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,40489,1689592505619 2023-07-17 11:15:30,178 DEBUG [Listener at localhost/45539-EventThread] zookeeper.ZKWatcher(600): regionserver:39617-0x10172fe1c5e0003, quorum=127.0.0.1:49750, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,40489,1689592505619 2023-07-17 11:15:30,178 DEBUG [Listener at localhost/45539-EventThread] zookeeper.ZKWatcher(600): master:38451-0x10172fe1c5e0000, quorum=127.0.0.1:49750, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-17 11:15:30,179 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,40489,1689592505619] 2023-07-17 11:15:30,179 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,40489,1689592505619; numProcessing=3 2023-07-17 11:15:30,181 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,40489,1689592505619 already deleted, retry=false 2023-07-17 11:15:30,181 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,40489,1689592505619 expired; onlineServers=1 2023-07-17 11:15:30,262 DEBUG [RS:2;jenkins-hbase4:39617] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/oldWALs 2023-07-17 11:15:30,263 INFO [RS:2;jenkins-hbase4:39617] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C39617%2C1689592505673:(num 1689592507657) 2023-07-17 11:15:30,263 DEBUG [RS:2;jenkins-hbase4:39617] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-17 11:15:30,263 INFO [RS:2;jenkins-hbase4:39617] regionserver.LeaseManager(133): Closed leases 2023-07-17 11:15:30,263 INFO [RS:2;jenkins-hbase4:39617] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-07-17 11:15:30,263 INFO [RS:2;jenkins-hbase4:39617] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 
2023-07-17 11:15:30,263 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-17 11:15:30,263 INFO [RS:2;jenkins-hbase4:39617] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-17 11:15:30,263 INFO [RS:2;jenkins-hbase4:39617] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-17 11:15:30,264 INFO [RS:2;jenkins-hbase4:39617] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:39617 2023-07-17 11:15:30,268 DEBUG [Listener at localhost/45539-EventThread] zookeeper.ZKWatcher(600): regionserver:39617-0x10172fe1c5e0003, quorum=127.0.0.1:49750, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,39617,1689592505673 2023-07-17 11:15:30,268 DEBUG [Listener at localhost/45539-EventThread] zookeeper.ZKWatcher(600): master:38451-0x10172fe1c5e0000, quorum=127.0.0.1:49750, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-17 11:15:30,269 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,39617,1689592505673] 2023-07-17 11:15:30,269 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,39617,1689592505673; numProcessing=4 2023-07-17 11:15:30,271 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,39617,1689592505673 already deleted, retry=false 2023-07-17 11:15:30,271 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,39617,1689592505673 expired; onlineServers=0 2023-07-17 11:15:30,271 INFO [RegionServerTracker-0] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,38451,1689592503576' ***** 2023-07-17 11:15:30,271 INFO [RegionServerTracker-0] regionserver.HRegionServer(2311): STOPPED: Cluster shutdown set; onlineServer=0 2023-07-17 11:15:30,271 DEBUG [M:0;jenkins-hbase4:38451] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@45edf6f1, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-17 11:15:30,272 INFO [M:0;jenkins-hbase4:38451] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-17 11:15:30,274 DEBUG [Listener at localhost/45539-EventThread] zookeeper.ZKWatcher(600): master:38451-0x10172fe1c5e0000, quorum=127.0.0.1:49750, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/master 2023-07-17 11:15:30,274 DEBUG [Listener at localhost/45539-EventThread] zookeeper.ZKWatcher(600): master:38451-0x10172fe1c5e0000, quorum=127.0.0.1:49750, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-17 11:15:30,274 INFO [M:0;jenkins-hbase4:38451] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@64480317{master,/,null,STOPPED}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/master} 2023-07-17 11:15:30,274 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:38451-0x10172fe1c5e0000, quorum=127.0.0.1:49750, baseZNode=/hbase Set watcher on znode that does not yet exist, 
/hbase/master 2023-07-17 11:15:30,274 INFO [M:0;jenkins-hbase4:38451] server.AbstractConnector(383): Stopped ServerConnector@71df00d8{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-17 11:15:30,274 INFO [M:0;jenkins-hbase4:38451] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-17 11:15:30,275 INFO [M:0;jenkins-hbase4:38451] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@5320c268{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,STOPPED} 2023-07-17 11:15:30,276 INFO [M:0;jenkins-hbase4:38451] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@7a39ade6{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/b3748a71-8fb9-11d9-80d5-27b001236be5/hadoop.log.dir/,STOPPED} 2023-07-17 11:15:30,276 INFO [M:0;jenkins-hbase4:38451] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,38451,1689592503576 2023-07-17 11:15:30,276 INFO [M:0;jenkins-hbase4:38451] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,38451,1689592503576; all regions closed. 2023-07-17 11:15:30,276 DEBUG [M:0;jenkins-hbase4:38451] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-17 11:15:30,276 INFO [M:0;jenkins-hbase4:38451] master.HMaster(1491): Stopping master jetty server 2023-07-17 11:15:30,277 INFO [M:0;jenkins-hbase4:38451] server.AbstractConnector(383): Stopped ServerConnector@2490749c{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-17 11:15:30,277 DEBUG [M:0;jenkins-hbase4:38451] cleaner.LogCleaner(198): Cancelling LogCleaner 2023-07-17 11:15:30,277 WARN [OldWALsCleaner-0] cleaner.LogCleaner(186): Interrupted while cleaning old WALs, will try to clean it next round. Exiting. 2023-07-17 11:15:30,277 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1689592507220] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1689592507220,5,FailOnTimeoutGroup] 2023-07-17 11:15:30,277 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1689592507220] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1689592507220,5,FailOnTimeoutGroup] 2023-07-17 11:15:30,277 DEBUG [M:0;jenkins-hbase4:38451] cleaner.HFileCleaner(317): Stopping file delete threads 2023-07-17 11:15:30,278 INFO [M:0;jenkins-hbase4:38451] master.MasterMobCompactionThread(168): Waiting for Mob Compaction Thread to finish... 2023-07-17 11:15:30,278 INFO [M:0;jenkins-hbase4:38451] master.MasterMobCompactionThread(168): Waiting for Region Server Mob Compaction Thread to finish... 
2023-07-17 11:15:30,278 INFO [M:0;jenkins-hbase4:38451] hbase.ChoreService(369): Chore service for: master/jenkins-hbase4:0 had [] on shutdown 2023-07-17 11:15:30,278 DEBUG [M:0;jenkins-hbase4:38451] master.HMaster(1512): Stopping service threads 2023-07-17 11:15:30,278 INFO [M:0;jenkins-hbase4:38451] procedure2.RemoteProcedureDispatcher(119): Stopping procedure remote dispatcher 2023-07-17 11:15:30,278 ERROR [M:0;jenkins-hbase4:38451] procedure2.ProcedureExecutor(653): ThreadGroup java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] contains running threads; null: See STDOUT java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] Thread[HFileArchiver-1,5,PEWorkerGroup] Thread[HFileArchiver-2,5,PEWorkerGroup] Thread[HFileArchiver-3,5,PEWorkerGroup] Thread[HFileArchiver-4,5,PEWorkerGroup] Thread[HFileArchiver-5,5,PEWorkerGroup] Thread[HFileArchiver-6,5,PEWorkerGroup] Thread[HFileArchiver-7,5,PEWorkerGroup] Thread[HFileArchiver-8,5,PEWorkerGroup] 2023-07-17 11:15:30,279 INFO [M:0;jenkins-hbase4:38451] region.RegionProcedureStore(113): Stopping the Region Procedure Store, isAbort=false 2023-07-17 11:15:30,279 DEBUG [normalizer-worker-0] normalizer.RegionNormalizerWorker(174): interrupt detected. terminating. 2023-07-17 11:15:30,283 DEBUG [M:0;jenkins-hbase4:38451] zookeeper.ZKUtil(398): master:38451-0x10172fe1c5e0000, quorum=127.0.0.1:49750, baseZNode=/hbase Unable to get data of znode /hbase/master because node does not exist (not an error) 2023-07-17 11:15:30,283 WARN [M:0;jenkins-hbase4:38451] master.ActiveMasterManager(326): Failed get of master address: java.io.IOException: Can't get master address from ZooKeeper; znode data == null 2023-07-17 11:15:30,283 INFO [M:0;jenkins-hbase4:38451] assignment.AssignmentManager(315): Stopping assignment manager 2023-07-17 11:15:30,283 INFO [M:0;jenkins-hbase4:38451] region.MasterRegion(167): Closing local region {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, isAbort=false 2023-07-17 11:15:30,283 DEBUG [M:0;jenkins-hbase4:38451] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-07-17 11:15:30,283 INFO [M:0;jenkins-hbase4:38451] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-17 11:15:30,283 DEBUG [M:0;jenkins-hbase4:38451] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-17 11:15:30,283 DEBUG [M:0;jenkins-hbase4:38451] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-07-17 11:15:30,283 DEBUG [M:0;jenkins-hbase4:38451] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
2023-07-17 11:15:30,283 INFO [M:0;jenkins-hbase4:38451] regionserver.HRegion(2745): Flushing 1595e783b53d99cd5eef43b6debb2682 1/1 column families, dataSize=500.01 KB heapSize=598 KB 2023-07-17 11:15:30,298 INFO [M:0;jenkins-hbase4:38451] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=500.01 KB at sequenceid=1104 (bloomFilter=true), to=hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/3dc3b115d7814ffba11df67cd9a58112 2023-07-17 11:15:30,305 DEBUG [M:0;jenkins-hbase4:38451] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/3dc3b115d7814ffba11df67cd9a58112 as hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/3dc3b115d7814ffba11df67cd9a58112 2023-07-17 11:15:30,310 INFO [M:0;jenkins-hbase4:38451] regionserver.HStore(1080): Added hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/3dc3b115d7814ffba11df67cd9a58112, entries=148, sequenceid=1104, filesize=26.2 K 2023-07-17 11:15:30,311 INFO [M:0;jenkins-hbase4:38451] regionserver.HRegion(2948): Finished flush of dataSize ~500.01 KB/512008, heapSize ~597.98 KB/612336, currentSize=0 B/0 for 1595e783b53d99cd5eef43b6debb2682 in 28ms, sequenceid=1104, compaction requested=false 2023-07-17 11:15:30,314 INFO [M:0;jenkins-hbase4:38451] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-17 11:15:30,314 DEBUG [M:0;jenkins-hbase4:38451] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-17 11:15:30,319 INFO [master:store-WAL-Roller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-17 11:15:30,320 INFO [M:0;jenkins-hbase4:38451] flush.MasterFlushTableProcedureManager(83): stop: server shutting down. 2023-07-17 11:15:30,320 INFO [M:0;jenkins-hbase4:38451] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:38451 2023-07-17 11:15:30,323 DEBUG [M:0;jenkins-hbase4:38451] zookeeper.RecoverableZooKeeper(172): Node /hbase/rs/jenkins-hbase4.apache.org,38451,1689592503576 already deleted, retry=false 2023-07-17 11:15:30,522 DEBUG [Listener at localhost/45539-EventThread] zookeeper.ZKWatcher(600): master:38451-0x10172fe1c5e0000, quorum=127.0.0.1:49750, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-17 11:15:30,522 INFO [M:0;jenkins-hbase4:38451] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,38451,1689592503576; zookeeper connection closed. 
2023-07-17 11:15:30,522 DEBUG [Listener at localhost/45539-EventThread] zookeeper.ZKWatcher(600): master:38451-0x10172fe1c5e0000, quorum=127.0.0.1:49750, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-17 11:15:30,622 DEBUG [Listener at localhost/45539-EventThread] zookeeper.ZKWatcher(600): regionserver:39617-0x10172fe1c5e0003, quorum=127.0.0.1:49750, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-17 11:15:30,622 DEBUG [Listener at localhost/45539-EventThread] zookeeper.ZKWatcher(600): regionserver:39617-0x10172fe1c5e0003, quorum=127.0.0.1:49750, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-17 11:15:30,622 INFO [RS:2;jenkins-hbase4:39617] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,39617,1689592505673; zookeeper connection closed. 2023-07-17 11:15:30,623 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@4be13d9b] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@4be13d9b 2023-07-17 11:15:30,722 DEBUG [Listener at localhost/45539-EventThread] zookeeper.ZKWatcher(600): regionserver:40489-0x10172fe1c5e0002, quorum=127.0.0.1:49750, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-17 11:15:30,722 INFO [RS:1;jenkins-hbase4:40489] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,40489,1689592505619; zookeeper connection closed. 2023-07-17 11:15:30,722 DEBUG [Listener at localhost/45539-EventThread] zookeeper.ZKWatcher(600): regionserver:40489-0x10172fe1c5e0002, quorum=127.0.0.1:49750, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-17 11:15:30,723 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@29dfe513] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@29dfe513 2023-07-17 11:15:30,823 INFO [RS:3;jenkins-hbase4:35719] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,35719,1689592509057; zookeeper connection closed. 
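[Annotation] Just below, JVMClusterUtil reports "Shutdown of 1 master(s) and 4 regionserver(s) complete", the DFS datanodes and MiniZK cluster are torn down ("Minicluster is down"), and a fresh cluster is immediately started with StartMiniClusterOption{numMasters=1, numRegionServers=3, numDataNodes=3, numZkServers=1}. That stop-then-start cycle is what an HBaseTestingUtility-based harness drives between test runs; the sketch below assumes that standard utility API and is not the source of TestRSGroupsAdmin1.

    import org.apache.hadoop.hbase.HBaseTestingUtility;
    import org.apache.hadoop.hbase.StartMiniClusterOption;

    public class MiniClusterCycleSketch {
      public static void main(String[] args) throws Exception {
        HBaseTestingUtility util = new HBaseTestingUtility();
        // Matches the StartMiniClusterOption printed in the log.
        StartMiniClusterOption option = StartMiniClusterOption.builder()
            .numMasters(1)
            .numRegionServers(3)
            .numDataNodes(3)
            .numZkServers(1)
            .build();
        util.startMiniCluster(option);
        try {
          // ... run tests against util.getConnection() / util.getAdmin() ...
        } finally {
          // Produces the CLOSE / flush / "Minicluster is down" sequence seen above.
          util.shutdownMiniCluster();
        }
      }
    }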
2023-07-17 11:15:30,823 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@189fc071] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@189fc071 2023-07-17 11:15:30,823 INFO [Listener at localhost/45539] util.JVMClusterUtil(335): Shutdown of 1 master(s) and 4 regionserver(s) complete 2023-07-17 11:15:30,823 DEBUG [Listener at localhost/45539-EventThread] zookeeper.ZKWatcher(600): regionserver:35719-0x10172fe1c5e000b, quorum=127.0.0.1:49750, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-17 11:15:30,823 DEBUG [Listener at localhost/45539-EventThread] zookeeper.ZKWatcher(600): regionserver:35719-0x10172fe1c5e000b, quorum=127.0.0.1:49750, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-17 11:15:30,824 WARN [Listener at localhost/45539] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-17 11:15:30,827 INFO [Listener at localhost/45539] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-17 11:15:30,933 WARN [BP-1649377864-172.31.14.131-1689592499733 heartbeating to localhost/127.0.0.1:41739] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-17 11:15:30,934 WARN [BP-1649377864-172.31.14.131-1689592499733 heartbeating to localhost/127.0.0.1:41739] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1649377864-172.31.14.131-1689592499733 (Datanode Uuid a55b6569-fe9c-446c-b28f-3252356494e1) service to localhost/127.0.0.1:41739 2023-07-17 11:15:30,936 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/b3748a71-8fb9-11d9-80d5-27b001236be5/cluster_8935f0ee-6b8c-6a1e-47f3-fe4545550a67/dfs/data/data5/current/BP-1649377864-172.31.14.131-1689592499733] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-17 11:15:30,936 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/b3748a71-8fb9-11d9-80d5-27b001236be5/cluster_8935f0ee-6b8c-6a1e-47f3-fe4545550a67/dfs/data/data6/current/BP-1649377864-172.31.14.131-1689592499733] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-17 11:15:30,939 WARN [Listener at localhost/45539] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-17 11:15:30,944 INFO [Listener at localhost/45539] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-17 11:15:31,047 WARN [BP-1649377864-172.31.14.131-1689592499733 heartbeating to localhost/127.0.0.1:41739] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-17 11:15:31,047 WARN [BP-1649377864-172.31.14.131-1689592499733 heartbeating to localhost/127.0.0.1:41739] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1649377864-172.31.14.131-1689592499733 (Datanode Uuid ae5d8d85-83ef-4ff2-aba9-e83817d5c969) service to localhost/127.0.0.1:41739 2023-07-17 11:15:31,048 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/b3748a71-8fb9-11d9-80d5-27b001236be5/cluster_8935f0ee-6b8c-6a1e-47f3-fe4545550a67/dfs/data/data3/current/BP-1649377864-172.31.14.131-1689592499733] 
fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-17 11:15:31,048 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/b3748a71-8fb9-11d9-80d5-27b001236be5/cluster_8935f0ee-6b8c-6a1e-47f3-fe4545550a67/dfs/data/data4/current/BP-1649377864-172.31.14.131-1689592499733] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-17 11:15:31,050 WARN [Listener at localhost/45539] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-17 11:15:31,054 INFO [Listener at localhost/45539] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-17 11:15:31,157 WARN [BP-1649377864-172.31.14.131-1689592499733 heartbeating to localhost/127.0.0.1:41739] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-17 11:15:31,157 WARN [BP-1649377864-172.31.14.131-1689592499733 heartbeating to localhost/127.0.0.1:41739] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1649377864-172.31.14.131-1689592499733 (Datanode Uuid d231aabc-76cd-4220-8780-d6431b350fef) service to localhost/127.0.0.1:41739 2023-07-17 11:15:31,158 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/b3748a71-8fb9-11d9-80d5-27b001236be5/cluster_8935f0ee-6b8c-6a1e-47f3-fe4545550a67/dfs/data/data1/current/BP-1649377864-172.31.14.131-1689592499733] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-17 11:15:31,158 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/b3748a71-8fb9-11d9-80d5-27b001236be5/cluster_8935f0ee-6b8c-6a1e-47f3-fe4545550a67/dfs/data/data2/current/BP-1649377864-172.31.14.131-1689592499733] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-17 11:15:31,191 INFO [Listener at localhost/45539] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-17 11:15:31,313 INFO [Listener at localhost/45539] zookeeper.MiniZooKeeperCluster(344): Shutdown MiniZK cluster with all ZK servers 2023-07-17 11:15:31,373 INFO [Listener at localhost/45539] hbase.HBaseTestingUtility(1293): Minicluster is down 2023-07-17 11:15:31,373 INFO [Listener at localhost/45539] hbase.HBaseTestingUtility(1068): Starting up minicluster with option: StartMiniClusterOption{numMasters=1, masterClass=null, numRegionServers=3, rsPorts=, rsClass=null, numDataNodes=3, dataNodeHosts=null, numZkServers=1, createRootDir=false, createWALDir=false} 2023-07-17 11:15:31,374 INFO [Listener at localhost/45539] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.log.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/b3748a71-8fb9-11d9-80d5-27b001236be5/hadoop.log.dir so I do NOT create it in target/test-data/9ec773e1-25e3-37c1-332c-cd1fc3805ad2 2023-07-17 11:15:31,374 INFO [Listener at localhost/45539] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.tmp.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/b3748a71-8fb9-11d9-80d5-27b001236be5/hadoop.tmp.dir so I do NOT create it in 
target/test-data/9ec773e1-25e3-37c1-332c-cd1fc3805ad2 2023-07-17 11:15:31,374 INFO [Listener at localhost/45539] hbase.HBaseZKTestingUtility(82): Created new mini-cluster data directory: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9ec773e1-25e3-37c1-332c-cd1fc3805ad2/cluster_b50245d1-747f-84ca-9fff-4598d59fad4e, deleteOnExit=true 2023-07-17 11:15:31,374 INFO [Listener at localhost/45539] hbase.HBaseTestingUtility(1082): STARTING DFS 2023-07-17 11:15:31,374 INFO [Listener at localhost/45539] hbase.HBaseTestingUtility(772): Setting test.cache.data to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9ec773e1-25e3-37c1-332c-cd1fc3805ad2/test.cache.data in system properties and HBase conf 2023-07-17 11:15:31,374 INFO [Listener at localhost/45539] hbase.HBaseTestingUtility(772): Setting hadoop.tmp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9ec773e1-25e3-37c1-332c-cd1fc3805ad2/hadoop.tmp.dir in system properties and HBase conf 2023-07-17 11:15:31,374 INFO [Listener at localhost/45539] hbase.HBaseTestingUtility(772): Setting hadoop.log.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9ec773e1-25e3-37c1-332c-cd1fc3805ad2/hadoop.log.dir in system properties and HBase conf 2023-07-17 11:15:31,374 INFO [Listener at localhost/45539] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.local.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9ec773e1-25e3-37c1-332c-cd1fc3805ad2/mapreduce.cluster.local.dir in system properties and HBase conf 2023-07-17 11:15:31,374 INFO [Listener at localhost/45539] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.temp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9ec773e1-25e3-37c1-332c-cd1fc3805ad2/mapreduce.cluster.temp.dir in system properties and HBase conf 2023-07-17 11:15:31,374 INFO [Listener at localhost/45539] hbase.HBaseTestingUtility(759): read short circuit is OFF 2023-07-17 11:15:31,375 DEBUG [Listener at localhost/45539] fs.HFileSystem(308): The file system is not a DistributedFileSystem. 
Skipping on block location reordering 2023-07-17 11:15:31,375 INFO [Listener at localhost/45539] hbase.HBaseTestingUtility(772): Setting yarn.node-labels.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9ec773e1-25e3-37c1-332c-cd1fc3805ad2/yarn.node-labels.fs-store.root-dir in system properties and HBase conf 2023-07-17 11:15:31,375 INFO [Listener at localhost/45539] hbase.HBaseTestingUtility(772): Setting yarn.node-attribute.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9ec773e1-25e3-37c1-332c-cd1fc3805ad2/yarn.node-attribute.fs-store.root-dir in system properties and HBase conf 2023-07-17 11:15:31,375 INFO [Listener at localhost/45539] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.log-dirs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9ec773e1-25e3-37c1-332c-cd1fc3805ad2/yarn.nodemanager.log-dirs in system properties and HBase conf 2023-07-17 11:15:31,375 INFO [Listener at localhost/45539] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9ec773e1-25e3-37c1-332c-cd1fc3805ad2/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-07-17 11:15:31,375 INFO [Listener at localhost/45539] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.active-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9ec773e1-25e3-37c1-332c-cd1fc3805ad2/yarn.timeline-service.entity-group-fs-store.active-dir in system properties and HBase conf 2023-07-17 11:15:31,375 INFO [Listener at localhost/45539] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.done-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9ec773e1-25e3-37c1-332c-cd1fc3805ad2/yarn.timeline-service.entity-group-fs-store.done-dir in system properties and HBase conf 2023-07-17 11:15:31,375 INFO [Listener at localhost/45539] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9ec773e1-25e3-37c1-332c-cd1fc3805ad2/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-07-17 11:15:31,376 INFO [Listener at localhost/45539] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9ec773e1-25e3-37c1-332c-cd1fc3805ad2/dfs.journalnode.edits.dir in system properties and HBase conf 2023-07-17 11:15:31,376 INFO [Listener at localhost/45539] hbase.HBaseTestingUtility(772): Setting dfs.datanode.shared.file.descriptor.paths to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9ec773e1-25e3-37c1-332c-cd1fc3805ad2/dfs.datanode.shared.file.descriptor.paths in system properties and HBase conf 2023-07-17 11:15:31,376 INFO [Listener at localhost/45539] hbase.HBaseTestingUtility(772): Setting nfs.dump.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9ec773e1-25e3-37c1-332c-cd1fc3805ad2/nfs.dump.dir in system properties and HBase conf 2023-07-17 11:15:31,376 INFO [Listener at localhost/45539] 
hbase.HBaseTestingUtility(772): Setting java.io.tmpdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9ec773e1-25e3-37c1-332c-cd1fc3805ad2/java.io.tmpdir in system properties and HBase conf 2023-07-17 11:15:31,376 INFO [Listener at localhost/45539] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9ec773e1-25e3-37c1-332c-cd1fc3805ad2/dfs.journalnode.edits.dir in system properties and HBase conf 2023-07-17 11:15:31,376 INFO [Listener at localhost/45539] hbase.HBaseTestingUtility(772): Setting dfs.provided.aliasmap.inmemory.leveldb.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9ec773e1-25e3-37c1-332c-cd1fc3805ad2/dfs.provided.aliasmap.inmemory.leveldb.dir in system properties and HBase conf 2023-07-17 11:15:31,376 INFO [Listener at localhost/45539] hbase.HBaseTestingUtility(772): Setting fs.s3a.committer.staging.tmp.path to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9ec773e1-25e3-37c1-332c-cd1fc3805ad2/fs.s3a.committer.staging.tmp.path in system properties and HBase conf Formatting using clusterid: testClusterID 2023-07-17 11:15:31,381 WARN [Listener at localhost/45539] conf.Configuration(1701): No unit for dfs.heartbeat.interval(3) assuming SECONDS 2023-07-17 11:15:31,381 WARN [Listener at localhost/45539] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-07-17 11:15:31,409 DEBUG [Listener at localhost/45539-EventThread] zookeeper.ZKWatcher(600): VerifyingRSGroupAdminClient-0x10172fe1c5e000a, quorum=127.0.0.1:49750, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Disconnected, path=null 2023-07-17 11:15:31,409 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(630): VerifyingRSGroupAdminClient-0x10172fe1c5e000a, quorum=127.0.0.1:49750, baseZNode=/hbase Received Disconnected from ZooKeeper, ignoring 2023-07-17 11:15:31,422 WARN [Listener at localhost/45539] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-17 11:15:31,424 INFO [Listener at localhost/45539] log.Slf4jLog(67): jetty-6.1.26 2023-07-17 11:15:31,430 INFO [Listener at localhost/45539] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/hdfs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9ec773e1-25e3-37c1-332c-cd1fc3805ad2/java.io.tmpdir/Jetty_localhost_36879_hdfs____twof41/webapp 2023-07-17 11:15:31,524 INFO [Listener at localhost/45539] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:36879 2023-07-17 11:15:31,529 WARN [Listener at localhost/45539] conf.Configuration(1701): No unit for dfs.heartbeat.interval(3) assuming SECONDS 2023-07-17 11:15:31,529 WARN [Listener at localhost/45539] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-07-17 11:15:31,573 WARN [Listener at localhost/35063] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-17 11:15:31,583 WARN [Listener at localhost/35063] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-17 11:15:31,585 WARN [Listener 
at localhost/35063] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-17 11:15:31,586 INFO [Listener at localhost/35063] log.Slf4jLog(67): jetty-6.1.26 2023-07-17 11:15:31,592 INFO [Listener at localhost/35063] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9ec773e1-25e3-37c1-332c-cd1fc3805ad2/java.io.tmpdir/Jetty_localhost_44669_datanode____z4e82o/webapp 2023-07-17 11:15:31,688 INFO [Listener at localhost/35063] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:44669 2023-07-17 11:15:31,694 WARN [Listener at localhost/43141] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-17 11:15:31,713 WARN [Listener at localhost/43141] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-17 11:15:31,715 WARN [Listener at localhost/43141] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-17 11:15:31,716 INFO [Listener at localhost/43141] log.Slf4jLog(67): jetty-6.1.26 2023-07-17 11:15:31,720 INFO [Listener at localhost/43141] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9ec773e1-25e3-37c1-332c-cd1fc3805ad2/java.io.tmpdir/Jetty_localhost_38851_datanode____4zb4xi/webapp 2023-07-17 11:15:31,807 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x102599064b0cc7a: Processing first storage report for DS-b2123e07-7285-417a-a73e-7dfcf35893e7 from datanode b3edc182-dd96-4c03-9a1d-664ad1190729 2023-07-17 11:15:31,807 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x102599064b0cc7a: from storage DS-b2123e07-7285-417a-a73e-7dfcf35893e7 node DatanodeRegistration(127.0.0.1:40397, datanodeUuid=b3edc182-dd96-4c03-9a1d-664ad1190729, infoPort=38285, infoSecurePort=0, ipcPort=43141, storageInfo=lv=-57;cid=testClusterID;nsid=1785685089;c=1689592531384), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-17 11:15:31,808 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x102599064b0cc7a: Processing first storage report for DS-758adbc7-6372-4f5c-b566-56229aeafadf from datanode b3edc182-dd96-4c03-9a1d-664ad1190729 2023-07-17 11:15:31,808 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x102599064b0cc7a: from storage DS-758adbc7-6372-4f5c-b566-56229aeafadf node DatanodeRegistration(127.0.0.1:40397, datanodeUuid=b3edc182-dd96-4c03-9a1d-664ad1190729, infoPort=38285, infoSecurePort=0, ipcPort=43141, storageInfo=lv=-57;cid=testClusterID;nsid=1785685089;c=1689592531384), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-17 11:15:31,839 INFO [Listener at localhost/43141] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:38851 2023-07-17 11:15:31,846 WARN [Listener at localhost/45113] common.MetricsLoggerTask(153): Metrics logging will not be async since 
the logger is not log4j 2023-07-17 11:15:31,887 WARN [Listener at localhost/45113] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-17 11:15:31,890 WARN [Listener at localhost/45113] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-17 11:15:31,892 INFO [Listener at localhost/45113] log.Slf4jLog(67): jetty-6.1.26 2023-07-17 11:15:31,908 INFO [Listener at localhost/45113] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9ec773e1-25e3-37c1-332c-cd1fc3805ad2/java.io.tmpdir/Jetty_localhost_37071_datanode____ac5j4d/webapp 2023-07-17 11:15:31,986 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xa710d37d59d3e3f2: Processing first storage report for DS-7f2f10bc-e0ae-418e-83a2-3d189846faa2 from datanode 0d4290b9-ba5a-4798-beb0-42d71536f5b6 2023-07-17 11:15:31,987 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xa710d37d59d3e3f2: from storage DS-7f2f10bc-e0ae-418e-83a2-3d189846faa2 node DatanodeRegistration(127.0.0.1:34553, datanodeUuid=0d4290b9-ba5a-4798-beb0-42d71536f5b6, infoPort=40113, infoSecurePort=0, ipcPort=45113, storageInfo=lv=-57;cid=testClusterID;nsid=1785685089;c=1689592531384), blocks: 0, hasStaleStorage: true, processing time: 1 msecs, invalidatedBlocks: 0 2023-07-17 11:15:31,987 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xa710d37d59d3e3f2: Processing first storage report for DS-425b2507-5725-4c7e-8f39-5db5cbc3bb08 from datanode 0d4290b9-ba5a-4798-beb0-42d71536f5b6 2023-07-17 11:15:31,987 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xa710d37d59d3e3f2: from storage DS-425b2507-5725-4c7e-8f39-5db5cbc3bb08 node DatanodeRegistration(127.0.0.1:34553, datanodeUuid=0d4290b9-ba5a-4798-beb0-42d71536f5b6, infoPort=40113, infoSecurePort=0, ipcPort=45113, storageInfo=lv=-57;cid=testClusterID;nsid=1785685089;c=1689592531384), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-17 11:15:32,026 INFO [Listener at localhost/45113] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:37071 2023-07-17 11:15:32,034 WARN [Listener at localhost/40211] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-17 11:15:32,151 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x810e75f3b8ef28bd: Processing first storage report for DS-86a49bd0-8bb8-4437-b67a-7e6e91743623 from datanode 47173409-90b9-4afb-99c9-4b078d9f2377 2023-07-17 11:15:32,151 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x810e75f3b8ef28bd: from storage DS-86a49bd0-8bb8-4437-b67a-7e6e91743623 node DatanodeRegistration(127.0.0.1:45893, datanodeUuid=47173409-90b9-4afb-99c9-4b078d9f2377, infoPort=42971, infoSecurePort=0, ipcPort=40211, storageInfo=lv=-57;cid=testClusterID;nsid=1785685089;c=1689592531384), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-17 11:15:32,151 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x810e75f3b8ef28bd: Processing first storage report 
for DS-05dc7545-80d0-4b7e-b321-a0a948a28452 from datanode 47173409-90b9-4afb-99c9-4b078d9f2377 2023-07-17 11:15:32,151 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x810e75f3b8ef28bd: from storage DS-05dc7545-80d0-4b7e-b321-a0a948a28452 node DatanodeRegistration(127.0.0.1:45893, datanodeUuid=47173409-90b9-4afb-99c9-4b078d9f2377, infoPort=42971, infoSecurePort=0, ipcPort=40211, storageInfo=lv=-57;cid=testClusterID;nsid=1785685089;c=1689592531384), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-17 11:15:32,245 DEBUG [Listener at localhost/40211] hbase.HBaseTestingUtility(649): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9ec773e1-25e3-37c1-332c-cd1fc3805ad2 2023-07-17 11:15:32,250 INFO [Listener at localhost/40211] zookeeper.MiniZooKeeperCluster(258): Started connectionTimeout=30000, dir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9ec773e1-25e3-37c1-332c-cd1fc3805ad2/cluster_b50245d1-747f-84ca-9fff-4598d59fad4e/zookeeper_0, clientPort=60132, secureClientPort=-1, dataDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9ec773e1-25e3-37c1-332c-cd1fc3805ad2/cluster_b50245d1-747f-84ca-9fff-4598d59fad4e/zookeeper_0/version-2, dataDirSize=424 dataLogDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9ec773e1-25e3-37c1-332c-cd1fc3805ad2/cluster_b50245d1-747f-84ca-9fff-4598d59fad4e/zookeeper_0/version-2, dataLogSize=424 tickTime=2000, maxClientCnxns=300, minSessionTimeout=4000, maxSessionTimeout=40000, serverId=0 2023-07-17 11:15:32,252 INFO [Listener at localhost/40211] zookeeper.MiniZooKeeperCluster(283): Started MiniZooKeeperCluster and ran 'stat' on client port=60132 2023-07-17 11:15:32,252 INFO [Listener at localhost/40211] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-17 11:15:32,253 INFO [Listener at localhost/40211] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-17 11:15:32,291 INFO [Listener at localhost/40211] util.FSUtils(471): Created version file at hdfs://localhost:35063/user/jenkins/test-data/6a8d1f47-f60c-3ded-892b-406ce2aab504 with version=8 2023-07-17 11:15:32,291 INFO [Listener at localhost/40211] hbase.HBaseTestingUtility(1408): The hbase.fs.tmp.dir is set to hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/hbase-staging 2023-07-17 11:15:32,292 DEBUG [Listener at localhost/40211] hbase.LocalHBaseCluster(134): Setting Master Port to random. 2023-07-17 11:15:32,292 DEBUG [Listener at localhost/40211] hbase.LocalHBaseCluster(141): Setting RegionServer Port to random. 2023-07-17 11:15:32,292 DEBUG [Listener at localhost/40211] hbase.LocalHBaseCluster(151): Setting RS InfoServer Port to random. 2023-07-17 11:15:32,293 DEBUG [Listener at localhost/40211] hbase.LocalHBaseCluster(159): Setting Master InfoServer Port to random. 
2023-07-17 11:15:32,294 INFO [Listener at localhost/40211] client.ConnectionUtils(127): master/jenkins-hbase4:0 server-side Connection retries=45 2023-07-17 11:15:32,294 INFO [Listener at localhost/40211] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-17 11:15:32,294 INFO [Listener at localhost/40211] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-17 11:15:32,294 INFO [Listener at localhost/40211] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-17 11:15:32,294 INFO [Listener at localhost/40211] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-17 11:15:32,295 INFO [Listener at localhost/40211] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-17 11:15:32,295 INFO [Listener at localhost/40211] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.MasterService, hbase.pb.RegionServerStatusService, hbase.pb.LockService, hbase.pb.HbckService, hbase.pb.ClientMetaService, hbase.pb.ClientService, hbase.pb.AdminService 2023-07-17 11:15:32,304 INFO [Listener at localhost/40211] ipc.NettyRpcServer(120): Bind to /172.31.14.131:39741 2023-07-17 11:15:32,305 INFO [Listener at localhost/40211] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-17 11:15:32,306 INFO [Listener at localhost/40211] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-17 11:15:32,307 INFO [Listener at localhost/40211] zookeeper.RecoverableZooKeeper(93): Process identifier=master:39741 connecting to ZooKeeper ensemble=127.0.0.1:60132 2023-07-17 11:15:32,317 DEBUG [Listener at localhost/40211-EventThread] zookeeper.ZKWatcher(600): master:397410x0, quorum=127.0.0.1:60132, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-17 11:15:32,318 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): master:39741-0x10172fe901c0000 connected 2023-07-17 11:15:32,334 DEBUG [Listener at localhost/40211] zookeeper.ZKUtil(164): master:39741-0x10172fe901c0000, quorum=127.0.0.1:60132, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-17 11:15:32,335 DEBUG [Listener at localhost/40211] zookeeper.ZKUtil(164): master:39741-0x10172fe901c0000, quorum=127.0.0.1:60132, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-17 11:15:32,335 DEBUG [Listener at localhost/40211] zookeeper.ZKUtil(164): master:39741-0x10172fe901c0000, quorum=127.0.0.1:60132, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-17 11:15:32,336 DEBUG [Listener at localhost/40211] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=39741 2023-07-17 11:15:32,336 DEBUG [Listener at localhost/40211] 
ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=39741 2023-07-17 11:15:32,338 DEBUG [Listener at localhost/40211] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=39741 2023-07-17 11:15:32,340 DEBUG [Listener at localhost/40211] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=39741 2023-07-17 11:15:32,340 DEBUG [Listener at localhost/40211] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=39741 2023-07-17 11:15:32,342 INFO [Listener at localhost/40211] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-17 11:15:32,343 INFO [Listener at localhost/40211] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-17 11:15:32,343 INFO [Listener at localhost/40211] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-17 11:15:32,343 INFO [Listener at localhost/40211] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context master 2023-07-17 11:15:32,343 INFO [Listener at localhost/40211] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-17 11:15:32,344 INFO [Listener at localhost/40211] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-17 11:15:32,344 INFO [Listener at localhost/40211] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
2023-07-17 11:15:32,344 INFO [Listener at localhost/40211] http.HttpServer(1146): Jetty bound to port 37883 2023-07-17 11:15:32,344 INFO [Listener at localhost/40211] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-17 11:15:32,348 INFO [Listener at localhost/40211] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-17 11:15:32,349 INFO [Listener at localhost/40211] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@681a326f{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9ec773e1-25e3-37c1-332c-cd1fc3805ad2/hadoop.log.dir/,AVAILABLE} 2023-07-17 11:15:32,349 INFO [Listener at localhost/40211] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-17 11:15:32,350 INFO [Listener at localhost/40211] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@68104050{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,AVAILABLE} 2023-07-17 11:15:32,361 INFO [Listener at localhost/40211] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-17 11:15:32,363 INFO [Listener at localhost/40211] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-17 11:15:32,363 INFO [Listener at localhost/40211] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-17 11:15:32,363 INFO [Listener at localhost/40211] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-17 11:15:32,365 INFO [Listener at localhost/40211] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-17 11:15:32,366 INFO [Listener at localhost/40211] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@67a95324{master,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/master/,AVAILABLE}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/master} 2023-07-17 11:15:32,368 INFO [Listener at localhost/40211] server.AbstractConnector(333): Started ServerConnector@31bc4bc7{HTTP/1.1, (http/1.1)}{0.0.0.0:37883} 2023-07-17 11:15:32,368 INFO [Listener at localhost/40211] server.Server(415): Started @34766ms 2023-07-17 11:15:32,368 INFO [Listener at localhost/40211] master.HMaster(444): hbase.rootdir=hdfs://localhost:35063/user/jenkins/test-data/6a8d1f47-f60c-3ded-892b-406ce2aab504, hbase.cluster.distributed=false 2023-07-17 11:15:32,385 INFO [Listener at localhost/40211] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-17 11:15:32,385 INFO [Listener at localhost/40211] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-17 11:15:32,385 INFO [Listener at localhost/40211] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-17 11:15:32,385 INFO [Listener at localhost/40211] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-17 
11:15:32,385 INFO [Listener at localhost/40211] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-17 11:15:32,385 INFO [Listener at localhost/40211] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-17 11:15:32,385 INFO [Listener at localhost/40211] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-17 11:15:32,387 INFO [Listener at localhost/40211] ipc.NettyRpcServer(120): Bind to /172.31.14.131:35593 2023-07-17 11:15:32,387 INFO [Listener at localhost/40211] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-17 11:15:32,388 DEBUG [Listener at localhost/40211] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-17 11:15:32,389 INFO [Listener at localhost/40211] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-17 11:15:32,390 INFO [Listener at localhost/40211] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-17 11:15:32,391 INFO [Listener at localhost/40211] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:35593 connecting to ZooKeeper ensemble=127.0.0.1:60132 2023-07-17 11:15:32,395 DEBUG [Listener at localhost/40211-EventThread] zookeeper.ZKWatcher(600): regionserver:355930x0, quorum=127.0.0.1:60132, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-17 11:15:32,396 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:35593-0x10172fe901c0001 connected 2023-07-17 11:15:32,396 DEBUG [Listener at localhost/40211] zookeeper.ZKUtil(164): regionserver:35593-0x10172fe901c0001, quorum=127.0.0.1:60132, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-17 11:15:32,397 DEBUG [Listener at localhost/40211] zookeeper.ZKUtil(164): regionserver:35593-0x10172fe901c0001, quorum=127.0.0.1:60132, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-17 11:15:32,398 DEBUG [Listener at localhost/40211] zookeeper.ZKUtil(164): regionserver:35593-0x10172fe901c0001, quorum=127.0.0.1:60132, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-17 11:15:32,398 DEBUG [Listener at localhost/40211] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=35593 2023-07-17 11:15:32,399 DEBUG [Listener at localhost/40211] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=35593 2023-07-17 11:15:32,399 DEBUG [Listener at localhost/40211] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=35593 2023-07-17 11:15:32,399 DEBUG [Listener at localhost/40211] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=35593 2023-07-17 11:15:32,400 DEBUG [Listener at localhost/40211] ipc.RpcExecutor(311): Started handlerCount=1 with 
threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=35593 2023-07-17 11:15:32,402 INFO [Listener at localhost/40211] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-17 11:15:32,402 INFO [Listener at localhost/40211] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-17 11:15:32,402 INFO [Listener at localhost/40211] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-17 11:15:32,403 INFO [Listener at localhost/40211] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-17 11:15:32,403 INFO [Listener at localhost/40211] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-17 11:15:32,403 INFO [Listener at localhost/40211] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-17 11:15:32,403 INFO [Listener at localhost/40211] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-17 11:15:32,405 INFO [Listener at localhost/40211] http.HttpServer(1146): Jetty bound to port 41999 2023-07-17 11:15:32,405 INFO [Listener at localhost/40211] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-17 11:15:32,410 INFO [Listener at localhost/40211] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-17 11:15:32,410 INFO [Listener at localhost/40211] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@28d28178{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9ec773e1-25e3-37c1-332c-cd1fc3805ad2/hadoop.log.dir/,AVAILABLE} 2023-07-17 11:15:32,410 INFO [Listener at localhost/40211] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-17 11:15:32,411 INFO [Listener at localhost/40211] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@4e470c74{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,AVAILABLE} 2023-07-17 11:15:32,417 INFO [Listener at localhost/40211] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-17 11:15:32,418 INFO [Listener at localhost/40211] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-17 11:15:32,418 INFO [Listener at localhost/40211] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-17 11:15:32,418 INFO [Listener at localhost/40211] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-17 11:15:32,420 INFO [Listener at localhost/40211] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-17 11:15:32,420 INFO [Listener at localhost/40211] handler.ContextHandler(921): Started 
o.a.h.t.o.e.j.w.WebAppContext@564ec33e{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver/,AVAILABLE}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver} 2023-07-17 11:15:32,423 INFO [Listener at localhost/40211] server.AbstractConnector(333): Started ServerConnector@636fde24{HTTP/1.1, (http/1.1)}{0.0.0.0:41999} 2023-07-17 11:15:32,423 INFO [Listener at localhost/40211] server.Server(415): Started @34822ms 2023-07-17 11:15:32,436 INFO [Listener at localhost/40211] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-17 11:15:32,437 INFO [Listener at localhost/40211] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-17 11:15:32,437 INFO [Listener at localhost/40211] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-17 11:15:32,437 INFO [Listener at localhost/40211] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-17 11:15:32,437 INFO [Listener at localhost/40211] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-17 11:15:32,437 INFO [Listener at localhost/40211] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-17 11:15:32,437 INFO [Listener at localhost/40211] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-17 11:15:32,440 INFO [Listener at localhost/40211] ipc.NettyRpcServer(120): Bind to /172.31.14.131:44865 2023-07-17 11:15:32,440 INFO [Listener at localhost/40211] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-17 11:15:32,443 DEBUG [Listener at localhost/40211] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-17 11:15:32,444 INFO [Listener at localhost/40211] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-17 11:15:32,445 INFO [Listener at localhost/40211] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-17 11:15:32,446 INFO [Listener at localhost/40211] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:44865 connecting to ZooKeeper ensemble=127.0.0.1:60132 2023-07-17 11:15:32,449 DEBUG [Listener at localhost/40211-EventThread] zookeeper.ZKWatcher(600): regionserver:448650x0, quorum=127.0.0.1:60132, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-17 11:15:32,450 DEBUG [Listener at localhost/40211] zookeeper.ZKUtil(164): regionserver:448650x0, quorum=127.0.0.1:60132, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-17 11:15:32,451 
DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:44865-0x10172fe901c0002 connected 2023-07-17 11:15:32,451 DEBUG [Listener at localhost/40211] zookeeper.ZKUtil(164): regionserver:44865-0x10172fe901c0002, quorum=127.0.0.1:60132, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-17 11:15:32,452 DEBUG [Listener at localhost/40211] zookeeper.ZKUtil(164): regionserver:44865-0x10172fe901c0002, quorum=127.0.0.1:60132, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-17 11:15:32,453 DEBUG [Listener at localhost/40211] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=44865 2023-07-17 11:15:32,453 DEBUG [Listener at localhost/40211] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=44865 2023-07-17 11:15:32,453 DEBUG [Listener at localhost/40211] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=44865 2023-07-17 11:15:32,454 DEBUG [Listener at localhost/40211] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=44865 2023-07-17 11:15:32,455 DEBUG [Listener at localhost/40211] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=44865 2023-07-17 11:15:32,457 INFO [Listener at localhost/40211] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-17 11:15:32,457 INFO [Listener at localhost/40211] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-17 11:15:32,457 INFO [Listener at localhost/40211] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-17 11:15:32,458 INFO [Listener at localhost/40211] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-17 11:15:32,458 INFO [Listener at localhost/40211] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-17 11:15:32,458 INFO [Listener at localhost/40211] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-17 11:15:32,459 INFO [Listener at localhost/40211] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
2023-07-17 11:15:32,459 INFO [Listener at localhost/40211] http.HttpServer(1146): Jetty bound to port 44377 2023-07-17 11:15:32,460 INFO [Listener at localhost/40211] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-17 11:15:32,463 INFO [Listener at localhost/40211] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-17 11:15:32,463 INFO [Listener at localhost/40211] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@e86671b{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9ec773e1-25e3-37c1-332c-cd1fc3805ad2/hadoop.log.dir/,AVAILABLE} 2023-07-17 11:15:32,464 INFO [Listener at localhost/40211] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-17 11:15:32,464 INFO [Listener at localhost/40211] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@384b9383{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,AVAILABLE} 2023-07-17 11:15:32,469 INFO [Listener at localhost/40211] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-17 11:15:32,471 INFO [Listener at localhost/40211] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-17 11:15:32,471 INFO [Listener at localhost/40211] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-17 11:15:32,471 INFO [Listener at localhost/40211] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-17 11:15:32,475 INFO [Listener at localhost/40211] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-17 11:15:32,476 INFO [Listener at localhost/40211] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@21829d82{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver/,AVAILABLE}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver} 2023-07-17 11:15:32,477 INFO [Listener at localhost/40211] server.AbstractConnector(333): Started ServerConnector@2433db5c{HTTP/1.1, (http/1.1)}{0.0.0.0:44377} 2023-07-17 11:15:32,477 INFO [Listener at localhost/40211] server.Server(415): Started @34876ms 2023-07-17 11:15:32,489 INFO [Listener at localhost/40211] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-17 11:15:32,489 INFO [Listener at localhost/40211] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-17 11:15:32,489 INFO [Listener at localhost/40211] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-17 11:15:32,489 INFO [Listener at localhost/40211] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-17 11:15:32,489 INFO [Listener at localhost/40211] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, 
maxQueueLength=30, handlerCount=3 2023-07-17 11:15:32,489 INFO [Listener at localhost/40211] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-17 11:15:32,489 INFO [Listener at localhost/40211] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-17 11:15:32,490 INFO [Listener at localhost/40211] ipc.NettyRpcServer(120): Bind to /172.31.14.131:36481 2023-07-17 11:15:32,491 INFO [Listener at localhost/40211] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-17 11:15:32,492 DEBUG [Listener at localhost/40211] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-17 11:15:32,492 INFO [Listener at localhost/40211] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-17 11:15:32,493 INFO [Listener at localhost/40211] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-17 11:15:32,494 INFO [Listener at localhost/40211] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:36481 connecting to ZooKeeper ensemble=127.0.0.1:60132 2023-07-17 11:15:32,498 DEBUG [Listener at localhost/40211-EventThread] zookeeper.ZKWatcher(600): regionserver:364810x0, quorum=127.0.0.1:60132, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-17 11:15:32,499 DEBUG [Listener at localhost/40211] zookeeper.ZKUtil(164): regionserver:364810x0, quorum=127.0.0.1:60132, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-17 11:15:32,499 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:36481-0x10172fe901c0003 connected 2023-07-17 11:15:32,500 DEBUG [Listener at localhost/40211] zookeeper.ZKUtil(164): regionserver:36481-0x10172fe901c0003, quorum=127.0.0.1:60132, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-17 11:15:32,501 DEBUG [Listener at localhost/40211] zookeeper.ZKUtil(164): regionserver:36481-0x10172fe901c0003, quorum=127.0.0.1:60132, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-17 11:15:32,503 DEBUG [Listener at localhost/40211] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=36481 2023-07-17 11:15:32,503 DEBUG [Listener at localhost/40211] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=36481 2023-07-17 11:15:32,503 DEBUG [Listener at localhost/40211] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=36481 2023-07-17 11:15:32,503 DEBUG [Listener at localhost/40211] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=36481 2023-07-17 11:15:32,504 DEBUG [Listener at localhost/40211] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=36481 2023-07-17 11:15:32,506 INFO [Listener at localhost/40211] http.HttpServer(900): Added global filter 'safety' 
(class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-17 11:15:32,506 INFO [Listener at localhost/40211] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-17 11:15:32,506 INFO [Listener at localhost/40211] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-17 11:15:32,507 INFO [Listener at localhost/40211] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-17 11:15:32,507 INFO [Listener at localhost/40211] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-17 11:15:32,507 INFO [Listener at localhost/40211] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-17 11:15:32,507 INFO [Listener at localhost/40211] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-17 11:15:32,508 INFO [Listener at localhost/40211] http.HttpServer(1146): Jetty bound to port 39875 2023-07-17 11:15:32,508 INFO [Listener at localhost/40211] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-17 11:15:32,519 INFO [Listener at localhost/40211] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-17 11:15:32,519 INFO [Listener at localhost/40211] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@507250bb{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9ec773e1-25e3-37c1-332c-cd1fc3805ad2/hadoop.log.dir/,AVAILABLE} 2023-07-17 11:15:32,520 INFO [Listener at localhost/40211] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-17 11:15:32,520 INFO [Listener at localhost/40211] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@535e2d7a{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,AVAILABLE} 2023-07-17 11:15:32,525 INFO [Listener at localhost/40211] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-17 11:15:32,526 INFO [Listener at localhost/40211] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-17 11:15:32,527 INFO [Listener at localhost/40211] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-17 11:15:32,527 INFO [Listener at localhost/40211] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-17 11:15:32,535 INFO [Listener at localhost/40211] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-17 11:15:32,536 INFO [Listener at localhost/40211] handler.ContextHandler(921): Started 
o.a.h.t.o.e.j.w.WebAppContext@52ea318a{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver/,AVAILABLE}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver} 2023-07-17 11:15:32,538 INFO [Listener at localhost/40211] server.AbstractConnector(333): Started ServerConnector@1fc4706d{HTTP/1.1, (http/1.1)}{0.0.0.0:39875} 2023-07-17 11:15:32,538 INFO [Listener at localhost/40211] server.Server(415): Started @34937ms 2023-07-17 11:15:32,540 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-17 11:15:32,551 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.AbstractConnector(333): Started ServerConnector@fb521b{HTTP/1.1, (http/1.1)}{0.0.0.0:38029} 2023-07-17 11:15:32,551 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.Server(415): Started @34950ms 2023-07-17 11:15:32,552 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2168): Adding backup master ZNode /hbase/backup-masters/jenkins-hbase4.apache.org,39741,1689592532293 2023-07-17 11:15:32,553 DEBUG [Listener at localhost/40211-EventThread] zookeeper.ZKWatcher(600): master:39741-0x10172fe901c0000, quorum=127.0.0.1:60132, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-07-17 11:15:32,554 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:39741-0x10172fe901c0000, quorum=127.0.0.1:60132, baseZNode=/hbase Set watcher on existing znode=/hbase/backup-masters/jenkins-hbase4.apache.org,39741,1689592532293 2023-07-17 11:15:32,555 DEBUG [Listener at localhost/40211-EventThread] zookeeper.ZKWatcher(600): regionserver:44865-0x10172fe901c0002, quorum=127.0.0.1:60132, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-17 11:15:32,555 DEBUG [Listener at localhost/40211-EventThread] zookeeper.ZKWatcher(600): regionserver:35593-0x10172fe901c0001, quorum=127.0.0.1:60132, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-17 11:15:32,555 DEBUG [Listener at localhost/40211-EventThread] zookeeper.ZKWatcher(600): regionserver:36481-0x10172fe901c0003, quorum=127.0.0.1:60132, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-17 11:15:32,555 DEBUG [Listener at localhost/40211-EventThread] zookeeper.ZKWatcher(600): master:39741-0x10172fe901c0000, quorum=127.0.0.1:60132, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-17 11:15:32,557 DEBUG [Listener at localhost/40211-EventThread] zookeeper.ZKWatcher(600): master:39741-0x10172fe901c0000, quorum=127.0.0.1:60132, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-17 11:15:32,558 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:39741-0x10172fe901c0000, quorum=127.0.0.1:60132, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-17 11:15:32,560 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ActiveMasterManager(227): Deleting ZNode for /hbase/backup-masters/jenkins-hbase4.apache.org,39741,1689592532293 from backup master directory 2023-07-17 
11:15:32,560 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): master:39741-0x10172fe901c0000, quorum=127.0.0.1:60132, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-17 11:15:32,561 DEBUG [Listener at localhost/40211-EventThread] zookeeper.ZKWatcher(600): master:39741-0x10172fe901c0000, quorum=127.0.0.1:60132, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/backup-masters/jenkins-hbase4.apache.org,39741,1689592532293 2023-07-17 11:15:32,561 WARN [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-17 11:15:32,561 DEBUG [Listener at localhost/40211-EventThread] zookeeper.ZKWatcher(600): master:39741-0x10172fe901c0000, quorum=127.0.0.1:60132, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-07-17 11:15:32,561 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ActiveMasterManager(237): Registered as active master=jenkins-hbase4.apache.org,39741,1689592532293 2023-07-17 11:15:32,583 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] util.FSUtils(620): Created cluster ID file at hdfs://localhost:35063/user/jenkins/test-data/6a8d1f47-f60c-3ded-892b-406ce2aab504/hbase.id with ID: 69f5593a-4d04-491c-acaa-dff613d93b2f 2023-07-17 11:15:32,598 INFO [master/jenkins-hbase4:0:becomeActiveMaster] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-17 11:15:32,602 DEBUG [Listener at localhost/40211-EventThread] zookeeper.ZKWatcher(600): master:39741-0x10172fe901c0000, quorum=127.0.0.1:60132, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-17 11:15:32,616 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ReadOnlyZKClient(139): Connect 0x71d6e237 to 127.0.0.1:60132 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-17 11:15:32,620 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@4921d063, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-17 11:15:32,620 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(309): Create or load local region for table 'master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-17 11:15:32,620 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(132): Injected flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000 2023-07-17 11:15:32,621 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-17 11:15:32,622 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7693): Creating {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, 
tableDescriptor='master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, under table dir hdfs://localhost:35063/user/jenkins/test-data/6a8d1f47-f60c-3ded-892b-406ce2aab504/MasterData/data/master/store-tmp 2023-07-17 11:15:32,637 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-17 11:15:32,637 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-07-17 11:15:32,637 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-17 11:15:32,637 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-17 11:15:32,637 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-07-17 11:15:32,637 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-17 11:15:32,637 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
2023-07-17 11:15:32,637 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-17 11:15:32,638 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(191): WALDir=hdfs://localhost:35063/user/jenkins/test-data/6a8d1f47-f60c-3ded-892b-406ce2aab504/MasterData/WALs/jenkins-hbase4.apache.org,39741,1689592532293 2023-07-17 11:15:32,641 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C39741%2C1689592532293, suffix=, logDir=hdfs://localhost:35063/user/jenkins/test-data/6a8d1f47-f60c-3ded-892b-406ce2aab504/MasterData/WALs/jenkins-hbase4.apache.org,39741,1689592532293, archiveDir=hdfs://localhost:35063/user/jenkins/test-data/6a8d1f47-f60c-3ded-892b-406ce2aab504/MasterData/oldWALs, maxLogs=10 2023-07-17 11:15:32,658 DEBUG [RS-EventLoopGroup-11-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:34553,DS-7f2f10bc-e0ae-418e-83a2-3d189846faa2,DISK] 2023-07-17 11:15:32,659 DEBUG [RS-EventLoopGroup-11-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:45893,DS-86a49bd0-8bb8-4437-b67a-7e6e91743623,DISK] 2023-07-17 11:15:32,660 DEBUG [RS-EventLoopGroup-11-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:40397,DS-b2123e07-7285-417a-a73e-7dfcf35893e7,DISK] 2023-07-17 11:15:32,668 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/6a8d1f47-f60c-3ded-892b-406ce2aab504/MasterData/WALs/jenkins-hbase4.apache.org,39741,1689592532293/jenkins-hbase4.apache.org%2C39741%2C1689592532293.1689592532642 2023-07-17 11:15:32,668 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:34553,DS-7f2f10bc-e0ae-418e-83a2-3d189846faa2,DISK], DatanodeInfoWithStorage[127.0.0.1:45893,DS-86a49bd0-8bb8-4437-b67a-7e6e91743623,DISK], DatanodeInfoWithStorage[127.0.0.1:40397,DS-b2123e07-7285-417a-a73e-7dfcf35893e7,DISK]] 2023-07-17 11:15:32,668 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7854): Opening region: {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''} 2023-07-17 11:15:32,669 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-17 11:15:32,669 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7894): checking encryption for 1595e783b53d99cd5eef43b6debb2682 2023-07-17 11:15:32,669 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7897): checking classloading for 1595e783b53d99cd5eef43b6debb2682 2023-07-17 11:15:32,673 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, 
cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family proc of region 1595e783b53d99cd5eef43b6debb2682 2023-07-17 11:15:32,675 DEBUG [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:35063/user/jenkins/test-data/6a8d1f47-f60c-3ded-892b-406ce2aab504/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc 2023-07-17 11:15:32,675 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1595e783b53d99cd5eef43b6debb2682 columnFamilyName proc 2023-07-17 11:15:32,676 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(310): Store=1595e783b53d99cd5eef43b6debb2682/proc, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-17 11:15:32,677 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:35063/user/jenkins/test-data/6a8d1f47-f60c-3ded-892b-406ce2aab504/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-07-17 11:15:32,677 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:35063/user/jenkins/test-data/6a8d1f47-f60c-3ded-892b-406ce2aab504/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-07-17 11:15:32,680 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1055): writing seq id for 1595e783b53d99cd5eef43b6debb2682 2023-07-17 11:15:32,683 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:35063/user/jenkins/test-data/6a8d1f47-f60c-3ded-892b-406ce2aab504/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-17 11:15:32,688 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1072): Opened 1595e783b53d99cd5eef43b6debb2682; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10446781600, jitterRate=-0.027067646384239197}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-17 11:15:32,688 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(965): Region open journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-17 11:15:32,689 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(122): Constructor flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000, compactMin=4 2023-07-17 11:15:32,690 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(104): Starting the Region Procedure Store, number threads=5 2023-07-17 11:15:32,690 INFO 
[master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(562): Starting 5 core workers (bigger of cpus/4 or 16) with max (burst) worker count=50 2023-07-17 11:15:32,690 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(255): Starting Region Procedure Store lease recovery... 2023-07-17 11:15:32,691 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(582): Recovered RegionProcedureStore lease in 0 msec 2023-07-17 11:15:32,691 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(596): Loaded RegionProcedureStore in 0 msec 2023-07-17 11:15:32,691 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.RemoteProcedureDispatcher(96): Instantiated, coreThreads=3 (allowCoreThreadTimeOut=true), queueMaxSize=32, operationDelay=150 2023-07-17 11:15:32,692 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(253): hbase:meta replica znodes: [] 2023-07-17 11:15:32,693 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.RegionServerTracker(124): Starting RegionServerTracker; 0 have existing ServerCrashProcedures, 0 possibly 'live' servers, and 0 'splitting'. 2023-07-17 11:15:32,694 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:39741-0x10172fe901c0000, quorum=127.0.0.1:60132, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/balancer 2023-07-17 11:15:32,694 INFO [master/jenkins-hbase4:0:becomeActiveMaster] normalizer.RegionNormalizerWorker(118): Normalizer rate limit set to unlimited 2023-07-17 11:15:32,695 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:39741-0x10172fe901c0000, quorum=127.0.0.1:60132, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/normalizer 2023-07-17 11:15:32,697 DEBUG [Listener at localhost/40211-EventThread] zookeeper.ZKWatcher(600): master:39741-0x10172fe901c0000, quorum=127.0.0.1:60132, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-17 11:15:32,697 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:39741-0x10172fe901c0000, quorum=127.0.0.1:60132, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/split 2023-07-17 11:15:32,698 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:39741-0x10172fe901c0000, quorum=127.0.0.1:60132, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/merge 2023-07-17 11:15:32,701 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:39741-0x10172fe901c0000, quorum=127.0.0.1:60132, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/snapshot-cleanup 2023-07-17 11:15:32,702 DEBUG [Listener at localhost/40211-EventThread] zookeeper.ZKWatcher(600): regionserver:36481-0x10172fe901c0003, quorum=127.0.0.1:60132, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-17 11:15:32,702 DEBUG [Listener at localhost/40211-EventThread] zookeeper.ZKWatcher(600): regionserver:35593-0x10172fe901c0001, quorum=127.0.0.1:60132, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-17 11:15:32,702 DEBUG [Listener at localhost/40211-EventThread] zookeeper.ZKWatcher(600): regionserver:44865-0x10172fe901c0002, quorum=127.0.0.1:60132, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, 
path=/hbase/running 2023-07-17 11:15:32,702 DEBUG [Listener at localhost/40211-EventThread] zookeeper.ZKWatcher(600): master:39741-0x10172fe901c0000, quorum=127.0.0.1:60132, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-17 11:15:32,702 DEBUG [Listener at localhost/40211-EventThread] zookeeper.ZKWatcher(600): master:39741-0x10172fe901c0000, quorum=127.0.0.1:60132, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-17 11:15:32,705 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(744): Active/primary master=jenkins-hbase4.apache.org,39741,1689592532293, sessionid=0x10172fe901c0000, setting cluster-up flag (Was=false) 2023-07-17 11:15:32,712 DEBUG [Listener at localhost/40211-EventThread] zookeeper.ZKWatcher(600): master:39741-0x10172fe901c0000, quorum=127.0.0.1:60132, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-17 11:15:32,718 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/flush-table-proc/acquired, /hbase/flush-table-proc/reached, /hbase/flush-table-proc/abort 2023-07-17 11:15:32,719 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,39741,1689592532293 2023-07-17 11:15:32,722 DEBUG [Listener at localhost/40211-EventThread] zookeeper.ZKWatcher(600): master:39741-0x10172fe901c0000, quorum=127.0.0.1:60132, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-17 11:15:32,727 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/online-snapshot/acquired, /hbase/online-snapshot/reached, /hbase/online-snapshot/abort 2023-07-17 11:15:32,728 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,39741,1689592532293 2023-07-17 11:15:32,729 WARN [master/jenkins-hbase4:0:becomeActiveMaster] snapshot.SnapshotManager(302): Couldn't delete working snapshot directory: hdfs://localhost:35063/user/jenkins/test-data/6a8d1f47-f60c-3ded-892b-406ce2aab504/.hbase-snapshot/.tmp 2023-07-17 11:15:32,732 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2930): Registered master coprocessor service: service=RSGroupAdminService 2023-07-17 11:15:32,732 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] rsgroup.RSGroupInfoManagerImpl(537): Refreshing in Offline mode. 2023-07-17 11:15:32,733 INFO [master/jenkins-hbase4:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint loaded, priority=536870911. 2023-07-17 11:15:32,734 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,39741,1689592532293] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-17 11:15:32,734 INFO [master/jenkins-hbase4:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver loaded, priority=536870912. 
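Annotation: the RSGroupAdminEndpoint load recorded above is driven purely by configuration; a minimal sketch of the two settings that produce it on an HBase 2.4 master (illustrative fragment, not taken from this test's setup code):
  Configuration conf = HBaseConfiguration.create();
  // Master coprocessor backing the RSGroupAdminService registered in the log above.
  conf.set("hbase.coprocessor.master.classes",
      "org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint");
  // Group-aware balancer; without it rsgroup boundaries are not honored during assignment.
  conf.set("hbase.master.loadbalancer.class",
      "org.apache.hadoop.hbase.rsgroup.RSGroupBasedLoadBalancer");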
2023-07-17 11:15:32,734 INFO [master/jenkins-hbase4:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.quotas.MasterQuotasObserver loaded, priority=536870913. 2023-07-17 11:15:32,739 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT; InitMetaProcedure table=hbase:meta 2023-07-17 11:15:32,740 INFO [RS:0;jenkins-hbase4:35593] regionserver.HRegionServer(951): ClusterId : 69f5593a-4d04-491c-acaa-dff613d93b2f 2023-07-17 11:15:32,740 DEBUG [RS:0;jenkins-hbase4:35593] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-17 11:15:32,741 DEBUG [RS:0;jenkins-hbase4:35593] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-17 11:15:32,741 DEBUG [RS:0;jenkins-hbase4:35593] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-17 11:15:32,743 INFO [RS:1;jenkins-hbase4:44865] regionserver.HRegionServer(951): ClusterId : 69f5593a-4d04-491c-acaa-dff613d93b2f 2023-07-17 11:15:32,743 DEBUG [RS:1;jenkins-hbase4:44865] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-17 11:15:32,744 DEBUG [RS:0;jenkins-hbase4:35593] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-17 11:15:32,746 DEBUG [RS:1;jenkins-hbase4:44865] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-17 11:15:32,746 DEBUG [RS:1;jenkins-hbase4:44865] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-17 11:15:32,749 DEBUG [RS:0;jenkins-hbase4:35593] zookeeper.ReadOnlyZKClient(139): Connect 0x39cce16e to 127.0.0.1:60132 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-17 11:15:32,750 INFO [RS:2;jenkins-hbase4:36481] regionserver.HRegionServer(951): ClusterId : 69f5593a-4d04-491c-acaa-dff613d93b2f 2023-07-17 11:15:32,750 DEBUG [RS:2;jenkins-hbase4:36481] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-17 11:15:32,750 DEBUG [RS:1;jenkins-hbase4:44865] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-17 11:15:32,752 DEBUG [RS:2;jenkins-hbase4:36481] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-17 11:15:32,752 DEBUG [RS:2;jenkins-hbase4:36481] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-17 11:15:32,753 DEBUG [RS:2;jenkins-hbase4:36481] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-17 11:15:32,755 DEBUG [RS:1;jenkins-hbase4:44865] zookeeper.ReadOnlyZKClient(139): Connect 0x491dfe2d to 127.0.0.1:60132 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-17 11:15:32,755 DEBUG [RS:2;jenkins-hbase4:36481] zookeeper.ReadOnlyZKClient(139): Connect 0x76c3fb05 to 127.0.0.1:60132 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-17 11:15:32,761 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-17 11:15:32,782 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, 
isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 2023-07-17 11:15:32,782 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-17 11:15:32,785 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 2023-07-17 11:15:32,786 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_OPEN_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-17 11:15:32,786 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_CLOSE_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-17 11:15:32,786 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-17 11:15:32,786 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_META_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-17 11:15:32,786 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=M_LOG_REPLAY_OPS-master/jenkins-hbase4:0, corePoolSize=10, maxPoolSize=10 2023-07-17 11:15:32,787 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-17 11:15:32,787 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_MERGE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-17 11:15:32,787 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_TABLE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-17 11:15:32,809 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner; timeout=30000, timestamp=1689592562809 2023-07-17 11:15:32,810 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): log_cleaner Cleaner pool size is 1 2023-07-17 11:15:32,810 DEBUG [RS:1;jenkins-hbase4:44865] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@47da197e, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, 
fallbackAllowed=false, bind address=null 2023-07-17 11:15:32,811 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner 2023-07-17 11:15:32,811 DEBUG [RS:0;jenkins-hbase4:35593] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@33348a1e, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-17 11:15:32,811 DEBUG [RS:1;jenkins-hbase4:44865] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@43e5955d, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-17 11:15:32,810 DEBUG [RS:2;jenkins-hbase4:36481] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@78041bad, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-17 11:15:32,811 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner 2023-07-17 11:15:32,811 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreWALCleaner 2023-07-17 11:15:32,811 DEBUG [RS:2;jenkins-hbase4:36481] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@7c3b9075, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-17 11:15:32,811 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveProcedureWALCleaner 2023-07-17 11:15:32,811 DEBUG [RS:0;jenkins-hbase4:35593] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@7636121c, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-17 11:15:32,811 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.LogCleaner(148): Creating 1 old WALs cleaner threads 2023-07-17 11:15:32,812 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT, locked=true; InitMetaProcedure table=hbase:meta 2023-07-17 11:15:32,812 INFO [PEWorker-1] procedure.InitMetaProcedure(71): BOOTSTRAP: creating hbase:meta region 2023-07-17 11:15:32,813 INFO [PEWorker-1] util.FSTableDescriptors(128): Creating new hbase:meta table descriptor 'hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', 
BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-07-17 11:15:32,814 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=LogsCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-07-17 11:15:32,819 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): hfile_cleaner Cleaner pool size is 2 2023-07-17 11:15:32,819 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreHFileCleaner 2023-07-17 11:15:32,819 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.HFileLinkCleaner 2023-07-17 11:15:32,821 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.snapshot.SnapshotHFileCleaner 2023-07-17 11:15:32,821 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner 2023-07-17 11:15:32,822 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(242): Starting for large file=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1689592532821,5,FailOnTimeoutGroup] 2023-07-17 11:15:32,826 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(257): Starting for small files=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1689592532825,5,FailOnTimeoutGroup] 2023-07-17 11:15:32,827 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HFileCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-07-17 11:15:32,833 DEBUG [RS:0;jenkins-hbase4:35593] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:0;jenkins-hbase4:35593 2023-07-17 11:15:32,833 INFO [RS:0;jenkins-hbase4:35593] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-17 11:15:32,833 INFO [RS:0;jenkins-hbase4:35593] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-17 11:15:32,833 DEBUG [RS:0;jenkins-hbase4:35593] regionserver.HRegionServer(1022): About to register with Master. 
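Annotation: the hbase:meta descriptor printed above is the declarative TABLE_ATTRIBUTES form; the equivalent client-side builder API looks roughly like the sketch below (the table name "demo" and the choice to mirror only the 'info' family are illustrative, not part of this run):
  import org.apache.hadoop.hbase.TableName;
  import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
  import org.apache.hadoop.hbase.client.TableDescriptor;
  import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
  import org.apache.hadoop.hbase.regionserver.BloomType;
  import org.apache.hadoop.hbase.util.Bytes;

  public class DescriptorSketch {
    public static void main(String[] args) {
      // Mirrors the 'info' family attributes logged above: in-memory, 3 versions,
      // no bloom filter, 8 KB blocks. "demo" is a hypothetical table, not hbase:meta.
      TableDescriptor td = TableDescriptorBuilder.newBuilder(TableName.valueOf("demo"))
          .setColumnFamily(ColumnFamilyDescriptorBuilder.newBuilder(Bytes.toBytes("info"))
              .setInMemory(true)
              .setMaxVersions(3)
              .setBloomFilterType(BloomType.NONE)
              .setBlocksize(8192)
              .build())
          .build();
      System.out.println(td);
    }
  }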
2023-07-17 11:15:32,834 INFO [RS:0;jenkins-hbase4:35593] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,39741,1689592532293 with isa=jenkins-hbase4.apache.org/172.31.14.131:35593, startcode=1689592532384 2023-07-17 11:15:32,834 DEBUG [RS:0;jenkins-hbase4:35593] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-17 11:15:32,835 DEBUG [RS:1;jenkins-hbase4:44865] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:1;jenkins-hbase4:44865 2023-07-17 11:15:32,835 INFO [RS:1;jenkins-hbase4:44865] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-17 11:15:32,835 INFO [RS:1;jenkins-hbase4:44865] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-17 11:15:32,835 DEBUG [RS:1;jenkins-hbase4:44865] regionserver.HRegionServer(1022): About to register with Master. 2023-07-17 11:15:32,835 INFO [RS:1;jenkins-hbase4:44865] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,39741,1689592532293 with isa=jenkins-hbase4.apache.org/172.31.14.131:44865, startcode=1689592532436 2023-07-17 11:15:32,836 DEBUG [RS:1;jenkins-hbase4:44865] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-17 11:15:32,837 DEBUG [RS:2;jenkins-hbase4:36481] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:2;jenkins-hbase4:36481 2023-07-17 11:15:32,837 INFO [RS:2;jenkins-hbase4:36481] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-17 11:15:32,837 INFO [RS:2;jenkins-hbase4:36481] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-17 11:15:32,838 DEBUG [RS:2;jenkins-hbase4:36481] regionserver.HRegionServer(1022): About to register with Master. 2023-07-17 11:15:32,838 INFO [RS:2;jenkins-hbase4:36481] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,39741,1689592532293 with isa=jenkins-hbase4.apache.org/172.31.14.131:36481, startcode=1689592532488 2023-07-17 11:15:32,838 DEBUG [RS:2;jenkins-hbase4:36481] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-17 11:15:32,842 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1461): Reopening regions with very high storeFileRefCount is disabled. Provide threshold value > 0 for hbase.regions.recovery.store.file.ref.count to enable it. 2023-07-17 11:15:32,843 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=ReplicationBarrierCleaner, period=43200000, unit=MILLISECONDS is enabled. 2023-07-17 11:15:32,843 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=SnapshotCleaner, period=1800000, unit=MILLISECONDS is enabled. 
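Annotation: the "Reopening regions with very high storeFileRefCount is disabled" message above names the switch itself; enabling it is a single property (fragment; the threshold value 3 is an arbitrary example):
  // Any value > 0 enables the store-file-ref-count recovery described in the log line above.
  conf.setInt("hbase.regions.recovery.store.file.ref.count", 3);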
2023-07-17 11:15:32,845 INFO [RS-EventLoopGroup-8-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:46209, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.5 (auth:SIMPLE), service=RegionServerStatusService 2023-07-17 11:15:32,846 INFO [RS-EventLoopGroup-8-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:41817, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.6 (auth:SIMPLE), service=RegionServerStatusService 2023-07-17 11:15:32,847 INFO [RS-EventLoopGroup-8-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:33407, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.4 (auth:SIMPLE), service=RegionServerStatusService 2023-07-17 11:15:32,851 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=39741] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,44865,1689592532436 2023-07-17 11:15:32,851 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,39741,1689592532293] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-17 11:15:32,852 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,39741,1689592532293] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 1 2023-07-17 11:15:32,853 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=39741] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,36481,1689592532488 2023-07-17 11:15:32,853 DEBUG [RS:1;jenkins-hbase4:44865] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:35063/user/jenkins/test-data/6a8d1f47-f60c-3ded-892b-406ce2aab504 2023-07-17 11:15:32,853 DEBUG [RS:1;jenkins-hbase4:44865] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:35063 2023-07-17 11:15:32,853 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,39741,1689592532293] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-17 11:15:32,853 DEBUG [RS:1;jenkins-hbase4:44865] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=37883 2023-07-17 11:15:32,854 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,39741,1689592532293] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 2 2023-07-17 11:15:32,854 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=39741] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,35593,1689592532384 2023-07-17 11:15:32,854 DEBUG [RS:2;jenkins-hbase4:36481] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:35063/user/jenkins/test-data/6a8d1f47-f60c-3ded-892b-406ce2aab504 2023-07-17 11:15:32,854 DEBUG [RS:2;jenkins-hbase4:36481] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:35063 2023-07-17 11:15:32,854 DEBUG [RS:2;jenkins-hbase4:36481] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=37883 2023-07-17 11:15:32,854 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,39741,1689592532293] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
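Annotation: once the region servers above finish registering, their group membership can be inspected through the rsgroup client API served by the loaded RSGroupAdminEndpoint; a sketch using org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient (fragment; assumes the hbase-rsgroup client classes are on the classpath and that conf points at this cluster):
  try (Connection conn = ConnectionFactory.createConnection(conf)) {
    RSGroupAdmin groups = new RSGroupAdminClient(conn);
    // Freshly registered servers land in the 'default' group, matching "Updated with servers: 3" above.
    RSGroupInfo def = groups.getRSGroupInfo(RSGroupInfo.DEFAULT_GROUP);
    System.out.println("default group servers: " + def.getServers());
  }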
2023-07-17 11:15:32,854 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,39741,1689592532293] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 3 2023-07-17 11:15:32,854 DEBUG [RS:0;jenkins-hbase4:35593] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:35063/user/jenkins/test-data/6a8d1f47-f60c-3ded-892b-406ce2aab504 2023-07-17 11:15:32,854 DEBUG [RS:0;jenkins-hbase4:35593] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:35063 2023-07-17 11:15:32,854 DEBUG [RS:0;jenkins-hbase4:35593] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=37883 2023-07-17 11:15:32,855 DEBUG [Listener at localhost/40211-EventThread] zookeeper.ZKWatcher(600): master:39741-0x10172fe901c0000, quorum=127.0.0.1:60132, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-17 11:15:32,861 DEBUG [RS:2;jenkins-hbase4:36481] zookeeper.ZKUtil(162): regionserver:36481-0x10172fe901c0003, quorum=127.0.0.1:60132, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,36481,1689592532488 2023-07-17 11:15:32,861 WARN [RS:2;jenkins-hbase4:36481] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-17 11:15:32,861 DEBUG [RS:1;jenkins-hbase4:44865] zookeeper.ZKUtil(162): regionserver:44865-0x10172fe901c0002, quorum=127.0.0.1:60132, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,44865,1689592532436 2023-07-17 11:15:32,862 INFO [RS:2;jenkins-hbase4:36481] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-17 11:15:32,862 WARN [RS:1;jenkins-hbase4:44865] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-17 11:15:32,862 DEBUG [RS:0;jenkins-hbase4:35593] zookeeper.ZKUtil(162): regionserver:35593-0x10172fe901c0001, quorum=127.0.0.1:60132, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,35593,1689592532384 2023-07-17 11:15:32,862 INFO [RS:1;jenkins-hbase4:44865] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-17 11:15:32,862 DEBUG [RS:2;jenkins-hbase4:36481] regionserver.HRegionServer(1948): logDir=hdfs://localhost:35063/user/jenkins/test-data/6a8d1f47-f60c-3ded-892b-406ce2aab504/WALs/jenkins-hbase4.apache.org,36481,1689592532488 2023-07-17 11:15:32,862 DEBUG [RS:1;jenkins-hbase4:44865] regionserver.HRegionServer(1948): logDir=hdfs://localhost:35063/user/jenkins/test-data/6a8d1f47-f60c-3ded-892b-406ce2aab504/WALs/jenkins-hbase4.apache.org,44865,1689592532436 2023-07-17 11:15:32,862 WARN [RS:0;jenkins-hbase4:35593] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
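Annotation: the AsyncFSWALProvider instantiations above are selected by the hbase.wal.provider setting; a fragment showing the two common values ("asyncfs" is what this run uses):
  conf.set("hbase.wal.provider", "asyncfs");       // AsyncFSWALProvider, as instantiated in the log above
  // conf.set("hbase.wal.provider", "filesystem"); // classic FSHLog-based provider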
2023-07-17 11:15:32,862 INFO [RS:0;jenkins-hbase4:35593] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-17 11:15:32,862 DEBUG [RS:0;jenkins-hbase4:35593] regionserver.HRegionServer(1948): logDir=hdfs://localhost:35063/user/jenkins/test-data/6a8d1f47-f60c-3ded-892b-406ce2aab504/WALs/jenkins-hbase4.apache.org,35593,1689592532384 2023-07-17 11:15:32,863 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,35593,1689592532384] 2023-07-17 11:15:32,863 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,36481,1689592532488] 2023-07-17 11:15:32,863 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,44865,1689592532436] 2023-07-17 11:15:32,873 DEBUG [RS:2;jenkins-hbase4:36481] zookeeper.ZKUtil(162): regionserver:36481-0x10172fe901c0003, quorum=127.0.0.1:60132, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,35593,1689592532384 2023-07-17 11:15:32,873 DEBUG [RS:1;jenkins-hbase4:44865] zookeeper.ZKUtil(162): regionserver:44865-0x10172fe901c0002, quorum=127.0.0.1:60132, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,35593,1689592532384 2023-07-17 11:15:32,874 DEBUG [RS:2;jenkins-hbase4:36481] zookeeper.ZKUtil(162): regionserver:36481-0x10172fe901c0003, quorum=127.0.0.1:60132, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,44865,1689592532436 2023-07-17 11:15:32,874 DEBUG [RS:0;jenkins-hbase4:35593] zookeeper.ZKUtil(162): regionserver:35593-0x10172fe901c0001, quorum=127.0.0.1:60132, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,35593,1689592532384 2023-07-17 11:15:32,874 DEBUG [RS:1;jenkins-hbase4:44865] zookeeper.ZKUtil(162): regionserver:44865-0x10172fe901c0002, quorum=127.0.0.1:60132, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,44865,1689592532436 2023-07-17 11:15:32,874 DEBUG [RS:2;jenkins-hbase4:36481] zookeeper.ZKUtil(162): regionserver:36481-0x10172fe901c0003, quorum=127.0.0.1:60132, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,36481,1689592532488 2023-07-17 11:15:32,875 DEBUG [RS:0;jenkins-hbase4:35593] zookeeper.ZKUtil(162): regionserver:35593-0x10172fe901c0001, quorum=127.0.0.1:60132, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,44865,1689592532436 2023-07-17 11:15:32,875 DEBUG [RS:0;jenkins-hbase4:35593] zookeeper.ZKUtil(162): regionserver:35593-0x10172fe901c0001, quorum=127.0.0.1:60132, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,36481,1689592532488 2023-07-17 11:15:32,875 DEBUG [RS:1;jenkins-hbase4:44865] zookeeper.ZKUtil(162): regionserver:44865-0x10172fe901c0002, quorum=127.0.0.1:60132, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,36481,1689592532488 2023-07-17 11:15:32,876 DEBUG [RS:2;jenkins-hbase4:36481] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-17 11:15:32,876 INFO [RS:2;jenkins-hbase4:36481] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-17 11:15:32,876 DEBUG [RS:0;jenkins-hbase4:35593] regionserver.Replication(139): Replication stats-in-log 
period=300 seconds 2023-07-17 11:15:32,877 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost:35063/user/jenkins/test-data/6a8d1f47-f60c-3ded-892b-406ce2aab504/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-07-17 11:15:32,877 DEBUG [RS:1;jenkins-hbase4:44865] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-17 11:15:32,877 INFO [PEWorker-1] util.FSTableDescriptors(135): Updated hbase:meta table descriptor to hdfs://localhost:35063/user/jenkins/test-data/6a8d1f47-f60c-3ded-892b-406ce2aab504/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-07-17 11:15:32,878 INFO [PEWorker-1] regionserver.HRegion(7675): creating {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:35063/user/jenkins/test-data/6a8d1f47-f60c-3ded-892b-406ce2aab504 2023-07-17 11:15:32,878 INFO [RS:1;jenkins-hbase4:44865] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-17 11:15:32,878 INFO [RS:0;jenkins-hbase4:35593] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-17 11:15:32,878 INFO [RS:2;jenkins-hbase4:36481] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-17 11:15:32,879 INFO [RS:2;jenkins-hbase4:36481] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-17 11:15:32,879 INFO [RS:2;jenkins-hbase4:36481] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 
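Annotation: the MemStoreFlusher lines above derive globalMemStoreLimit (782.4 M here) from the heap fraction configured by hbase.regionserver.global.memstore.size; a fragment showing the knob (0.4 is the default fraction, not a recommendation):
  conf.setFloat("hbase.regionserver.global.memstore.size", 0.4f);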
2023-07-17 11:15:32,879 INFO [RS:2;jenkins-hbase4:36481] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-17 11:15:32,882 INFO [RS:1;jenkins-hbase4:44865] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-17 11:15:32,885 INFO [RS:0;jenkins-hbase4:35593] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-17 11:15:32,885 INFO [RS:1;jenkins-hbase4:44865] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-17 11:15:32,886 INFO [RS:0;jenkins-hbase4:35593] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-17 11:15:32,886 INFO [RS:1;jenkins-hbase4:44865] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-17 11:15:32,886 INFO [RS:0;jenkins-hbase4:35593] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-17 11:15:32,886 INFO [RS:1;jenkins-hbase4:44865] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-17 11:15:32,887 INFO [RS:0;jenkins-hbase4:35593] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-17 11:15:32,887 INFO [RS:2;jenkins-hbase4:36481] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-17 11:15:32,887 DEBUG [RS:2;jenkins-hbase4:36481] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-17 11:15:32,887 DEBUG [RS:2;jenkins-hbase4:36481] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-17 11:15:32,887 DEBUG [RS:2;jenkins-hbase4:36481] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-17 11:15:32,887 DEBUG [RS:2;jenkins-hbase4:36481] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-17 11:15:32,888 DEBUG [RS:2;jenkins-hbase4:36481] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-17 11:15:32,888 DEBUG [RS:2;jenkins-hbase4:36481] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-17 11:15:32,888 INFO [RS:1;jenkins-hbase4:44865] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
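Annotation: the compaction throughput bounds echoed above (higher 100 MB/s, lower 50 MB/s) correspond to the keys read by PressureAwareCompactionThroughputController; a fragment under the assumption that these are the key names in this branch, with the values simply restating what the log prints:
  conf.setLong("hbase.hstore.compaction.throughput.higher.bound", 100L * 1024 * 1024); // 100 MB/s
  conf.setLong("hbase.hstore.compaction.throughput.lower.bound", 50L * 1024 * 1024);   // 50 MB/s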
2023-07-17 11:15:32,888 DEBUG [RS:2;jenkins-hbase4:36481] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-17 11:15:32,889 DEBUG [RS:1;jenkins-hbase4:44865] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-17 11:15:32,889 DEBUG [RS:2;jenkins-hbase4:36481] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-17 11:15:32,889 DEBUG [RS:1;jenkins-hbase4:44865] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-17 11:15:32,889 DEBUG [RS:2;jenkins-hbase4:36481] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-17 11:15:32,889 DEBUG [RS:1;jenkins-hbase4:44865] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-17 11:15:32,889 DEBUG [RS:2;jenkins-hbase4:36481] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-17 11:15:32,889 DEBUG [RS:1;jenkins-hbase4:44865] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-17 11:15:32,889 DEBUG [RS:1;jenkins-hbase4:44865] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-17 11:15:32,889 DEBUG [RS:1;jenkins-hbase4:44865] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-17 11:15:32,889 DEBUG [RS:1;jenkins-hbase4:44865] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-17 11:15:32,889 DEBUG [RS:1;jenkins-hbase4:44865] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-17 11:15:32,889 DEBUG [RS:1;jenkins-hbase4:44865] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-17 11:15:32,889 DEBUG [RS:1;jenkins-hbase4:44865] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-17 11:15:32,891 INFO [RS:2;jenkins-hbase4:36481] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-17 11:15:32,891 INFO [RS:2;jenkins-hbase4:36481] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-17 11:15:32,892 INFO [RS:2;jenkins-hbase4:36481] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-17 11:15:32,892 INFO [RS:2;jenkins-hbase4:36481] hbase.ChoreService(166): Chore ScheduledChore name=FileSystemUtilizationChore, period=300000, unit=MILLISECONDS is enabled. 
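Annotation: the many "ScheduledChore ... is enabled" registrations above all go through ChoreService; a minimal self-contained sketch of scheduling one chore (class and chore names are illustrative, not from HBase):
  import org.apache.hadoop.hbase.ChoreService;
  import org.apache.hadoop.hbase.ScheduledChore;
  import org.apache.hadoop.hbase.Stoppable;

  public class ChoreSketch {
    public static void main(String[] args) throws InterruptedException {
      Stoppable stopper = new Stoppable() {
        private volatile boolean stopped;
        @Override public void stop(String why) { stopped = true; }
        @Override public boolean isStopped() { return stopped; }
      };
      ChoreService service = new ChoreService("demo");
      // Runs every 1000 ms until the stopper is stopped or the chore is cancelled.
      service.scheduleChore(new ScheduledChore("DemoChore", stopper, 1000) {
        @Override protected void chore() { System.out.println("chore tick"); }
      });
      Thread.sleep(3000);
      service.shutdown();
    }
  }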
2023-07-17 11:15:32,894 INFO [RS:1;jenkins-hbase4:44865] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-17 11:15:32,894 INFO [RS:0;jenkins-hbase4:35593] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-17 11:15:32,894 INFO [RS:1;jenkins-hbase4:44865] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-17 11:15:32,894 INFO [RS:1;jenkins-hbase4:44865] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-17 11:15:32,894 DEBUG [RS:0;jenkins-hbase4:35593] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-17 11:15:32,894 INFO [RS:1;jenkins-hbase4:44865] hbase.ChoreService(166): Chore ScheduledChore name=FileSystemUtilizationChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-17 11:15:32,894 DEBUG [RS:0;jenkins-hbase4:35593] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-17 11:15:32,894 DEBUG [RS:0;jenkins-hbase4:35593] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-17 11:15:32,895 DEBUG [RS:0;jenkins-hbase4:35593] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-17 11:15:32,895 DEBUG [RS:0;jenkins-hbase4:35593] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-17 11:15:32,895 DEBUG [RS:0;jenkins-hbase4:35593] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-17 11:15:32,895 DEBUG [RS:0;jenkins-hbase4:35593] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-17 11:15:32,895 DEBUG [RS:0;jenkins-hbase4:35593] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-17 11:15:32,895 DEBUG [RS:0;jenkins-hbase4:35593] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-17 11:15:32,895 DEBUG [RS:0;jenkins-hbase4:35593] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-17 11:15:32,907 INFO [RS:0;jenkins-hbase4:35593] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-17 11:15:32,907 INFO [RS:0;jenkins-hbase4:35593] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-17 11:15:32,907 INFO [RS:0;jenkins-hbase4:35593] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 
2023-07-17 11:15:32,907 INFO [RS:0;jenkins-hbase4:35593] hbase.ChoreService(166): Chore ScheduledChore name=FileSystemUtilizationChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-17 11:15:32,914 INFO [RS:1;jenkins-hbase4:44865] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-17 11:15:32,914 INFO [RS:1;jenkins-hbase4:44865] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,44865,1689592532436-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-17 11:15:32,916 INFO [RS:2;jenkins-hbase4:36481] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-17 11:15:32,916 INFO [RS:2;jenkins-hbase4:36481] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,36481,1689592532488-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-17 11:15:32,929 INFO [RS:0;jenkins-hbase4:35593] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-17 11:15:32,930 INFO [RS:0;jenkins-hbase4:35593] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,35593,1689592532384-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-17 11:15:32,934 INFO [RS:2;jenkins-hbase4:36481] regionserver.Replication(203): jenkins-hbase4.apache.org,36481,1689592532488 started 2023-07-17 11:15:32,934 INFO [RS:2;jenkins-hbase4:36481] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,36481,1689592532488, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:36481, sessionid=0x10172fe901c0003 2023-07-17 11:15:32,934 DEBUG [RS:2;jenkins-hbase4:36481] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-17 11:15:32,934 DEBUG [RS:2;jenkins-hbase4:36481] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,36481,1689592532488 2023-07-17 11:15:32,934 DEBUG [RS:2;jenkins-hbase4:36481] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,36481,1689592532488' 2023-07-17 11:15:32,934 DEBUG [RS:2;jenkins-hbase4:36481] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-17 11:15:32,935 DEBUG [PEWorker-1] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-17 11:15:32,935 DEBUG [RS:2;jenkins-hbase4:36481] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-17 11:15:32,936 DEBUG [RS:2;jenkins-hbase4:36481] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-17 11:15:32,936 DEBUG [RS:2;jenkins-hbase4:36481] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-17 11:15:32,936 DEBUG [RS:2;jenkins-hbase4:36481] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,36481,1689592532488 2023-07-17 11:15:32,936 DEBUG [RS:2;jenkins-hbase4:36481] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,36481,1689592532488' 2023-07-17 11:15:32,936 DEBUG [RS:2;jenkins-hbase4:36481] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-17 11:15:32,937 DEBUG [RS:2;jenkins-hbase4:36481] procedure.ZKProcedureMemberRpcs(154): Looking for 
new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-17 11:15:32,937 DEBUG [RS:2;jenkins-hbase4:36481] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-17 11:15:32,937 INFO [RS:2;jenkins-hbase4:36481] quotas.RegionServerRpcQuotaManager(67): Initializing RPC quota support 2023-07-17 11:15:32,938 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-17 11:15:32,940 INFO [RS:2;jenkins-hbase4:36481] hbase.ChoreService(166): Chore ScheduledChore name=QuotaRefresherChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-17 11:15:32,941 DEBUG [RS:2;jenkins-hbase4:36481] zookeeper.ZKUtil(398): regionserver:36481-0x10172fe901c0003, quorum=127.0.0.1:60132, baseZNode=/hbase Unable to get data of znode /hbase/rpc-throttle because node does not exist (not an error) 2023-07-17 11:15:32,941 INFO [RS:2;jenkins-hbase4:36481] quotas.RegionServerRpcQuotaManager(73): Start rpc quota manager and rpc throttle enabled is true 2023-07-17 11:15:32,941 INFO [RS:2;jenkins-hbase4:36481] hbase.ChoreService(166): Chore ScheduledChore name=SpaceQuotaRefresherChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-17 11:15:32,942 INFO [RS:2;jenkins-hbase4:36481] hbase.ChoreService(166): Chore ScheduledChore name=RegionSizeReportingChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-17 11:15:32,944 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:35063/user/jenkins/test-data/6a8d1f47-f60c-3ded-892b-406ce2aab504/data/hbase/meta/1588230740/info 2023-07-17 11:15:32,944 INFO [RS:1;jenkins-hbase4:44865] regionserver.Replication(203): jenkins-hbase4.apache.org,44865,1689592532436 started 2023-07-17 11:15:32,944 INFO [RS:1;jenkins-hbase4:44865] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,44865,1689592532436, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:44865, sessionid=0x10172fe901c0002 2023-07-17 11:15:32,944 DEBUG [RS:1;jenkins-hbase4:44865] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-17 11:15:32,944 DEBUG [RS:1;jenkins-hbase4:44865] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,44865,1689592532436 2023-07-17 11:15:32,944 DEBUG [RS:1;jenkins-hbase4:44865] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,44865,1689592532436' 2023-07-17 11:15:32,944 DEBUG [RS:1;jenkins-hbase4:44865] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-17 11:15:32,944 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory 
org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-17 11:15:32,952 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-17 11:15:32,952 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-17 11:15:32,955 INFO [RS:0;jenkins-hbase4:35593] regionserver.Replication(203): jenkins-hbase4.apache.org,35593,1689592532384 started 2023-07-17 11:15:32,955 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:35063/user/jenkins/test-data/6a8d1f47-f60c-3ded-892b-406ce2aab504/data/hbase/meta/1588230740/rep_barrier 2023-07-17 11:15:32,955 INFO [RS:0;jenkins-hbase4:35593] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,35593,1689592532384, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:35593, sessionid=0x10172fe901c0001 2023-07-17 11:15:32,955 DEBUG [RS:0;jenkins-hbase4:35593] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-17 11:15:32,955 DEBUG [RS:0;jenkins-hbase4:35593] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,35593,1689592532384 2023-07-17 11:15:32,955 DEBUG [RS:0;jenkins-hbase4:35593] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,35593,1689592532384' 2023-07-17 11:15:32,955 DEBUG [RS:0;jenkins-hbase4:35593] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-17 11:15:32,955 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-17 11:15:32,955 DEBUG [RS:0;jenkins-hbase4:35593] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-17 11:15:32,956 DEBUG [RS:0;jenkins-hbase4:35593] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-17 11:15:32,956 DEBUG [RS:0;jenkins-hbase4:35593] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-17 11:15:32,956 DEBUG [RS:0;jenkins-hbase4:35593] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,35593,1689592532384 2023-07-17 11:15:32,956 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, 
parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-17 11:15:32,956 DEBUG [RS:0;jenkins-hbase4:35593] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,35593,1689592532384' 2023-07-17 11:15:32,956 DEBUG [RS:0;jenkins-hbase4:35593] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-17 11:15:32,956 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-17 11:15:32,956 DEBUG [RS:0;jenkins-hbase4:35593] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-17 11:15:32,957 DEBUG [RS:0;jenkins-hbase4:35593] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-17 11:15:32,957 INFO [RS:0;jenkins-hbase4:35593] quotas.RegionServerRpcQuotaManager(67): Initializing RPC quota support 2023-07-17 11:15:32,957 INFO [RS:0;jenkins-hbase4:35593] hbase.ChoreService(166): Chore ScheduledChore name=QuotaRefresherChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-17 11:15:32,958 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:35063/user/jenkins/test-data/6a8d1f47-f60c-3ded-892b-406ce2aab504/data/hbase/meta/1588230740/table 2023-07-17 11:15:32,958 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-17 11:15:32,959 DEBUG [RS:1;jenkins-hbase4:44865] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-17 11:15:32,960 DEBUG [RS:0;jenkins-hbase4:35593] zookeeper.ZKUtil(398): regionserver:35593-0x10172fe901c0001, quorum=127.0.0.1:60132, baseZNode=/hbase Unable to get data of znode /hbase/rpc-throttle because node does not exist (not an error) 2023-07-17 11:15:32,960 DEBUG [RS:1;jenkins-hbase4:44865] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-17 11:15:32,960 INFO [RS:0;jenkins-hbase4:35593] quotas.RegionServerRpcQuotaManager(73): Start rpc quota manager and rpc throttle enabled is true 2023-07-17 11:15:32,960 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-17 11:15:32,960 INFO [RS:0;jenkins-hbase4:35593] hbase.ChoreService(166): Chore ScheduledChore name=SpaceQuotaRefresherChore, period=60000, unit=MILLISECONDS is enabled. 
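Annotation: the CompactionConfiguration lines above print the effective values of the standard compaction knobs; a fragment listing the corresponding keys, with the values restating the defaults echoed in the log rather than recommendations:
  conf.setLong("hbase.hstore.compaction.min.size", 128L * 1024 * 1024); // minCompactSize 128 MB
  conf.setInt("hbase.hstore.compaction.min", 3);                        // minFilesToCompact
  conf.setInt("hbase.hstore.compaction.max", 10);                       // maxFilesToCompact
  conf.setFloat("hbase.hstore.compaction.ratio", 1.2f);                 // compaction ratio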
2023-07-17 11:15:32,960 DEBUG [RS:1;jenkins-hbase4:44865] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-17 11:15:32,960 DEBUG [RS:1;jenkins-hbase4:44865] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,44865,1689592532436 2023-07-17 11:15:32,960 INFO [RS:0;jenkins-hbase4:35593] hbase.ChoreService(166): Chore ScheduledChore name=RegionSizeReportingChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-17 11:15:32,960 DEBUG [RS:1;jenkins-hbase4:44865] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,44865,1689592532436' 2023-07-17 11:15:32,960 DEBUG [RS:1;jenkins-hbase4:44865] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-17 11:15:32,961 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:35063/user/jenkins/test-data/6a8d1f47-f60c-3ded-892b-406ce2aab504/data/hbase/meta/1588230740 2023-07-17 11:15:32,961 DEBUG [RS:1;jenkins-hbase4:44865] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-17 11:15:32,961 DEBUG [RS:1;jenkins-hbase4:44865] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-17 11:15:32,961 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:35063/user/jenkins/test-data/6a8d1f47-f60c-3ded-892b-406ce2aab504/data/hbase/meta/1588230740 2023-07-17 11:15:32,961 INFO [RS:1;jenkins-hbase4:44865] quotas.RegionServerRpcQuotaManager(67): Initializing RPC quota support 2023-07-17 11:15:32,961 INFO [RS:1;jenkins-hbase4:44865] hbase.ChoreService(166): Chore ScheduledChore name=QuotaRefresherChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-17 11:15:32,962 DEBUG [RS:1;jenkins-hbase4:44865] zookeeper.ZKUtil(398): regionserver:44865-0x10172fe901c0002, quorum=127.0.0.1:60132, baseZNode=/hbase Unable to get data of znode /hbase/rpc-throttle because node does not exist (not an error) 2023-07-17 11:15:32,962 INFO [RS:1;jenkins-hbase4:44865] quotas.RegionServerRpcQuotaManager(73): Start rpc quota manager and rpc throttle enabled is true 2023-07-17 11:15:32,962 INFO [RS:1;jenkins-hbase4:44865] hbase.ChoreService(166): Chore ScheduledChore name=SpaceQuotaRefresherChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-17 11:15:32,962 INFO [RS:1;jenkins-hbase4:44865] hbase.ChoreService(166): Chore ScheduledChore name=RegionSizeReportingChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-17 11:15:32,964 DEBUG [PEWorker-1] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 
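Annotation: the FlushLargeStoresPolicy message above falls back to memStoreFlushSize divided by the number of families because hbase.hregion.percolumnfamilyflush.size.lower.bound is unset; setting it explicitly is one property (fragment; the 16 MB value is only an example):
  conf.setLong("hbase.hregion.percolumnfamilyflush.size.lower.bound", 16L * 1024 * 1024);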
2023-07-17 11:15:32,965 DEBUG [PEWorker-1] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-17 11:15:32,971 DEBUG [PEWorker-1] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:35063/user/jenkins/test-data/6a8d1f47-f60c-3ded-892b-406ce2aab504/data/hbase/meta/1588230740/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-17 11:15:32,971 INFO [PEWorker-1] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10925928640, jitterRate=0.01755639910697937}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-17 11:15:32,971 DEBUG [PEWorker-1] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-17 11:15:32,971 DEBUG [PEWorker-1] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-17 11:15:32,971 INFO [PEWorker-1] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-17 11:15:32,971 DEBUG [PEWorker-1] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-17 11:15:32,971 DEBUG [PEWorker-1] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-17 11:15:32,971 DEBUG [PEWorker-1] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-17 11:15:32,972 INFO [PEWorker-1] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-17 11:15:32,972 DEBUG [PEWorker-1] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-17 11:15:32,973 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_ASSIGN_META, locked=true; InitMetaProcedure table=hbase:meta 2023-07-17 11:15:32,973 INFO [PEWorker-1] procedure.InitMetaProcedure(103): Going to assign meta 2023-07-17 11:15:32,973 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN}] 2023-07-17 11:15:32,976 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN 2023-07-17 11:15:32,977 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN; state=OFFLINE, location=null; forceNewPlan=false, retain=false 2023-07-17 11:15:33,045 INFO [RS:2;jenkins-hbase4:36481] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C36481%2C1689592532488, suffix=, logDir=hdfs://localhost:35063/user/jenkins/test-data/6a8d1f47-f60c-3ded-892b-406ce2aab504/WALs/jenkins-hbase4.apache.org,36481,1689592532488, archiveDir=hdfs://localhost:35063/user/jenkins/test-data/6a8d1f47-f60c-3ded-892b-406ce2aab504/oldWALs, maxLogs=32 2023-07-17 11:15:33,063 INFO [RS:0;jenkins-hbase4:35593] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C35593%2C1689592532384, suffix=, logDir=hdfs://localhost:35063/user/jenkins/test-data/6a8d1f47-f60c-3ded-892b-406ce2aab504/WALs/jenkins-hbase4.apache.org,35593,1689592532384, 
archiveDir=hdfs://localhost:35063/user/jenkins/test-data/6a8d1f47-f60c-3ded-892b-406ce2aab504/oldWALs, maxLogs=32 2023-07-17 11:15:33,065 DEBUG [RS-EventLoopGroup-11-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:40397,DS-b2123e07-7285-417a-a73e-7dfcf35893e7,DISK] 2023-07-17 11:15:33,065 DEBUG [RS-EventLoopGroup-11-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:34553,DS-7f2f10bc-e0ae-418e-83a2-3d189846faa2,DISK] 2023-07-17 11:15:33,065 INFO [RS:1;jenkins-hbase4:44865] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C44865%2C1689592532436, suffix=, logDir=hdfs://localhost:35063/user/jenkins/test-data/6a8d1f47-f60c-3ded-892b-406ce2aab504/WALs/jenkins-hbase4.apache.org,44865,1689592532436, archiveDir=hdfs://localhost:35063/user/jenkins/test-data/6a8d1f47-f60c-3ded-892b-406ce2aab504/oldWALs, maxLogs=32 2023-07-17 11:15:33,065 DEBUG [RS-EventLoopGroup-11-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:45893,DS-86a49bd0-8bb8-4437-b67a-7e6e91743623,DISK] 2023-07-17 11:15:33,071 INFO [RS:2;jenkins-hbase4:36481] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/6a8d1f47-f60c-3ded-892b-406ce2aab504/WALs/jenkins-hbase4.apache.org,36481,1689592532488/jenkins-hbase4.apache.org%2C36481%2C1689592532488.1689592533047 2023-07-17 11:15:33,071 DEBUG [RS:2;jenkins-hbase4:36481] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:34553,DS-7f2f10bc-e0ae-418e-83a2-3d189846faa2,DISK], DatanodeInfoWithStorage[127.0.0.1:45893,DS-86a49bd0-8bb8-4437-b67a-7e6e91743623,DISK], DatanodeInfoWithStorage[127.0.0.1:40397,DS-b2123e07-7285-417a-a73e-7dfcf35893e7,DISK]] 2023-07-17 11:15:33,083 DEBUG [RS-EventLoopGroup-11-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:40397,DS-b2123e07-7285-417a-a73e-7dfcf35893e7,DISK] 2023-07-17 11:15:33,083 DEBUG [RS-EventLoopGroup-11-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:34553,DS-7f2f10bc-e0ae-418e-83a2-3d189846faa2,DISK] 2023-07-17 11:15:33,083 DEBUG [RS-EventLoopGroup-11-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:45893,DS-86a49bd0-8bb8-4437-b67a-7e6e91743623,DISK] 2023-07-17 11:15:33,083 DEBUG [RS-EventLoopGroup-11-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:45893,DS-86a49bd0-8bb8-4437-b67a-7e6e91743623,DISK] 2023-07-17 11:15:33,083 DEBUG [RS-EventLoopGroup-11-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = 
DatanodeInfoWithStorage[127.0.0.1:40397,DS-b2123e07-7285-417a-a73e-7dfcf35893e7,DISK] 2023-07-17 11:15:33,083 DEBUG [RS-EventLoopGroup-11-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:34553,DS-7f2f10bc-e0ae-418e-83a2-3d189846faa2,DISK] 2023-07-17 11:15:33,085 INFO [RS:1;jenkins-hbase4:44865] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/6a8d1f47-f60c-3ded-892b-406ce2aab504/WALs/jenkins-hbase4.apache.org,44865,1689592532436/jenkins-hbase4.apache.org%2C44865%2C1689592532436.1689592533066 2023-07-17 11:15:33,085 INFO [RS:0;jenkins-hbase4:35593] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/6a8d1f47-f60c-3ded-892b-406ce2aab504/WALs/jenkins-hbase4.apache.org,35593,1689592532384/jenkins-hbase4.apache.org%2C35593%2C1689592532384.1689592533065 2023-07-17 11:15:33,085 DEBUG [RS:1;jenkins-hbase4:44865] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:34553,DS-7f2f10bc-e0ae-418e-83a2-3d189846faa2,DISK], DatanodeInfoWithStorage[127.0.0.1:45893,DS-86a49bd0-8bb8-4437-b67a-7e6e91743623,DISK], DatanodeInfoWithStorage[127.0.0.1:40397,DS-b2123e07-7285-417a-a73e-7dfcf35893e7,DISK]] 2023-07-17 11:15:33,086 DEBUG [RS:0;jenkins-hbase4:35593] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:40397,DS-b2123e07-7285-417a-a73e-7dfcf35893e7,DISK], DatanodeInfoWithStorage[127.0.0.1:45893,DS-86a49bd0-8bb8-4437-b67a-7e6e91743623,DISK], DatanodeInfoWithStorage[127.0.0.1:34553,DS-7f2f10bc-e0ae-418e-83a2-3d189846faa2,DISK]] 2023-07-17 11:15:33,127 DEBUG [jenkins-hbase4:39741] assignment.AssignmentManager(2176): Processing assignQueue; systemServersCount=3, allServersCount=3 2023-07-17 11:15:33,128 DEBUG [jenkins-hbase4:39741] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-17 11:15:33,128 DEBUG [jenkins-hbase4:39741] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-17 11:15:33,128 DEBUG [jenkins-hbase4:39741] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-17 11:15:33,128 DEBUG [jenkins-hbase4:39741] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-17 11:15:33,128 DEBUG [jenkins-hbase4:39741] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-17 11:15:33,129 INFO [PEWorker-3] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,36481,1689592532488, state=OPENING 2023-07-17 11:15:33,130 DEBUG [PEWorker-3] zookeeper.MetaTableLocator(240): hbase:meta region location doesn't exist, create it 2023-07-17 11:15:33,132 DEBUG [Listener at localhost/40211-EventThread] zookeeper.ZKWatcher(600): master:39741-0x10172fe901c0000, quorum=127.0.0.1:60132, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-17 11:15:33,133 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=3, ppid=2, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,36481,1689592532488}] 2023-07-17 11:15:33,133 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-17 11:15:33,287 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin 
connection to jenkins-hbase4.apache.org,36481,1689592532488 2023-07-17 11:15:33,287 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-17 11:15:33,289 INFO [RS-EventLoopGroup-11-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:37346, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-17 11:15:33,294 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740 2023-07-17 11:15:33,294 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-17 11:15:33,296 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C36481%2C1689592532488.meta, suffix=.meta, logDir=hdfs://localhost:35063/user/jenkins/test-data/6a8d1f47-f60c-3ded-892b-406ce2aab504/WALs/jenkins-hbase4.apache.org,36481,1689592532488, archiveDir=hdfs://localhost:35063/user/jenkins/test-data/6a8d1f47-f60c-3ded-892b-406ce2aab504/oldWALs, maxLogs=32 2023-07-17 11:15:33,312 DEBUG [RS-EventLoopGroup-11-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:40397,DS-b2123e07-7285-417a-a73e-7dfcf35893e7,DISK] 2023-07-17 11:15:33,312 DEBUG [RS-EventLoopGroup-11-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:34553,DS-7f2f10bc-e0ae-418e-83a2-3d189846faa2,DISK] 2023-07-17 11:15:33,312 DEBUG [RS-EventLoopGroup-11-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:45893,DS-86a49bd0-8bb8-4437-b67a-7e6e91743623,DISK] 2023-07-17 11:15:33,314 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/6a8d1f47-f60c-3ded-892b-406ce2aab504/WALs/jenkins-hbase4.apache.org,36481,1689592532488/jenkins-hbase4.apache.org%2C36481%2C1689592532488.meta.1689592533296.meta 2023-07-17 11:15:33,314 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:40397,DS-b2123e07-7285-417a-a73e-7dfcf35893e7,DISK], DatanodeInfoWithStorage[127.0.0.1:45893,DS-86a49bd0-8bb8-4437-b67a-7e6e91743623,DISK], DatanodeInfoWithStorage[127.0.0.1:34553,DS-7f2f10bc-e0ae-418e-83a2-3d189846faa2,DISK]] 2023-07-17 11:15:33,315 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-07-17 11:15:33,315 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-17 11:15:33,315 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2023-07-17 11:15:33,315 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] 
regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 2023-07-17 11:15:33,315 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-07-17 11:15:33,315 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-17 11:15:33,315 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 1588230740 2023-07-17 11:15:33,316 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 1588230740 2023-07-17 11:15:33,318 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-17 11:15:33,319 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:35063/user/jenkins/test-data/6a8d1f47-f60c-3ded-892b-406ce2aab504/data/hbase/meta/1588230740/info 2023-07-17 11:15:33,319 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:35063/user/jenkins/test-data/6a8d1f47-f60c-3ded-892b-406ce2aab504/data/hbase/meta/1588230740/info 2023-07-17 11:15:33,319 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-17 11:15:33,320 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-17 11:15:33,320 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-17 11:15:33,321 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:35063/user/jenkins/test-data/6a8d1f47-f60c-3ded-892b-406ce2aab504/data/hbase/meta/1588230740/rep_barrier 2023-07-17 11:15:33,321 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:35063/user/jenkins/test-data/6a8d1f47-f60c-3ded-892b-406ce2aab504/data/hbase/meta/1588230740/rep_barrier 2023-07-17 11:15:33,322 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): 
size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-17 11:15:33,322 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-17 11:15:33,323 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-17 11:15:33,323 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:35063/user/jenkins/test-data/6a8d1f47-f60c-3ded-892b-406ce2aab504/data/hbase/meta/1588230740/table 2023-07-17 11:15:33,324 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:35063/user/jenkins/test-data/6a8d1f47-f60c-3ded-892b-406ce2aab504/data/hbase/meta/1588230740/table 2023-07-17 11:15:33,324 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-17 11:15:33,325 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-17 11:15:33,327 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:35063/user/jenkins/test-data/6a8d1f47-f60c-3ded-892b-406ce2aab504/data/hbase/meta/1588230740 2023-07-17 11:15:33,328 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:35063/user/jenkins/test-data/6a8d1f47-f60c-3ded-892b-406ce2aab504/data/hbase/meta/1588230740 2023-07-17 11:15:33,330 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 
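
The CompactionConfiguration lines above list the effective compaction knobs for each store (minFilesToCompact:3, maxFilesToCompact:10, ratio 1.2, off-peak ratio 5.0, throttle point 2684354560). A sketch of the corresponding configuration keys, assuming the standard hbase.hstore.compaction.* property names; the values mirror the log:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class CompactionConfigSketch {
  public static void main(String[] args) {
    Configuration conf = HBaseConfiguration.create();
    // The same knobs CompactionConfiguration prints above, with the
    // defaults visible in the log.
    conf.setInt("hbase.hstore.compaction.min", 3);
    conf.setInt("hbase.hstore.compaction.max", 10);
    conf.setFloat("hbase.hstore.compaction.ratio", 1.2f);
    conf.setFloat("hbase.hstore.compaction.ratio.offpeak", 5.0f);
    conf.setLong("hbase.hstore.compaction.min.size", 128L * 1024 * 1024);
    conf.setLong("hbase.regionserver.thread.compaction.throttle", 2684354560L);
    System.out.println("compaction ratio = "
        + conf.getFloat("hbase.hstore.compaction.ratio", 1.2f));
  }
}
```
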
2023-07-17 11:15:33,332 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-17 11:15:33,332 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10822951200, jitterRate=0.007965877652168274}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-17 11:15:33,333 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-17 11:15:33,333 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:meta,,1.1588230740, pid=3, masterSystemTime=1689592533287 2023-07-17 11:15:33,338 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:meta,,1.1588230740 2023-07-17 11:15:33,338 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 2023-07-17 11:15:33,339 INFO [PEWorker-5] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,36481,1689592532488, state=OPEN 2023-07-17 11:15:33,340 DEBUG [Listener at localhost/40211-EventThread] zookeeper.ZKWatcher(600): master:39741-0x10172fe901c0000, quorum=127.0.0.1:60132, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-17 11:15:33,340 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-17 11:15:33,342 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=3, resume processing ppid=2 2023-07-17 11:15:33,342 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=3, ppid=2, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,36481,1689592532488 in 207 msec 2023-07-17 11:15:33,344 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=2, resume processing ppid=1 2023-07-17 11:15:33,344 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=2, ppid=1, state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN in 369 msec 2023-07-17 11:15:33,345 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=1, state=SUCCESS; InitMetaProcedure table=hbase:meta in 610 msec 2023-07-17 11:15:33,346 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(953): Wait for region servers to report in: status=null, state=RUNNING, startTime=1689592533346, completionTime=-1 2023-07-17 11:15:33,346 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ServerManager(821): Finished waiting on RegionServer count=3; waited=0ms, expected min=3 server(s), max=3 server(s), master is running 2023-07-17 11:15:33,346 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1517): Joining cluster... 
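
Once InitMetaProcedure finishes and the meta location is marked OPEN in ZooKeeper (as above), clients can resolve hbase:meta from that published location. A minimal client-side sketch using RegionLocator; it assumes a reachable cluster on the default configuration:

```java
import org.apache.hadoop.hbase.HConstants;
import org.apache.hadoop.hbase.HRegionLocation;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.RegionLocator;

public class MetaLocationSketch {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection();
         RegionLocator locator = conn.getRegionLocator(TableName.META_TABLE_NAME)) {
      // Resolves the same location the master just published under
      // /hbase/meta-region-server in ZooKeeper.
      HRegionLocation loc = locator.getRegionLocation(HConstants.EMPTY_START_ROW);
      System.out.println("hbase:meta is on " + loc.getServerName());
    }
  }
}
```
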
2023-07-17 11:15:33,349 DEBUG [hconnection-0x36444b75-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-17 11:15:33,350 INFO [RS-EventLoopGroup-11-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:37360, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-17 11:15:33,352 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1529): Number of RegionServers=3 2023-07-17 11:15:33,352 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$RegionInTransitionChore; timeout=60000, timestamp=1689592593352 2023-07-17 11:15:33,352 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$DeadServerMetricRegionChore; timeout=120000, timestamp=1689592653352 2023-07-17 11:15:33,352 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1536): Joined the cluster in 6 msec 2023-07-17 11:15:33,355 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,39741,1689592532293] master.HMaster(2148): Client=null/null create 'hbase:rsgroup', {TABLE_ATTRIBUTES => {coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|', METADATA => {'SPLIT_POLICY' => 'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy'}}}, {NAME => 'm', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-17 11:15:33,356 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,39741,1689592532293] procedure2.ProcedureExecutor(1029): Stored pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:rsgroup 2023-07-17 11:15:33,357 DEBUG [PEWorker-3] procedure2.ProcedureExecutor(1400): LOCK_EVENT_WAIT pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:rsgroup 2023-07-17 11:15:33,358 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,39741,1689592532293-ClusterStatusChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-17 11:15:33,358 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,39741,1689592532293-BalancerChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-17 11:15:33,358 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,39741,1689592532293-RegionNormalizerChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-17 11:15:33,358 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=CatalogJanitor-jenkins-hbase4:39741, period=300000, unit=MILLISECONDS is enabled. 2023-07-17 11:15:33,358 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HbckChore-, period=3600000, unit=MILLISECONDS is enabled. 
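
The master's create of 'hbase:rsgroup' above shows a descriptor with the MultiRowMutationEndpoint coprocessor, DisabledRegionSplitPolicy, and a single 'm' family. A sketch that builds an analogous descriptor for a hypothetical user table (not the system table itself), using the client-side builder API:

```java
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.TableDescriptor;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
import org.apache.hadoop.hbase.regionserver.BloomType;
import org.apache.hadoop.hbase.util.Bytes;

public class RsGroupLikeTableSketch {
  public static void main(String[] args) throws Exception {
    TableDescriptor td = TableDescriptorBuilder
        .newBuilder(TableName.valueOf("demo_rsgroup_like")) // hypothetical table
        .setCoprocessor("org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint")
        .setRegionSplitPolicyClassName(
            "org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy")
        .setColumnFamily(ColumnFamilyDescriptorBuilder
            .newBuilder(Bytes.toBytes("m"))
            .setBloomFilterType(BloomType.ROW)
            .setMaxVersions(1)
            .setBlocksize(65536)
            .build())
        .build();
    try (Connection conn = ConnectionFactory.createConnection();
         Admin admin = conn.getAdmin()) {
      admin.createTable(td); // runs a CreateTableProcedure like pid=4 above
    }
  }
}
```
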
2023-07-17 11:15:33,358 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.TableNamespaceManager(92): Namespace table not found. Creating... 2023-07-17 11:15:33,358 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2148): Client=null/null create 'hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-07-17 11:15:33,359 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_PRE_OPERATION 2023-07-17 11:15:33,359 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=5, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:namespace 2023-07-17 11:15:33,360 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-17 11:15:33,360 DEBUG [master/jenkins-hbase4:0.Chore.1] janitor.CatalogJanitor(175): 2023-07-17 11:15:33,361 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_PRE_OPERATION 2023-07-17 11:15:33,361 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-17 11:15:33,362 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:35063/user/jenkins/test-data/6a8d1f47-f60c-3ded-892b-406ce2aab504/.tmp/data/hbase/rsgroup/d77a0022a9c08cb516405b45516f40b4 2023-07-17 11:15:33,362 DEBUG [HFileArchiver-1] backup.HFileArchiver(153): Directory hdfs://localhost:35063/user/jenkins/test-data/6a8d1f47-f60c-3ded-892b-406ce2aab504/.tmp/data/hbase/rsgroup/d77a0022a9c08cb516405b45516f40b4 empty. 2023-07-17 11:15:33,363 DEBUG [HFileArchiver-1] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:35063/user/jenkins/test-data/6a8d1f47-f60c-3ded-892b-406ce2aab504/.tmp/data/hbase/rsgroup/d77a0022a9c08cb516405b45516f40b4 2023-07-17 11:15:33,363 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived hbase:rsgroup regions 2023-07-17 11:15:33,363 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:35063/user/jenkins/test-data/6a8d1f47-f60c-3ded-892b-406ce2aab504/.tmp/data/hbase/namespace/a10b405e356fcaddfcbc67928a39fb55 2023-07-17 11:15:33,363 DEBUG [HFileArchiver-2] backup.HFileArchiver(153): Directory hdfs://localhost:35063/user/jenkins/test-data/6a8d1f47-f60c-3ded-892b-406ce2aab504/.tmp/data/hbase/namespace/a10b405e356fcaddfcbc67928a39fb55 empty. 
2023-07-17 11:15:33,364 DEBUG [HFileArchiver-2] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:35063/user/jenkins/test-data/6a8d1f47-f60c-3ded-892b-406ce2aab504/.tmp/data/hbase/namespace/a10b405e356fcaddfcbc67928a39fb55 2023-07-17 11:15:33,364 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(328): Archived hbase:namespace regions 2023-07-17 11:15:33,395 DEBUG [PEWorker-4] util.FSTableDescriptors(570): Wrote into hdfs://localhost:35063/user/jenkins/test-data/6a8d1f47-f60c-3ded-892b-406ce2aab504/.tmp/data/hbase/rsgroup/.tabledesc/.tableinfo.0000000001 2023-07-17 11:15:33,396 DEBUG [PEWorker-5] util.FSTableDescriptors(570): Wrote into hdfs://localhost:35063/user/jenkins/test-data/6a8d1f47-f60c-3ded-892b-406ce2aab504/.tmp/data/hbase/namespace/.tabledesc/.tableinfo.0000000001 2023-07-17 11:15:33,396 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(7675): creating {ENCODED => d77a0022a9c08cb516405b45516f40b4, NAME => 'hbase:rsgroup,,1689592533355.d77a0022a9c08cb516405b45516f40b4.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:rsgroup', {TABLE_ATTRIBUTES => {coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|', METADATA => {'SPLIT_POLICY' => 'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy'}}}, {NAME => 'm', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:35063/user/jenkins/test-data/6a8d1f47-f60c-3ded-892b-406ce2aab504/.tmp 2023-07-17 11:15:33,398 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(7675): creating {ENCODED => a10b405e356fcaddfcbc67928a39fb55, NAME => 'hbase:namespace,,1689592533358.a10b405e356fcaddfcbc67928a39fb55.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:35063/user/jenkins/test-data/6a8d1f47-f60c-3ded-892b-406ce2aab504/.tmp 2023-07-17 11:15:33,418 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1689592533358.a10b405e356fcaddfcbc67928a39fb55.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-17 11:15:33,418 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1604): Closing a10b405e356fcaddfcbc67928a39fb55, disabling compactions & flushes 2023-07-17 11:15:33,418 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1689592533358.a10b405e356fcaddfcbc67928a39fb55. 2023-07-17 11:15:33,418 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1689592533358.a10b405e356fcaddfcbc67928a39fb55. 2023-07-17 11:15:33,418 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1689592533358.a10b405e356fcaddfcbc67928a39fb55. 
after waiting 0 ms 2023-07-17 11:15:33,418 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1689592533358.a10b405e356fcaddfcbc67928a39fb55. 2023-07-17 11:15:33,418 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1838): Closed hbase:namespace,,1689592533358.a10b405e356fcaddfcbc67928a39fb55. 2023-07-17 11:15:33,418 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1689592533355.d77a0022a9c08cb516405b45516f40b4.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-17 11:15:33,418 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1558): Region close journal for a10b405e356fcaddfcbc67928a39fb55: 2023-07-17 11:15:33,419 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1604): Closing d77a0022a9c08cb516405b45516f40b4, disabling compactions & flushes 2023-07-17 11:15:33,419 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1689592533355.d77a0022a9c08cb516405b45516f40b4. 2023-07-17 11:15:33,419 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1689592533355.d77a0022a9c08cb516405b45516f40b4. 2023-07-17 11:15:33,419 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1689592533355.d77a0022a9c08cb516405b45516f40b4. after waiting 0 ms 2023-07-17 11:15:33,419 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1689592533355.d77a0022a9c08cb516405b45516f40b4. 2023-07-17 11:15:33,419 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1689592533355.d77a0022a9c08cb516405b45516f40b4. 2023-07-17 11:15:33,419 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1558): Region close journal for d77a0022a9c08cb516405b45516f40b4: 2023-07-17 11:15:33,421 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ADD_TO_META 2023-07-17 11:15:33,421 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_ADD_TO_META 2023-07-17 11:15:33,422 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:namespace,,1689592533358.a10b405e356fcaddfcbc67928a39fb55.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689592533422"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689592533422"}]},"ts":"1689592533422"} 2023-07-17 11:15:33,422 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:rsgroup,,1689592533355.d77a0022a9c08cb516405b45516f40b4.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689592533422"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689592533422"}]},"ts":"1689592533422"} 2023-07-17 11:15:33,425 INFO [PEWorker-5] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 
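
The MetaTableAccessor Put entries above write regioninfo/state (and later sn/server) qualifiers into the info family of hbase:meta. A client-side sketch that scans that family and prints the same qualifiers; the column names match the log, the rest is standard client API:

```java
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class MetaScanSketch {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection();
         Table meta = conn.getTable(TableName.META_TABLE_NAME);
         ResultScanner scanner =
             meta.getScanner(new Scan().addFamily(Bytes.toBytes("info")))) {
      for (Result r : scanner) {
        // "state" and "server" are the qualifiers visible in the Put JSON above.
        byte[] state = r.getValue(Bytes.toBytes("info"), Bytes.toBytes("state"));
        byte[] server = r.getValue(Bytes.toBytes("info"), Bytes.toBytes("server"));
        System.out.println(Bytes.toStringBinary(r.getRow())
            + " state=" + (state == null ? "?" : Bytes.toString(state))
            + " server=" + (server == null ? "?" : Bytes.toString(server)));
      }
    }
  }
}
```
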
2023-07-17 11:15:33,425 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-17 11:15:33,425 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689592533425"}]},"ts":"1689592533425"} 2023-07-17 11:15:33,425 INFO [PEWorker-4] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-17 11:15:33,426 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-17 11:15:33,426 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:rsgroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689592533426"}]},"ts":"1689592533426"} 2023-07-17 11:15:33,427 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLING in hbase:meta 2023-07-17 11:15:33,427 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=hbase:rsgroup, state=ENABLING in hbase:meta 2023-07-17 11:15:33,480 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-17 11:15:33,480 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-17 11:15:33,481 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-17 11:15:33,481 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-17 11:15:33,481 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-17 11:15:33,483 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=6, ppid=5, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=a10b405e356fcaddfcbc67928a39fb55, ASSIGN}] 2023-07-17 11:15:33,486 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=6, ppid=5, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=a10b405e356fcaddfcbc67928a39fb55, ASSIGN 2023-07-17 11:15:33,486 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-17 11:15:33,486 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-17 11:15:33,486 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-17 11:15:33,486 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-17 11:15:33,486 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-17 11:15:33,486 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=7, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:rsgroup, region=d77a0022a9c08cb516405b45516f40b4, ASSIGN}] 2023-07-17 11:15:33,487 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=6, ppid=5, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure 
table=hbase:namespace, region=a10b405e356fcaddfcbc67928a39fb55, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,44865,1689592532436; forceNewPlan=false, retain=false 2023-07-17 11:15:33,487 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=7, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:rsgroup, region=d77a0022a9c08cb516405b45516f40b4, ASSIGN 2023-07-17 11:15:33,488 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=7, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:rsgroup, region=d77a0022a9c08cb516405b45516f40b4, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,44865,1689592532436; forceNewPlan=false, retain=false 2023-07-17 11:15:33,488 INFO [jenkins-hbase4:39741] balancer.BaseLoadBalancer(1545): Reassigned 2 regions. 2 retained the pre-restart assignment. 2023-07-17 11:15:33,489 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=6 updating hbase:meta row=a10b405e356fcaddfcbc67928a39fb55, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,44865,1689592532436 2023-07-17 11:15:33,489 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:namespace,,1689592533358.a10b405e356fcaddfcbc67928a39fb55.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689592533489"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689592533489"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689592533489"}]},"ts":"1689592533489"} 2023-07-17 11:15:33,490 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=7 updating hbase:meta row=d77a0022a9c08cb516405b45516f40b4, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,44865,1689592532436 2023-07-17 11:15:33,490 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:rsgroup,,1689592533355.d77a0022a9c08cb516405b45516f40b4.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689592533490"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689592533490"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689592533490"}]},"ts":"1689592533490"} 2023-07-17 11:15:33,491 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=8, ppid=6, state=RUNNABLE; OpenRegionProcedure a10b405e356fcaddfcbc67928a39fb55, server=jenkins-hbase4.apache.org,44865,1689592532436}] 2023-07-17 11:15:33,493 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=9, ppid=7, state=RUNNABLE; OpenRegionProcedure d77a0022a9c08cb516405b45516f40b4, server=jenkins-hbase4.apache.org,44865,1689592532436}] 2023-07-17 11:15:33,645 DEBUG [RSProcedureDispatcher-pool-1] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,44865,1689592532436 2023-07-17 11:15:33,645 DEBUG [RSProcedureDispatcher-pool-1] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-17 11:15:33,647 INFO [RS-EventLoopGroup-10-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:34398, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-17 11:15:33,652 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1689592533358.a10b405e356fcaddfcbc67928a39fb55. 
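
The TransitRegionStateProcedure entries above pick a target server and then drive an OpenRegionProcedure on it. A sketch of triggering the same transition from a client via Admin.move(); the table and server names are hypothetical:

```java
import org.apache.hadoop.hbase.ServerName;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.RegionInfo;
import org.apache.hadoop.hbase.util.Bytes;

public class MoveRegionSketch {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection();
         Admin admin = conn.getAdmin()) {
      TableName table = TableName.valueOf("demo_table"); // hypothetical table
      ServerName target =
          ServerName.valueOf("host1.example.com,16020,1689592532436"); // hypothetical server
      for (RegionInfo region : admin.getRegions(table)) {
        // Each move() drives the same TransitRegionStateProcedure /
        // OpenRegionProcedure chain that the master logs above.
        admin.move(Bytes.toBytes(region.getEncodedName()), target);
      }
    }
  }
}
```
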
2023-07-17 11:15:33,652 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => a10b405e356fcaddfcbc67928a39fb55, NAME => 'hbase:namespace,,1689592533358.a10b405e356fcaddfcbc67928a39fb55.', STARTKEY => '', ENDKEY => ''} 2023-07-17 11:15:33,652 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace a10b405e356fcaddfcbc67928a39fb55 2023-07-17 11:15:33,652 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1689592533358.a10b405e356fcaddfcbc67928a39fb55.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-17 11:15:33,652 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for a10b405e356fcaddfcbc67928a39fb55 2023-07-17 11:15:33,652 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for a10b405e356fcaddfcbc67928a39fb55 2023-07-17 11:15:33,653 INFO [StoreOpener-a10b405e356fcaddfcbc67928a39fb55-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region a10b405e356fcaddfcbc67928a39fb55 2023-07-17 11:15:33,655 DEBUG [StoreOpener-a10b405e356fcaddfcbc67928a39fb55-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:35063/user/jenkins/test-data/6a8d1f47-f60c-3ded-892b-406ce2aab504/data/hbase/namespace/a10b405e356fcaddfcbc67928a39fb55/info 2023-07-17 11:15:33,655 DEBUG [StoreOpener-a10b405e356fcaddfcbc67928a39fb55-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:35063/user/jenkins/test-data/6a8d1f47-f60c-3ded-892b-406ce2aab504/data/hbase/namespace/a10b405e356fcaddfcbc67928a39fb55/info 2023-07-17 11:15:33,655 INFO [StoreOpener-a10b405e356fcaddfcbc67928a39fb55-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region a10b405e356fcaddfcbc67928a39fb55 columnFamilyName info 2023-07-17 11:15:33,656 INFO [StoreOpener-a10b405e356fcaddfcbc67928a39fb55-1] regionserver.HStore(310): Store=a10b405e356fcaddfcbc67928a39fb55/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-17 11:15:33,657 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:35063/user/jenkins/test-data/6a8d1f47-f60c-3ded-892b-406ce2aab504/data/hbase/namespace/a10b405e356fcaddfcbc67928a39fb55 2023-07-17 11:15:33,657 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] 
regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:35063/user/jenkins/test-data/6a8d1f47-f60c-3ded-892b-406ce2aab504/data/hbase/namespace/a10b405e356fcaddfcbc67928a39fb55 2023-07-17 11:15:33,659 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for a10b405e356fcaddfcbc67928a39fb55 2023-07-17 11:15:33,662 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:35063/user/jenkins/test-data/6a8d1f47-f60c-3ded-892b-406ce2aab504/data/hbase/namespace/a10b405e356fcaddfcbc67928a39fb55/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-17 11:15:33,662 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened a10b405e356fcaddfcbc67928a39fb55; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11983011360, jitterRate=0.11600489914417267}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-17 11:15:33,662 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for a10b405e356fcaddfcbc67928a39fb55: 2023-07-17 11:15:33,663 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:namespace,,1689592533358.a10b405e356fcaddfcbc67928a39fb55., pid=8, masterSystemTime=1689592533645 2023-07-17 11:15:33,667 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:namespace,,1689592533358.a10b405e356fcaddfcbc67928a39fb55. 2023-07-17 11:15:33,667 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1689592533358.a10b405e356fcaddfcbc67928a39fb55. 2023-07-17 11:15:33,667 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:rsgroup,,1689592533355.d77a0022a9c08cb516405b45516f40b4. 
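
The hbase:rsgroup region being opened here backs the RSGroup admin endpoint that TestRSGroupsAdmin1 exercises. A hedged sketch of the client side, assuming the RSGroupAdminClient API from the hbase-rsgroup module on branch-2.4; the group name and server address are hypothetical:

```java
import java.util.Collections;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

public class RsGroupAdminSketch {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection()) {
      // Talks to the master-side RSGroup endpoint, which persists its state
      // in the hbase:rsgroup region being opened above.
      RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
      rsGroupAdmin.addRSGroup("demo_group"); // hypothetical group
      rsGroupAdmin.moveServers(
          Collections.singleton(Address.fromParts("host1.example.com", 16020)), // hypothetical server
          "demo_group");
      RSGroupInfo info = rsGroupAdmin.getRSGroupInfo("demo_group");
      System.out.println(info.getName() + " servers=" + info.getServers());
    }
  }
}
```
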
2023-07-17 11:15:33,668 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => d77a0022a9c08cb516405b45516f40b4, NAME => 'hbase:rsgroup,,1689592533355.d77a0022a9c08cb516405b45516f40b4.', STARTKEY => '', ENDKEY => ''} 2023-07-17 11:15:33,668 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=6 updating hbase:meta row=a10b405e356fcaddfcbc67928a39fb55, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,44865,1689592532436 2023-07-17 11:15:33,668 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-17 11:15:33,668 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1689592533358.a10b405e356fcaddfcbc67928a39fb55.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689592533668"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689592533668"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689592533668"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689592533668"}]},"ts":"1689592533668"} 2023-07-17 11:15:33,668 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:rsgroup,,1689592533355.d77a0022a9c08cb516405b45516f40b4. service=MultiRowMutationService 2023-07-17 11:15:33,668 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:rsgroup successfully. 2023-07-17 11:15:33,668 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table rsgroup d77a0022a9c08cb516405b45516f40b4 2023-07-17 11:15:33,668 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1689592533355.d77a0022a9c08cb516405b45516f40b4.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-17 11:15:33,668 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for d77a0022a9c08cb516405b45516f40b4 2023-07-17 11:15:33,668 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for d77a0022a9c08cb516405b45516f40b4 2023-07-17 11:15:33,670 INFO [StoreOpener-d77a0022a9c08cb516405b45516f40b4-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family m of region d77a0022a9c08cb516405b45516f40b4 2023-07-17 11:15:33,672 DEBUG [StoreOpener-d77a0022a9c08cb516405b45516f40b4-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:35063/user/jenkins/test-data/6a8d1f47-f60c-3ded-892b-406ce2aab504/data/hbase/rsgroup/d77a0022a9c08cb516405b45516f40b4/m 2023-07-17 11:15:33,672 DEBUG [StoreOpener-d77a0022a9c08cb516405b45516f40b4-1] util.CommonFSUtils(522): Set storagePolicy=HOT for 
path=hdfs://localhost:35063/user/jenkins/test-data/6a8d1f47-f60c-3ded-892b-406ce2aab504/data/hbase/rsgroup/d77a0022a9c08cb516405b45516f40b4/m 2023-07-17 11:15:33,672 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=8, resume processing ppid=6 2023-07-17 11:15:33,672 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=8, ppid=6, state=SUCCESS; OpenRegionProcedure a10b405e356fcaddfcbc67928a39fb55, server=jenkins-hbase4.apache.org,44865,1689592532436 in 179 msec 2023-07-17 11:15:33,672 INFO [StoreOpener-d77a0022a9c08cb516405b45516f40b4-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region d77a0022a9c08cb516405b45516f40b4 columnFamilyName m 2023-07-17 11:15:33,673 INFO [StoreOpener-d77a0022a9c08cb516405b45516f40b4-1] regionserver.HStore(310): Store=d77a0022a9c08cb516405b45516f40b4/m, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-17 11:15:33,674 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:35063/user/jenkins/test-data/6a8d1f47-f60c-3ded-892b-406ce2aab504/data/hbase/rsgroup/d77a0022a9c08cb516405b45516f40b4 2023-07-17 11:15:33,674 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=6, resume processing ppid=5 2023-07-17 11:15:33,674 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=6, ppid=5, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=a10b405e356fcaddfcbc67928a39fb55, ASSIGN in 190 msec 2023-07-17 11:15:33,674 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:35063/user/jenkins/test-data/6a8d1f47-f60c-3ded-892b-406ce2aab504/data/hbase/rsgroup/d77a0022a9c08cb516405b45516f40b4 2023-07-17 11:15:33,675 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-17 11:15:33,675 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689592533675"}]},"ts":"1689592533675"} 2023-07-17 11:15:33,676 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLED in hbase:meta 2023-07-17 11:15:33,677 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for d77a0022a9c08cb516405b45516f40b4 2023-07-17 11:15:33,678 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_POST_OPERATION 2023-07-17 11:15:33,679 DEBUG 
[RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:35063/user/jenkins/test-data/6a8d1f47-f60c-3ded-892b-406ce2aab504/data/hbase/rsgroup/d77a0022a9c08cb516405b45516f40b4/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-17 11:15:33,680 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=5, state=SUCCESS; CreateTableProcedure table=hbase:namespace in 319 msec 2023-07-17 11:15:33,680 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened d77a0022a9c08cb516405b45516f40b4; next sequenceid=2; org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy@698c6505, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-17 11:15:33,680 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for d77a0022a9c08cb516405b45516f40b4: 2023-07-17 11:15:33,680 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:39741-0x10172fe901c0000, quorum=127.0.0.1:60132, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/namespace 2023-07-17 11:15:33,681 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:rsgroup,,1689592533355.d77a0022a9c08cb516405b45516f40b4., pid=9, masterSystemTime=1689592533645 2023-07-17 11:15:33,681 DEBUG [Listener at localhost/40211-EventThread] zookeeper.ZKWatcher(600): master:39741-0x10172fe901c0000, quorum=127.0.0.1:60132, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/namespace 2023-07-17 11:15:33,681 DEBUG [Listener at localhost/40211-EventThread] zookeeper.ZKWatcher(600): master:39741-0x10172fe901c0000, quorum=127.0.0.1:60132, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-17 11:15:33,683 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:rsgroup,,1689592533355.d77a0022a9c08cb516405b45516f40b4. 2023-07-17 11:15:33,683 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:rsgroup,,1689592533355.d77a0022a9c08cb516405b45516f40b4. 
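Illustrative sketch (not part of the captured output): the entries above show the hbase:rsgroup region opening with the MultiRowMutationEndpoint coprocessor loaded from the table descriptor (HTD). Assuming the HBase 2.x client API, a table descriptor declares that endpoint roughly as follows; the table name here is a hypothetical stand-in, while family 'm' matches the rsgroup store seen in the log.

import java.io.IOException;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.TableDescriptor;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;

public class RsGroupLikeDescriptorSketch {
  // Declare the MultiRowMutationEndpoint on a table descriptor, mirroring the
  // hbase:rsgroup HTD whose coprocessor load is logged above.
  public static TableDescriptor build() throws IOException {
    return TableDescriptorBuilder
        .newBuilder(TableName.valueOf("example", "rsgroup_like")) // hypothetical name
        .setColumnFamily(ColumnFamilyDescriptorBuilder.of("m"))   // rsgroup data lives in family 'm'
        .setCoprocessor("org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint")
        .build();
  }
}

Referencing the coprocessor by class name string keeps the client free of a compile-time dependency on hbase-server.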
2023-07-17 11:15:33,684 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=7 updating hbase:meta row=d77a0022a9c08cb516405b45516f40b4, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,44865,1689592532436 2023-07-17 11:15:33,684 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:rsgroup,,1689592533355.d77a0022a9c08cb516405b45516f40b4.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689592533684"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689592533684"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689592533684"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689592533684"}]},"ts":"1689592533684"} 2023-07-17 11:15:33,685 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-17 11:15:33,687 INFO [RS-EventLoopGroup-10-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:34412, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-17 11:15:33,691 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=9, resume processing ppid=7 2023-07-17 11:15:33,691 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=9, ppid=7, state=SUCCESS; OpenRegionProcedure d77a0022a9c08cb516405b45516f40b4, server=jenkins-hbase4.apache.org,44865,1689592532436 in 194 msec 2023-07-17 11:15:33,691 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=10, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=default 2023-07-17 11:15:33,694 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=7, resume processing ppid=4 2023-07-17 11:15:33,694 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=7, ppid=4, state=SUCCESS; TransitRegionStateProcedure table=hbase:rsgroup, region=d77a0022a9c08cb516405b45516f40b4, ASSIGN in 205 msec 2023-07-17 11:15:33,695 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-17 11:15:33,695 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:rsgroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689592533695"}]},"ts":"1689592533695"} 2023-07-17 11:15:33,697 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=hbase:rsgroup, state=ENABLED in hbase:meta 2023-07-17 11:15:33,700 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_POST_OPERATION 2023-07-17 11:15:33,701 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=4, state=SUCCESS; CreateTableProcedure table=hbase:rsgroup in 345 msec 2023-07-17 11:15:33,702 DEBUG [Listener at localhost/40211-EventThread] zookeeper.ZKWatcher(600): master:39741-0x10172fe901c0000, quorum=127.0.0.1:60132, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-17 11:15:33,706 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=10, state=SUCCESS; CreateNamespaceProcedure, namespace=default in 14 msec 2023-07-17 11:15:33,714 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] 
procedure2.ProcedureExecutor(1029): Stored pid=11, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=hbase 2023-07-17 11:15:33,722 DEBUG [Listener at localhost/40211-EventThread] zookeeper.ZKWatcher(600): master:39741-0x10172fe901c0000, quorum=127.0.0.1:60132, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-17 11:15:33,725 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=11, state=SUCCESS; CreateNamespaceProcedure, namespace=hbase in 11 msec 2023-07-17 11:15:33,739 DEBUG [Listener at localhost/40211-EventThread] zookeeper.ZKWatcher(600): master:39741-0x10172fe901c0000, quorum=127.0.0.1:60132, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/default 2023-07-17 11:15:33,741 DEBUG [Listener at localhost/40211-EventThread] zookeeper.ZKWatcher(600): master:39741-0x10172fe901c0000, quorum=127.0.0.1:60132, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/hbase 2023-07-17 11:15:33,741 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1083): Master has completed initialization 1.180sec 2023-07-17 11:15:33,741 INFO [master/jenkins-hbase4:0:becomeActiveMaster] quotas.MasterQuotaManager(103): Quota table not found. Creating... 2023-07-17 11:15:33,741 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2148): Client=null/null create 'hbase:quota', {NAME => 'q', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'u', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-17 11:15:33,742 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=12, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:quota 2023-07-17 11:15:33,742 INFO [master/jenkins-hbase4:0:becomeActiveMaster] quotas.MasterQuotaManager(107): Initializing quota support 2023-07-17 11:15:33,745 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_PRE_OPERATION 2023-07-17 11:15:33,746 INFO [master/jenkins-hbase4:0:becomeActiveMaster] namespace.NamespaceStateManager(59): Namespace State Manager started. 2023-07-17 11:15:33,746 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-17 11:15:33,748 DEBUG [HFileArchiver-7] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:35063/user/jenkins/test-data/6a8d1f47-f60c-3ded-892b-406ce2aab504/.tmp/data/hbase/quota/2d4a82ccce30e6b66fadfd02201a4e16 2023-07-17 11:15:33,749 DEBUG [HFileArchiver-7] backup.HFileArchiver(153): Directory hdfs://localhost:35063/user/jenkins/test-data/6a8d1f47-f60c-3ded-892b-406ce2aab504/.tmp/data/hbase/quota/2d4a82ccce30e6b66fadfd02201a4e16 empty. 
2023-07-17 11:15:33,749 DEBUG [HFileArchiver-7] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:35063/user/jenkins/test-data/6a8d1f47-f60c-3ded-892b-406ce2aab504/.tmp/data/hbase/quota/2d4a82ccce30e6b66fadfd02201a4e16 2023-07-17 11:15:33,749 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(328): Archived hbase:quota regions 2023-07-17 11:15:33,752 INFO [master/jenkins-hbase4:0:becomeActiveMaster] namespace.NamespaceStateManager(222): Finished updating state of 2 namespaces. 2023-07-17 11:15:33,752 INFO [master/jenkins-hbase4:0:becomeActiveMaster] namespace.NamespaceAuditor(50): NamespaceAuditor started. 2023-07-17 11:15:33,755 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=QuotaObserverChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-17 11:15:33,756 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=QuotaObserverChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-17 11:15:33,756 INFO [master/jenkins-hbase4:0:becomeActiveMaster] slowlog.SlowLogMasterService(57): Slow/Large requests logging to system table hbase:slowlog is disabled. Quitting. 2023-07-17 11:15:33,756 INFO [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKWatcher(269): not a secure deployment, proceeding 2023-07-17 11:15:33,756 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,39741,1689592532293-ExpiredMobFileCleanerChore, period=86400, unit=SECONDS is enabled. 2023-07-17 11:15:33,756 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,39741,1689592532293-MobCompactionChore, period=604800, unit=SECONDS is enabled. 
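Illustrative sketch (not part of the captured output): the master is creating the hbase:quota system table (families q and u) and starting the quota chores. Once that table exists, throttle and space quotas set from a client are persisted into it. A hedged sketch of the standard Admin call for a throttle quota follows, assuming an already-open Connection named conn; the user name and limit are made-up values.

import java.io.IOException;
import java.util.concurrent.TimeUnit;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.quotas.QuotaSettingsFactory;
import org.apache.hadoop.hbase.quotas.ThrottleType;

public class ThrottleQuotaSketch {
  // Persist a request-rate throttle into hbase:quota via the Admin API.
  static void throttleUser(Connection conn) throws IOException {
    try (Admin admin = conn.getAdmin()) {
      admin.setQuota(QuotaSettingsFactory.throttleUser(
          "jenkins",                    // hypothetical user name
          ThrottleType.REQUEST_NUMBER,  // limit by request count
          100, TimeUnit.SECONDS));      // 100 requests per second
    }
  }
}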
2023-07-17 11:15:33,764 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1175): Balancer post startup initialization complete, took 0 seconds 2023-07-17 11:15:33,771 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost:35063/user/jenkins/test-data/6a8d1f47-f60c-3ded-892b-406ce2aab504/.tmp/data/hbase/quota/.tabledesc/.tableinfo.0000000001 2023-07-17 11:15:33,775 INFO [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(7675): creating {ENCODED => 2d4a82ccce30e6b66fadfd02201a4e16, NAME => 'hbase:quota,,1689592533741.2d4a82ccce30e6b66fadfd02201a4e16.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:quota', {NAME => 'q', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'u', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:35063/user/jenkins/test-data/6a8d1f47-f60c-3ded-892b-406ce2aab504/.tmp 2023-07-17 11:15:33,776 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(130): Registering adapter for the MetricRegistry: Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.quotas.MasterQuotasObserver 2023-07-17 11:15:33,777 INFO [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(134): Registering Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.quotas.MasterQuotasObserver Metrics about HBase MasterObservers 2023-07-17 11:15:33,780 DEBUG [Listener at localhost/40211] zookeeper.ReadOnlyZKClient(139): Connect 0x18aff7c6 to 127.0.0.1:60132 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-17 11:15:33,796 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,39741,1689592532293] rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(840): RSGroup table=hbase:rsgroup is online, refreshing cached information 2023-07-17 11:15:33,796 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,39741,1689592532293] rsgroup.RSGroupInfoManagerImpl(534): Refreshing in Online mode. 
2023-07-17 11:15:33,796 DEBUG [Listener at localhost/40211] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@9f33695, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-17 11:15:33,803 DEBUG [hconnection-0x3abb410a-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-17 11:15:33,803 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(866): Instantiated hbase:quota,,1689592533741.2d4a82ccce30e6b66fadfd02201a4e16.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-17 11:15:33,803 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1604): Closing 2d4a82ccce30e6b66fadfd02201a4e16, disabling compactions & flushes 2023-07-17 11:15:33,803 INFO [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1626): Closing region hbase:quota,,1689592533741.2d4a82ccce30e6b66fadfd02201a4e16. 2023-07-17 11:15:33,803 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:quota,,1689592533741.2d4a82ccce30e6b66fadfd02201a4e16. 2023-07-17 11:15:33,803 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:quota,,1689592533741.2d4a82ccce30e6b66fadfd02201a4e16. after waiting 0 ms 2023-07-17 11:15:33,803 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:quota,,1689592533741.2d4a82ccce30e6b66fadfd02201a4e16. 2023-07-17 11:15:33,803 INFO [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1838): Closed hbase:quota,,1689592533741.2d4a82ccce30e6b66fadfd02201a4e16. 
2023-07-17 11:15:33,803 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1558): Region close journal for 2d4a82ccce30e6b66fadfd02201a4e16: 2023-07-17 11:15:33,804 DEBUG [Listener at localhost/40211-EventThread] zookeeper.ZKWatcher(600): master:39741-0x10172fe901c0000, quorum=127.0.0.1:60132, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-17 11:15:33,804 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,39741,1689592532293] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-17 11:15:33,805 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,39741,1689592532293] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-17 11:15:33,805 INFO [RS-EventLoopGroup-11-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:37376, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-17 11:15:33,806 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,39741,1689592532293] rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(823): GroupBasedLoadBalancer is now online 2023-07-17 11:15:33,807 INFO [Listener at localhost/40211] hbase.HBaseTestingUtility(1145): Minicluster is up; activeMaster=jenkins-hbase4.apache.org,39741,1689592532293 2023-07-17 11:15:33,807 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_ADD_TO_META 2023-07-17 11:15:33,807 INFO [Listener at localhost/40211] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-17 11:15:33,810 DEBUG [Listener at localhost/40211] ipc.RpcConnection(124): Using SIMPLE authentication for service=MasterService, sasl=false 2023-07-17 11:15:33,810 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:quota,,1689592533741.2d4a82ccce30e6b66fadfd02201a4e16.","families":{"info":[{"qualifier":"regioninfo","vlen":37,"tag":[],"timestamp":"1689592533808"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689592533808"}]},"ts":"1689592533808"} 2023-07-17 11:15:33,812 INFO [RS-EventLoopGroup-8-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:36976, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=MasterService 2023-07-17 11:15:33,812 INFO [PEWorker-1] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 
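Illustrative sketch (not part of the captured output): with the rsgroup table online and the GroupBasedLoadBalancer active, group metadata can be read through the RSGroup admin endpoint (the "list rsgroup" request seen further down goes through the same RSGroupAdminService). A minimal sketch, assuming the RSGroupAdminClient helper from the hbase-rsgroup module on branch-2 and an existing Connection conn:

import java.io.IOException;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

public class ListRsGroupsSketch {
  // Print every region server group known to the master (e.g. "default").
  static void listGroups(Connection conn) throws IOException {
    RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
    for (RSGroupInfo group : rsGroupAdmin.listRSGroups()) {
      System.out.println(group.getName() + " servers=" + group.getServers()
          + " tables=" + group.getTables());
    }
  }
}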
2023-07-17 11:15:33,813 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-17 11:15:33,813 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:quota","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689592533813"}]},"ts":"1689592533813"} 2023-07-17 11:15:33,814 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=hbase:quota, state=ENABLING in hbase:meta 2023-07-17 11:15:33,815 DEBUG [Listener at localhost/40211-EventThread] zookeeper.ZKWatcher(600): master:39741-0x10172fe901c0000, quorum=127.0.0.1:60132, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/balancer 2023-07-17 11:15:33,815 DEBUG [Listener at localhost/40211-EventThread] zookeeper.ZKWatcher(600): master:39741-0x10172fe901c0000, quorum=127.0.0.1:60132, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-17 11:15:33,816 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39741] master.MasterRpcServices(492): Client=jenkins//172.31.14.131 set balanceSwitch=false 2023-07-17 11:15:33,817 DEBUG [Listener at localhost/40211] zookeeper.ReadOnlyZKClient(139): Connect 0x4e7f4242 to 127.0.0.1:60132 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-17 11:15:33,819 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-17 11:15:33,819 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-17 11:15:33,819 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-17 11:15:33,819 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-17 11:15:33,819 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-17 11:15:33,819 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:quota, region=2d4a82ccce30e6b66fadfd02201a4e16, ASSIGN}] 2023-07-17 11:15:33,820 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:quota, region=2d4a82ccce30e6b66fadfd02201a4e16, ASSIGN 2023-07-17 11:15:33,821 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:quota, region=2d4a82ccce30e6b66fadfd02201a4e16, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,36481,1689592532488; forceNewPlan=false, retain=false 2023-07-17 11:15:33,829 DEBUG [Listener at localhost/40211] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@2eace996, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-17 11:15:33,830 INFO [Listener at localhost/40211] zookeeper.RecoverableZooKeeper(93): Process identifier=VerifyingRSGroupAdminClient connecting to ZooKeeper ensemble=127.0.0.1:60132 
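Illustrative sketch (not part of the captured output): the "set balanceSwitch=false" entry above is the master-side record of the test turning the balancer off before asserting on region placement. A hedged sketch of the corresponding client call (Admin.balancerSwitch in HBase 2.x), assuming an existing Connection conn:

import java.io.IOException;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;

public class BalancerSwitchSketch {
  // Turn the balancer off; synchronous=true waits for in-flight balance calls to drain.
  static boolean disableBalancer(Connection conn) throws IOException {
    try (Admin admin = conn.getAdmin()) {
      // Returns the previous balancer state.
      return admin.balancerSwitch(false, true);
    }
  }
}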
2023-07-17 11:15:33,832 DEBUG [Listener at localhost/40211-EventThread] zookeeper.ZKWatcher(600): VerifyingRSGroupAdminClient0x0, quorum=127.0.0.1:60132, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-17 11:15:33,835 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): VerifyingRSGroupAdminClient-0x10172fe901c000a connected 2023-07-17 11:15:33,840 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39741] master.HMaster$15(3014): Client=jenkins//172.31.14.131 creating {NAME => 'np1', hbase.namespace.quota.maxregions => '5', hbase.namespace.quota.maxtables => '2'} 2023-07-17 11:15:33,843 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39741] procedure2.ProcedureExecutor(1029): Stored pid=14, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=np1 2023-07-17 11:15:33,848 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39741] master.MasterRpcServices(1230): Checking to see if procedure is done pid=14 2023-07-17 11:15:33,858 DEBUG [Listener at localhost/40211-EventThread] zookeeper.ZKWatcher(600): master:39741-0x10172fe901c0000, quorum=127.0.0.1:60132, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-17 11:15:33,864 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=14, state=SUCCESS; CreateNamespaceProcedure, namespace=np1 in 21 msec 2023-07-17 11:15:33,950 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39741] master.MasterRpcServices(1230): Checking to see if procedure is done pid=14 2023-07-17 11:15:33,954 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39741] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'np1:table1', {NAME => 'fam1', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-17 11:15:33,956 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39741] procedure2.ProcedureExecutor(1029): Stored pid=15, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=np1:table1 2023-07-17 11:15:33,957 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=15, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=np1:table1 execute state=CREATE_TABLE_PRE_OPERATION 2023-07-17 11:15:33,957 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39741] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "np1" qualifier: "table1" procId is: 15 2023-07-17 11:15:33,958 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39741] master.MasterRpcServices(1230): Checking to see if procedure is done pid=15 2023-07-17 11:15:33,959 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-17 11:15:33,959 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-17 11:15:33,961 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=15, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=np1:table1 execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-17 11:15:33,963 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING 
hdfs://localhost:35063/user/jenkins/test-data/6a8d1f47-f60c-3ded-892b-406ce2aab504/.tmp/data/np1/table1/373419616907eadbff845900b1acec65 2023-07-17 11:15:33,963 DEBUG [HFileArchiver-3] backup.HFileArchiver(153): Directory hdfs://localhost:35063/user/jenkins/test-data/6a8d1f47-f60c-3ded-892b-406ce2aab504/.tmp/data/np1/table1/373419616907eadbff845900b1acec65 empty. 2023-07-17 11:15:33,964 DEBUG [HFileArchiver-3] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:35063/user/jenkins/test-data/6a8d1f47-f60c-3ded-892b-406ce2aab504/.tmp/data/np1/table1/373419616907eadbff845900b1acec65 2023-07-17 11:15:33,964 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived np1:table1 regions 2023-07-17 11:15:33,971 INFO [jenkins-hbase4:39741] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 2023-07-17 11:15:33,972 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=13 updating hbase:meta row=2d4a82ccce30e6b66fadfd02201a4e16, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,36481,1689592532488 2023-07-17 11:15:33,973 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:quota,,1689592533741.2d4a82ccce30e6b66fadfd02201a4e16.","families":{"info":[{"qualifier":"regioninfo","vlen":37,"tag":[],"timestamp":"1689592533972"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689592533972"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689592533972"}]},"ts":"1689592533972"} 2023-07-17 11:15:33,974 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=16, ppid=13, state=RUNNABLE; OpenRegionProcedure 2d4a82ccce30e6b66fadfd02201a4e16, server=jenkins-hbase4.apache.org,36481,1689592532488}] 2023-07-17 11:15:33,981 DEBUG [PEWorker-2] util.FSTableDescriptors(570): Wrote into hdfs://localhost:35063/user/jenkins/test-data/6a8d1f47-f60c-3ded-892b-406ce2aab504/.tmp/data/np1/table1/.tabledesc/.tableinfo.0000000001 2023-07-17 11:15:33,982 INFO [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(7675): creating {ENCODED => 373419616907eadbff845900b1acec65, NAME => 'np1:table1,,1689592533954.373419616907eadbff845900b1acec65.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='np1:table1', {NAME => 'fam1', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:35063/user/jenkins/test-data/6a8d1f47-f60c-3ded-892b-406ce2aab504/.tmp 2023-07-17 11:15:33,997 DEBUG [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(866): Instantiated np1:table1,,1689592533954.373419616907eadbff845900b1acec65.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-17 11:15:33,997 DEBUG [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1604): Closing 373419616907eadbff845900b1acec65, disabling compactions & flushes 2023-07-17 11:15:33,997 INFO [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1626): Closing region np1:table1,,1689592533954.373419616907eadbff845900b1acec65. 2023-07-17 11:15:33,997 DEBUG [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on np1:table1,,1689592533954.373419616907eadbff845900b1acec65. 
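Illustrative sketch (not part of the captured output): the entries above record the namespace np1 being created with hbase.namespace.quota.maxregions=5 and hbase.namespace.quota.maxtables=2, followed by the creation of np1:table1 with the single family fam1. A minimal sketch of the equivalent client calls; the configuration keys and names come straight from the log, while the connection handling around them is assumed.

import java.io.IOException;
import org.apache.hadoop.hbase.NamespaceDescriptor;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;

public class Np1SetupSketch {
  static void createNamespaceAndTable(Connection conn) throws IOException {
    try (Admin admin = conn.getAdmin()) {
      // Namespace with region/table quotas, as logged by HMaster above.
      admin.createNamespace(NamespaceDescriptor.create("np1")
          .addConfiguration("hbase.namespace.quota.maxregions", "5")
          .addConfiguration("hbase.namespace.quota.maxtables", "2")
          .build());
      // np1:table1 with a single column family 'fam1'.
      admin.createTable(TableDescriptorBuilder
          .newBuilder(TableName.valueOf("np1", "table1"))
          .setColumnFamily(ColumnFamilyDescriptorBuilder.of("fam1"))
          .build());
    }
  }
}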
2023-07-17 11:15:33,997 DEBUG [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1714): Acquired close lock on np1:table1,,1689592533954.373419616907eadbff845900b1acec65. after waiting 0 ms 2023-07-17 11:15:33,997 DEBUG [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1724): Updates disabled for region np1:table1,,1689592533954.373419616907eadbff845900b1acec65. 2023-07-17 11:15:33,997 INFO [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1838): Closed np1:table1,,1689592533954.373419616907eadbff845900b1acec65. 2023-07-17 11:15:33,997 DEBUG [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1558): Region close journal for 373419616907eadbff845900b1acec65: 2023-07-17 11:15:34,000 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=15, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=np1:table1 execute state=CREATE_TABLE_ADD_TO_META 2023-07-17 11:15:34,001 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"np1:table1,,1689592533954.373419616907eadbff845900b1acec65.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689592534001"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689592534001"}]},"ts":"1689592534001"} 2023-07-17 11:15:34,002 INFO [PEWorker-2] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-17 11:15:34,003 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=15, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=np1:table1 execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-17 11:15:34,003 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"np1:table1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689592534003"}]},"ts":"1689592534003"} 2023-07-17 11:15:34,004 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=np1:table1, state=ENABLING in hbase:meta 2023-07-17 11:15:34,006 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-17 11:15:34,006 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-17 11:15:34,006 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-17 11:15:34,006 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-17 11:15:34,007 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-17 11:15:34,007 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=17, ppid=15, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=np1:table1, region=373419616907eadbff845900b1acec65, ASSIGN}] 2023-07-17 11:15:34,007 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=17, ppid=15, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=np1:table1, region=373419616907eadbff845900b1acec65, ASSIGN 2023-07-17 11:15:34,008 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=17, ppid=15, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=np1:table1, region=373419616907eadbff845900b1acec65, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,36481,1689592532488; forceNewPlan=false, retain=false 2023-07-17 11:15:34,059 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39741] master.MasterRpcServices(1230): Checking to see if procedure is done pid=15 2023-07-17 11:15:34,131 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:quota,,1689592533741.2d4a82ccce30e6b66fadfd02201a4e16. 2023-07-17 11:15:34,132 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 2d4a82ccce30e6b66fadfd02201a4e16, NAME => 'hbase:quota,,1689592533741.2d4a82ccce30e6b66fadfd02201a4e16.', STARTKEY => '', ENDKEY => ''} 2023-07-17 11:15:34,132 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table quota 2d4a82ccce30e6b66fadfd02201a4e16 2023-07-17 11:15:34,132 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:quota,,1689592533741.2d4a82ccce30e6b66fadfd02201a4e16.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-17 11:15:34,132 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 2d4a82ccce30e6b66fadfd02201a4e16 2023-07-17 11:15:34,132 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 2d4a82ccce30e6b66fadfd02201a4e16 2023-07-17 11:15:34,133 INFO [StoreOpener-2d4a82ccce30e6b66fadfd02201a4e16-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family q of region 2d4a82ccce30e6b66fadfd02201a4e16 2023-07-17 11:15:34,135 DEBUG [StoreOpener-2d4a82ccce30e6b66fadfd02201a4e16-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:35063/user/jenkins/test-data/6a8d1f47-f60c-3ded-892b-406ce2aab504/data/hbase/quota/2d4a82ccce30e6b66fadfd02201a4e16/q 2023-07-17 11:15:34,135 DEBUG [StoreOpener-2d4a82ccce30e6b66fadfd02201a4e16-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:35063/user/jenkins/test-data/6a8d1f47-f60c-3ded-892b-406ce2aab504/data/hbase/quota/2d4a82ccce30e6b66fadfd02201a4e16/q 2023-07-17 11:15:34,135 INFO [StoreOpener-2d4a82ccce30e6b66fadfd02201a4e16-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 2d4a82ccce30e6b66fadfd02201a4e16 columnFamilyName q 2023-07-17 11:15:34,136 INFO [StoreOpener-2d4a82ccce30e6b66fadfd02201a4e16-1] regionserver.HStore(310): Store=2d4a82ccce30e6b66fadfd02201a4e16/q, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-17 11:15:34,136 INFO [StoreOpener-2d4a82ccce30e6b66fadfd02201a4e16-1] 
regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family u of region 2d4a82ccce30e6b66fadfd02201a4e16 2023-07-17 11:15:34,137 DEBUG [StoreOpener-2d4a82ccce30e6b66fadfd02201a4e16-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:35063/user/jenkins/test-data/6a8d1f47-f60c-3ded-892b-406ce2aab504/data/hbase/quota/2d4a82ccce30e6b66fadfd02201a4e16/u 2023-07-17 11:15:34,137 DEBUG [StoreOpener-2d4a82ccce30e6b66fadfd02201a4e16-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:35063/user/jenkins/test-data/6a8d1f47-f60c-3ded-892b-406ce2aab504/data/hbase/quota/2d4a82ccce30e6b66fadfd02201a4e16/u 2023-07-17 11:15:34,137 INFO [StoreOpener-2d4a82ccce30e6b66fadfd02201a4e16-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 2d4a82ccce30e6b66fadfd02201a4e16 columnFamilyName u 2023-07-17 11:15:34,138 INFO [StoreOpener-2d4a82ccce30e6b66fadfd02201a4e16-1] regionserver.HStore(310): Store=2d4a82ccce30e6b66fadfd02201a4e16/u, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-17 11:15:34,139 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:35063/user/jenkins/test-data/6a8d1f47-f60c-3ded-892b-406ce2aab504/data/hbase/quota/2d4a82ccce30e6b66fadfd02201a4e16 2023-07-17 11:15:34,139 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:35063/user/jenkins/test-data/6a8d1f47-f60c-3ded-892b-406ce2aab504/data/hbase/quota/2d4a82ccce30e6b66fadfd02201a4e16 2023-07-17 11:15:34,141 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:quota descriptor;using region.getMemStoreFlushHeapSize/# of families (64.0 M)) instead. 
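Illustrative sketch (not part of the captured output): the FlushLargeStoresPolicy entry notes that hbase:quota does not set hbase.hregion.percolumnfamilyflush.size.lower.bound in its descriptor, so the per-family flush lower bound falls back to the memstore flush size divided by the number of families. For a user table that bound can be pinned in the table descriptor; a hedged sketch follows, where the table name and value are illustrative, not from the log.

import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.TableDescriptor;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;

public class PerFamilyFlushBoundSketch {
  // Pin the per-column-family flush lower bound to 16 MB in the table descriptor.
  static TableDescriptor build() {
    return TableDescriptorBuilder
        .newBuilder(TableName.valueOf("example", "multi_cf")) // hypothetical table
        .setColumnFamily(ColumnFamilyDescriptorBuilder.of("a"))
        .setColumnFamily(ColumnFamilyDescriptorBuilder.of("b"))
        .setValue("hbase.hregion.percolumnfamilyflush.size.lower.bound",
            String.valueOf(16L * 1024 * 1024))
        .build();
  }
}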
2023-07-17 11:15:34,142 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 2d4a82ccce30e6b66fadfd02201a4e16 2023-07-17 11:15:34,144 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:35063/user/jenkins/test-data/6a8d1f47-f60c-3ded-892b-406ce2aab504/data/hbase/quota/2d4a82ccce30e6b66fadfd02201a4e16/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-17 11:15:34,144 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 2d4a82ccce30e6b66fadfd02201a4e16; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10719577920, jitterRate=-0.0016615092754364014}}}, FlushLargeStoresPolicy{flushSizeLowerBound=67108864} 2023-07-17 11:15:34,144 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 2d4a82ccce30e6b66fadfd02201a4e16: 2023-07-17 11:15:34,145 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:quota,,1689592533741.2d4a82ccce30e6b66fadfd02201a4e16., pid=16, masterSystemTime=1689592534128 2023-07-17 11:15:34,146 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:quota,,1689592533741.2d4a82ccce30e6b66fadfd02201a4e16. 2023-07-17 11:15:34,147 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:quota,,1689592533741.2d4a82ccce30e6b66fadfd02201a4e16. 2023-07-17 11:15:34,147 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=13 updating hbase:meta row=2d4a82ccce30e6b66fadfd02201a4e16, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,36481,1689592532488 2023-07-17 11:15:34,147 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:quota,,1689592533741.2d4a82ccce30e6b66fadfd02201a4e16.","families":{"info":[{"qualifier":"regioninfo","vlen":37,"tag":[],"timestamp":"1689592534147"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689592534147"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689592534147"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689592534147"}]},"ts":"1689592534147"} 2023-07-17 11:15:34,149 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=16, resume processing ppid=13 2023-07-17 11:15:34,150 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=16, ppid=13, state=SUCCESS; OpenRegionProcedure 2d4a82ccce30e6b66fadfd02201a4e16, server=jenkins-hbase4.apache.org,36481,1689592532488 in 174 msec 2023-07-17 11:15:34,151 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=13, resume processing ppid=12 2023-07-17 11:15:34,151 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=13, ppid=12, state=SUCCESS; TransitRegionStateProcedure table=hbase:quota, region=2d4a82ccce30e6b66fadfd02201a4e16, ASSIGN in 330 msec 2023-07-17 11:15:34,152 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-17 11:15:34,152 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put 
{"totalColumns":1,"row":"hbase:quota","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689592534152"}]},"ts":"1689592534152"} 2023-07-17 11:15:34,153 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=hbase:quota, state=ENABLED in hbase:meta 2023-07-17 11:15:34,155 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_POST_OPERATION 2023-07-17 11:15:34,156 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=12, state=SUCCESS; CreateTableProcedure table=hbase:quota in 414 msec 2023-07-17 11:15:34,158 INFO [jenkins-hbase4:39741] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 2023-07-17 11:15:34,160 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=17 updating hbase:meta row=373419616907eadbff845900b1acec65, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,36481,1689592532488 2023-07-17 11:15:34,160 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"np1:table1,,1689592533954.373419616907eadbff845900b1acec65.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689592534160"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689592534160"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689592534160"}]},"ts":"1689592534160"} 2023-07-17 11:15:34,161 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=18, ppid=17, state=RUNNABLE; OpenRegionProcedure 373419616907eadbff845900b1acec65, server=jenkins-hbase4.apache.org,36481,1689592532488}] 2023-07-17 11:15:34,260 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39741] master.MasterRpcServices(1230): Checking to see if procedure is done pid=15 2023-07-17 11:15:34,318 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open np1:table1,,1689592533954.373419616907eadbff845900b1acec65. 
2023-07-17 11:15:34,318 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 373419616907eadbff845900b1acec65, NAME => 'np1:table1,,1689592533954.373419616907eadbff845900b1acec65.', STARTKEY => '', ENDKEY => ''} 2023-07-17 11:15:34,318 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table table1 373419616907eadbff845900b1acec65 2023-07-17 11:15:34,318 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated np1:table1,,1689592533954.373419616907eadbff845900b1acec65.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-17 11:15:34,318 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 373419616907eadbff845900b1acec65 2023-07-17 11:15:34,318 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 373419616907eadbff845900b1acec65 2023-07-17 11:15:34,319 INFO [StoreOpener-373419616907eadbff845900b1acec65-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family fam1 of region 373419616907eadbff845900b1acec65 2023-07-17 11:15:34,321 DEBUG [StoreOpener-373419616907eadbff845900b1acec65-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:35063/user/jenkins/test-data/6a8d1f47-f60c-3ded-892b-406ce2aab504/data/np1/table1/373419616907eadbff845900b1acec65/fam1 2023-07-17 11:15:34,321 DEBUG [StoreOpener-373419616907eadbff845900b1acec65-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:35063/user/jenkins/test-data/6a8d1f47-f60c-3ded-892b-406ce2aab504/data/np1/table1/373419616907eadbff845900b1acec65/fam1 2023-07-17 11:15:34,321 INFO [StoreOpener-373419616907eadbff845900b1acec65-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 373419616907eadbff845900b1acec65 columnFamilyName fam1 2023-07-17 11:15:34,322 INFO [StoreOpener-373419616907eadbff845900b1acec65-1] regionserver.HStore(310): Store=373419616907eadbff845900b1acec65/fam1, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-17 11:15:34,323 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:35063/user/jenkins/test-data/6a8d1f47-f60c-3ded-892b-406ce2aab504/data/np1/table1/373419616907eadbff845900b1acec65 2023-07-17 11:15:34,323 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under 
hdfs://localhost:35063/user/jenkins/test-data/6a8d1f47-f60c-3ded-892b-406ce2aab504/data/np1/table1/373419616907eadbff845900b1acec65 2023-07-17 11:15:34,326 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 373419616907eadbff845900b1acec65 2023-07-17 11:15:34,328 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:35063/user/jenkins/test-data/6a8d1f47-f60c-3ded-892b-406ce2aab504/data/np1/table1/373419616907eadbff845900b1acec65/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-17 11:15:34,328 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 373419616907eadbff845900b1acec65; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10926632640, jitterRate=0.0176219642162323}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-17 11:15:34,328 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 373419616907eadbff845900b1acec65: 2023-07-17 11:15:34,329 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for np1:table1,,1689592533954.373419616907eadbff845900b1acec65., pid=18, masterSystemTime=1689592534314 2023-07-17 11:15:34,330 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for np1:table1,,1689592533954.373419616907eadbff845900b1acec65. 2023-07-17 11:15:34,330 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened np1:table1,,1689592533954.373419616907eadbff845900b1acec65. 2023-07-17 11:15:34,330 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=17 updating hbase:meta row=373419616907eadbff845900b1acec65, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,36481,1689592532488 2023-07-17 11:15:34,331 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"np1:table1,,1689592533954.373419616907eadbff845900b1acec65.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689592534330"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689592534330"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689592534330"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689592534330"}]},"ts":"1689592534330"} 2023-07-17 11:15:34,334 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=18, resume processing ppid=17 2023-07-17 11:15:34,334 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=18, ppid=17, state=SUCCESS; OpenRegionProcedure 373419616907eadbff845900b1acec65, server=jenkins-hbase4.apache.org,36481,1689592532488 in 171 msec 2023-07-17 11:15:34,337 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=17, resume processing ppid=15 2023-07-17 11:15:34,337 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=17, ppid=15, state=SUCCESS; TransitRegionStateProcedure table=np1:table1, region=373419616907eadbff845900b1acec65, ASSIGN in 327 msec 2023-07-17 11:15:34,337 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=15, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=np1:table1 execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-17 11:15:34,338 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): 
Put {"totalColumns":1,"row":"np1:table1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689592534337"}]},"ts":"1689592534337"} 2023-07-17 11:15:34,339 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=np1:table1, state=ENABLED in hbase:meta 2023-07-17 11:15:34,341 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=15, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=np1:table1 execute state=CREATE_TABLE_POST_OPERATION 2023-07-17 11:15:34,342 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=15, state=SUCCESS; CreateTableProcedure table=np1:table1 in 387 msec 2023-07-17 11:15:34,566 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39741] master.MasterRpcServices(1230): Checking to see if procedure is done pid=15 2023-07-17 11:15:34,567 INFO [Listener at localhost/40211] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: np1:table1, procId: 15 completed 2023-07-17 11:15:34,568 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39741] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'np1:table2', {NAME => 'fam1', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-17 11:15:34,569 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39741] procedure2.ProcedureExecutor(1029): Stored pid=19, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=np1:table2 2023-07-17 11:15:34,571 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=19, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=np1:table2 execute state=CREATE_TABLE_PRE_OPERATION 2023-07-17 11:15:34,571 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39741] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "np1" qualifier: "table2" procId is: 19 2023-07-17 11:15:34,572 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39741] master.MasterRpcServices(1230): Checking to see if procedure is done pid=19 2023-07-17 11:15:34,594 INFO [PEWorker-2] procedure2.ProcedureExecutor(1528): Rolled back pid=19, state=ROLLEDBACK, exception=org.apache.hadoop.hbase.quotas.QuotaExceededException via master-create-table:org.apache.hadoop.hbase.quotas.QuotaExceededException: The table np1:table2 is not allowed to have 6 regions. The total number of regions permitted is only 5, while current region count is 1. This may be transient, please retry later if there are any ongoing split operations in the namespace.; CreateTableProcedure table=np1:table2 exec-time=25 msec 2023-07-17 11:15:34,673 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39741] master.MasterRpcServices(1230): Checking to see if procedure is done pid=19 2023-07-17 11:15:34,676 INFO [Listener at localhost/40211] client.HBaseAdmin$TableFuture(3548): Operation: CREATE, Table Name: np1:table2, procId: 19 failed with The table np1:table2 is not allowed to have 6 regions. The total number of regions permitted is only 5, while current region count is 1. This may be transient, please retry later if there are any ongoing split operations in the namespace. 
2023-07-17 11:15:34,677 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39741] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-17 11:15:34,678 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39741] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-17 11:15:34,679 INFO [Listener at localhost/40211] client.HBaseAdmin$15(890): Started disable of np1:table1 2023-07-17 11:15:34,679 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39741] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable np1:table1 2023-07-17 11:15:34,680 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39741] procedure2.ProcedureExecutor(1029): Stored pid=20, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=np1:table1 2023-07-17 11:15:34,682 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39741] master.MasterRpcServices(1230): Checking to see if procedure is done pid=20 2023-07-17 11:15:34,683 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"np1:table1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689592534683"}]},"ts":"1689592534683"} 2023-07-17 11:15:34,684 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=np1:table1, state=DISABLING in hbase:meta 2023-07-17 11:15:34,685 INFO [PEWorker-4] procedure.DisableTableProcedure(293): Set np1:table1 to state=DISABLING 2023-07-17 11:15:34,686 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=21, ppid=20, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=np1:table1, region=373419616907eadbff845900b1acec65, UNASSIGN}] 2023-07-17 11:15:34,687 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=21, ppid=20, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=np1:table1, region=373419616907eadbff845900b1acec65, UNASSIGN 2023-07-17 11:15:34,687 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=21 updating hbase:meta row=373419616907eadbff845900b1acec65, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,36481,1689592532488 2023-07-17 11:15:34,687 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"np1:table1,,1689592533954.373419616907eadbff845900b1acec65.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689592534687"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689592534687"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689592534687"}]},"ts":"1689592534687"} 2023-07-17 11:15:34,689 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=22, ppid=21, state=RUNNABLE; CloseRegionProcedure 373419616907eadbff845900b1acec65, server=jenkins-hbase4.apache.org,36481,1689592532488}] 2023-07-17 11:15:34,767 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-datanode.properties,hadoop-metrics2.properties 2023-07-17 11:15:34,783 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39741] master.MasterRpcServices(1230): Checking to see if procedure is done pid=20 2023-07-17 11:15:34,841 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 373419616907eadbff845900b1acec65 2023-07-17 11:15:34,842 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] 
regionserver.HRegion(1604): Closing 373419616907eadbff845900b1acec65, disabling compactions & flushes 2023-07-17 11:15:34,842 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region np1:table1,,1689592533954.373419616907eadbff845900b1acec65. 2023-07-17 11:15:34,842 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on np1:table1,,1689592533954.373419616907eadbff845900b1acec65. 2023-07-17 11:15:34,842 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on np1:table1,,1689592533954.373419616907eadbff845900b1acec65. after waiting 0 ms 2023-07-17 11:15:34,842 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region np1:table1,,1689592533954.373419616907eadbff845900b1acec65. 2023-07-17 11:15:34,846 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:35063/user/jenkins/test-data/6a8d1f47-f60c-3ded-892b-406ce2aab504/data/np1/table1/373419616907eadbff845900b1acec65/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-17 11:15:34,847 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed np1:table1,,1689592533954.373419616907eadbff845900b1acec65. 2023-07-17 11:15:34,847 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 373419616907eadbff845900b1acec65: 2023-07-17 11:15:34,848 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 373419616907eadbff845900b1acec65 2023-07-17 11:15:34,849 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=21 updating hbase:meta row=373419616907eadbff845900b1acec65, regionState=CLOSED 2023-07-17 11:15:34,849 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"np1:table1,,1689592533954.373419616907eadbff845900b1acec65.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689592534849"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689592534849"}]},"ts":"1689592534849"} 2023-07-17 11:15:34,851 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=22, resume processing ppid=21 2023-07-17 11:15:34,852 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=22, ppid=21, state=SUCCESS; CloseRegionProcedure 373419616907eadbff845900b1acec65, server=jenkins-hbase4.apache.org,36481,1689592532488 in 161 msec 2023-07-17 11:15:34,853 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=21, resume processing ppid=20 2023-07-17 11:15:34,853 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=21, ppid=20, state=SUCCESS; TransitRegionStateProcedure table=np1:table1, region=373419616907eadbff845900b1acec65, UNASSIGN in 165 msec 2023-07-17 11:15:34,853 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"np1:table1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689592534853"}]},"ts":"1689592534853"} 2023-07-17 11:15:34,854 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=np1:table1, state=DISABLED in hbase:meta 2023-07-17 11:15:34,857 INFO [PEWorker-4] procedure.DisableTableProcedure(305): Set np1:table1 to state=DISABLED 2023-07-17 11:15:34,859 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=20, state=SUCCESS; DisableTableProcedure 
table=np1:table1 in 179 msec 2023-07-17 11:15:34,985 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39741] master.MasterRpcServices(1230): Checking to see if procedure is done pid=20 2023-07-17 11:15:34,985 INFO [Listener at localhost/40211] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: np1:table1, procId: 20 completed 2023-07-17 11:15:34,986 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39741] master.HMaster$5(2228): Client=jenkins//172.31.14.131 delete np1:table1 2023-07-17 11:15:34,987 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39741] procedure2.ProcedureExecutor(1029): Stored pid=23, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=np1:table1 2023-07-17 11:15:34,989 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=23, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=np1:table1 2023-07-17 11:15:34,989 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39741] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'np1:table1' from rsgroup 'default' 2023-07-17 11:15:34,989 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=23, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=np1:table1 2023-07-17 11:15:34,991 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39741] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-17 11:15:34,991 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39741] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-17 11:15:34,993 DEBUG [HFileArchiver-6] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:35063/user/jenkins/test-data/6a8d1f47-f60c-3ded-892b-406ce2aab504/.tmp/data/np1/table1/373419616907eadbff845900b1acec65 2023-07-17 11:15:34,995 DEBUG [HFileArchiver-6] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:35063/user/jenkins/test-data/6a8d1f47-f60c-3ded-892b-406ce2aab504/.tmp/data/np1/table1/373419616907eadbff845900b1acec65/fam1, FileablePath, hdfs://localhost:35063/user/jenkins/test-data/6a8d1f47-f60c-3ded-892b-406ce2aab504/.tmp/data/np1/table1/373419616907eadbff845900b1acec65/recovered.edits] 2023-07-17 11:15:34,995 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39741] master.MasterRpcServices(1230): Checking to see if procedure is done pid=23 2023-07-17 11:15:35,001 DEBUG [HFileArchiver-6] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:35063/user/jenkins/test-data/6a8d1f47-f60c-3ded-892b-406ce2aab504/.tmp/data/np1/table1/373419616907eadbff845900b1acec65/recovered.edits/4.seqid to hdfs://localhost:35063/user/jenkins/test-data/6a8d1f47-f60c-3ded-892b-406ce2aab504/archive/data/np1/table1/373419616907eadbff845900b1acec65/recovered.edits/4.seqid 2023-07-17 11:15:35,002 DEBUG [HFileArchiver-6] backup.HFileArchiver(596): Deleted hdfs://localhost:35063/user/jenkins/test-data/6a8d1f47-f60c-3ded-892b-406ce2aab504/.tmp/data/np1/table1/373419616907eadbff845900b1acec65 2023-07-17 11:15:35,002 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(328): Archived np1:table1 regions 2023-07-17 11:15:35,004 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=23, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=np1:table1 2023-07-17 11:15:35,006 WARN [PEWorker-1] procedure.DeleteTableProcedure(384): Deleting some vestigial 1 rows of 
np1:table1 from hbase:meta 2023-07-17 11:15:35,007 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(421): Removing 'np1:table1' descriptor. 2023-07-17 11:15:35,008 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=23, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=np1:table1 2023-07-17 11:15:35,008 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(411): Removing 'np1:table1' from region states. 2023-07-17 11:15:35,008 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"np1:table1,,1689592533954.373419616907eadbff845900b1acec65.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689592535008"}]},"ts":"9223372036854775807"} 2023-07-17 11:15:35,009 INFO [PEWorker-1] hbase.MetaTableAccessor(1788): Deleted 1 regions from META 2023-07-17 11:15:35,010 DEBUG [PEWorker-1] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => 373419616907eadbff845900b1acec65, NAME => 'np1:table1,,1689592533954.373419616907eadbff845900b1acec65.', STARTKEY => '', ENDKEY => ''}] 2023-07-17 11:15:35,010 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(415): Marking 'np1:table1' as deleted. 2023-07-17 11:15:35,010 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"np1:table1","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689592535010"}]},"ts":"9223372036854775807"} 2023-07-17 11:15:35,011 INFO [PEWorker-1] hbase.MetaTableAccessor(1658): Deleted table np1:table1 state from META 2023-07-17 11:15:35,014 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(130): Finished pid=23, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=np1:table1 2023-07-17 11:15:35,015 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=23, state=SUCCESS; DeleteTableProcedure table=np1:table1 in 28 msec 2023-07-17 11:15:35,096 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39741] master.MasterRpcServices(1230): Checking to see if procedure is done pid=23 2023-07-17 11:15:35,096 INFO [Listener at localhost/40211] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: np1:table1, procId: 23 completed 2023-07-17 11:15:35,101 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39741] master.HMaster$17(3086): Client=jenkins//172.31.14.131 delete np1 2023-07-17 11:15:35,108 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39741] procedure2.ProcedureExecutor(1029): Stored pid=24, state=RUNNABLE:DELETE_NAMESPACE_PREPARE; DeleteNamespaceProcedure, namespace=np1 2023-07-17 11:15:35,110 INFO [PEWorker-3] procedure.DeleteNamespaceProcedure(73): pid=24, state=RUNNABLE:DELETE_NAMESPACE_PREPARE, locked=true; DeleteNamespaceProcedure, namespace=np1 2023-07-17 11:15:35,112 INFO [PEWorker-3] procedure.DeleteNamespaceProcedure(73): pid=24, state=RUNNABLE:DELETE_NAMESPACE_DELETE_FROM_NS_TABLE, locked=true; DeleteNamespaceProcedure, namespace=np1 2023-07-17 11:15:35,115 INFO [PEWorker-3] procedure.DeleteNamespaceProcedure(73): pid=24, state=RUNNABLE:DELETE_NAMESPACE_REMOVE_FROM_ZK, locked=true; DeleteNamespaceProcedure, namespace=np1 2023-07-17 11:15:35,115 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39741] master.MasterRpcServices(1230): Checking to see if procedure is done pid=24 2023-07-17 11:15:35,116 DEBUG [Listener at localhost/40211-EventThread] zookeeper.ZKWatcher(600): master:39741-0x10172fe901c0000, quorum=127.0.0.1:60132, baseZNode=/hbase Received ZooKeeper Event, 
type=NodeDeleted, state=SyncConnected, path=/hbase/namespace/np1 2023-07-17 11:15:35,116 DEBUG [Listener at localhost/40211-EventThread] zookeeper.ZKWatcher(600): master:39741-0x10172fe901c0000, quorum=127.0.0.1:60132, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-17 11:15:35,116 INFO [PEWorker-3] procedure.DeleteNamespaceProcedure(73): pid=24, state=RUNNABLE:DELETE_NAMESPACE_DELETE_DIRECTORIES, locked=true; DeleteNamespaceProcedure, namespace=np1 2023-07-17 11:15:35,118 INFO [PEWorker-3] procedure.DeleteNamespaceProcedure(73): pid=24, state=RUNNABLE:DELETE_NAMESPACE_REMOVE_NAMESPACE_QUOTA, locked=true; DeleteNamespaceProcedure, namespace=np1 2023-07-17 11:15:35,119 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=24, state=SUCCESS; DeleteNamespaceProcedure, namespace=np1 in 16 msec 2023-07-17 11:15:35,216 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39741] master.MasterRpcServices(1230): Checking to see if procedure is done pid=24 2023-07-17 11:15:35,216 INFO [Listener at localhost/40211] hbase.HBaseTestingUtility(1286): Shutting down minicluster 2023-07-17 11:15:35,216 INFO [Listener at localhost/40211] client.ConnectionImplementation(1979): Closing master protocol: MasterService 2023-07-17 11:15:35,217 DEBUG [Listener at localhost/40211] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x18aff7c6 to 127.0.0.1:60132 2023-07-17 11:15:35,217 DEBUG [Listener at localhost/40211] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-17 11:15:35,217 DEBUG [Listener at localhost/40211] util.JVMClusterUtil(237): Shutting down HBase Cluster 2023-07-17 11:15:35,217 DEBUG [Listener at localhost/40211] util.JVMClusterUtil(257): Found active master hash=1834717669, stopped=false 2023-07-17 11:15:35,217 DEBUG [Listener at localhost/40211] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint 2023-07-17 11:15:35,217 DEBUG [Listener at localhost/40211] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver 2023-07-17 11:15:35,217 DEBUG [Listener at localhost/40211] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.quotas.MasterQuotasObserver 2023-07-17 11:15:35,217 INFO [Listener at localhost/40211] master.ServerManager(901): Cluster shutdown requested of master=jenkins-hbase4.apache.org,39741,1689592532293 2023-07-17 11:15:35,220 DEBUG [Listener at localhost/40211-EventThread] zookeeper.ZKWatcher(600): regionserver:36481-0x10172fe901c0003, quorum=127.0.0.1:60132, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-17 11:15:35,220 DEBUG [Listener at localhost/40211-EventThread] zookeeper.ZKWatcher(600): regionserver:35593-0x10172fe901c0001, quorum=127.0.0.1:60132, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-17 11:15:35,220 INFO [Listener at localhost/40211] procedure2.ProcedureExecutor(629): Stopping 2023-07-17 11:15:35,220 DEBUG [Listener at localhost/40211-EventThread] zookeeper.ZKWatcher(600): regionserver:44865-0x10172fe901c0002, quorum=127.0.0.1:60132, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-17 11:15:35,220 DEBUG [Listener at localhost/40211-EventThread] zookeeper.ZKWatcher(600): master:39741-0x10172fe901c0000, quorum=127.0.0.1:60132, baseZNode=/hbase 
Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-17 11:15:35,220 DEBUG [Listener at localhost/40211-EventThread] zookeeper.ZKWatcher(600): master:39741-0x10172fe901c0000, quorum=127.0.0.1:60132, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-17 11:15:35,220 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:36481-0x10172fe901c0003, quorum=127.0.0.1:60132, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-17 11:15:35,222 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:35593-0x10172fe901c0001, quorum=127.0.0.1:60132, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-17 11:15:35,222 DEBUG [Listener at localhost/40211] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x71d6e237 to 127.0.0.1:60132 2023-07-17 11:15:35,222 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:39741-0x10172fe901c0000, quorum=127.0.0.1:60132, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-17 11:15:35,222 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:44865-0x10172fe901c0002, quorum=127.0.0.1:60132, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-17 11:15:35,222 DEBUG [Listener at localhost/40211] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-17 11:15:35,223 INFO [Listener at localhost/40211] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,35593,1689592532384' ***** 2023-07-17 11:15:35,223 INFO [Listener at localhost/40211] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-17 11:15:35,223 INFO [Listener at localhost/40211] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,44865,1689592532436' ***** 2023-07-17 11:15:35,223 INFO [Listener at localhost/40211] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-17 11:15:35,223 INFO [RS:0;jenkins-hbase4:35593] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-17 11:15:35,223 INFO [Listener at localhost/40211] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,36481,1689592532488' ***** 2023-07-17 11:15:35,223 INFO [Listener at localhost/40211] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-17 11:15:35,223 INFO [RS:1;jenkins-hbase4:44865] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-17 11:15:35,224 INFO [RS:2;jenkins-hbase4:36481] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-17 11:15:35,235 INFO [RS:0;jenkins-hbase4:35593] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@564ec33e{regionserver,/,null,STOPPED}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver} 2023-07-17 11:15:35,236 INFO [RS:1;jenkins-hbase4:44865] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@21829d82{regionserver,/,null,STOPPED}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver} 2023-07-17 11:15:35,236 INFO [RS:2;jenkins-hbase4:36481] handler.ContextHandler(1159): Stopped 
o.a.h.t.o.e.j.w.WebAppContext@52ea318a{regionserver,/,null,STOPPED}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver} 2023-07-17 11:15:35,236 INFO [RS:0;jenkins-hbase4:35593] server.AbstractConnector(383): Stopped ServerConnector@636fde24{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-17 11:15:35,236 INFO [RS:1;jenkins-hbase4:44865] server.AbstractConnector(383): Stopped ServerConnector@2433db5c{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-17 11:15:35,236 INFO [RS:2;jenkins-hbase4:36481] server.AbstractConnector(383): Stopped ServerConnector@1fc4706d{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-17 11:15:35,236 INFO [RS:0;jenkins-hbase4:35593] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-17 11:15:35,236 INFO [RS:2;jenkins-hbase4:36481] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-17 11:15:35,236 INFO [RS:1;jenkins-hbase4:44865] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-17 11:15:35,237 INFO [RS:0;jenkins-hbase4:35593] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@4e470c74{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,STOPPED} 2023-07-17 11:15:35,239 INFO [RS:2;jenkins-hbase4:36481] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@535e2d7a{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,STOPPED} 2023-07-17 11:15:35,239 INFO [RS:1;jenkins-hbase4:44865] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@384b9383{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,STOPPED} 2023-07-17 11:15:35,239 INFO [RS:2;jenkins-hbase4:36481] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@507250bb{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9ec773e1-25e3-37c1-332c-cd1fc3805ad2/hadoop.log.dir/,STOPPED} 2023-07-17 11:15:35,239 INFO [RS:1;jenkins-hbase4:44865] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@e86671b{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9ec773e1-25e3-37c1-332c-cd1fc3805ad2/hadoop.log.dir/,STOPPED} 2023-07-17 11:15:35,239 INFO [RS:0;jenkins-hbase4:35593] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@28d28178{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9ec773e1-25e3-37c1-332c-cd1fc3805ad2/hadoop.log.dir/,STOPPED} 2023-07-17 11:15:35,240 INFO [RS:0;jenkins-hbase4:35593] regionserver.HeapMemoryManager(220): Stopping 2023-07-17 11:15:35,240 INFO [RS:2;jenkins-hbase4:36481] regionserver.HeapMemoryManager(220): Stopping 2023-07-17 11:15:35,240 INFO [RS:0;jenkins-hbase4:35593] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-17 11:15:35,240 INFO [RS:2;jenkins-hbase4:36481] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-17 11:15:35,240 INFO [RS:0;jenkins-hbase4:35593] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 
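The records from 11:15:34,679 through 11:15:35,119 above trace the cleanup path: np1:table1 is disabled (pid=20-22), deleted (pid=23), and the np1 namespace itself is removed (pid=24); the records immediately above then begin the minicluster shutdown. The equivalent client-side Admin calls for that cleanup, sketched here as a hypothetical helper rather than the test's own code, would be:

import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;

// Hypothetical helper: drops a namespace and everything in it, mirroring the
// disable -> delete table -> delete namespace order recorded in the log.
public final class NamespaceCleanupSketch {
  private NamespaceCleanupSketch() {}

  public static void dropNamespace(Admin admin, String namespace) throws java.io.IOException {
    // Each table must be disabled before it can be deleted
    // (DisableTableProcedure then DeleteTableProcedure, as above).
    for (TableName table : admin.listTableNamesByNamespace(namespace)) {
      if (admin.isTableEnabled(table)) {
        admin.disableTable(table);
      }
      admin.deleteTable(table);
    }
    // Deleting a non-empty namespace is refused, so this comes last
    // (DeleteNamespaceProcedure, pid=24 in the log).
    admin.deleteNamespace(namespace);
  }
}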
2023-07-17 11:15:35,240 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-17 11:15:35,240 INFO [RS:0;jenkins-hbase4:35593] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,35593,1689592532384 2023-07-17 11:15:35,240 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-17 11:15:35,241 DEBUG [RS:0;jenkins-hbase4:35593] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x39cce16e to 127.0.0.1:60132 2023-07-17 11:15:35,241 DEBUG [RS:0;jenkins-hbase4:35593] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-17 11:15:35,240 INFO [RS:2;jenkins-hbase4:36481] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-17 11:15:35,243 INFO [RS:2;jenkins-hbase4:36481] regionserver.HRegionServer(3305): Received CLOSE for 2d4a82ccce30e6b66fadfd02201a4e16 2023-07-17 11:15:35,243 INFO [RS:0;jenkins-hbase4:35593] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,35593,1689592532384; all regions closed. 2023-07-17 11:15:35,243 INFO [RS:2;jenkins-hbase4:36481] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,36481,1689592532488 2023-07-17 11:15:35,243 DEBUG [RS:0;jenkins-hbase4:35593] quotas.QuotaCache(100): Stopping QuotaRefresherChore chore. 2023-07-17 11:15:35,243 INFO [RS:1;jenkins-hbase4:44865] regionserver.HeapMemoryManager(220): Stopping 2023-07-17 11:15:35,243 DEBUG [RS:2;jenkins-hbase4:36481] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x76c3fb05 to 127.0.0.1:60132 2023-07-17 11:15:35,243 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-17 11:15:35,244 INFO [RS:1;jenkins-hbase4:44865] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-17 11:15:35,243 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 2d4a82ccce30e6b66fadfd02201a4e16, disabling compactions & flushes 2023-07-17 11:15:35,244 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:quota,,1689592533741.2d4a82ccce30e6b66fadfd02201a4e16. 2023-07-17 11:15:35,244 INFO [RS:1;jenkins-hbase4:44865] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-17 11:15:35,244 DEBUG [RS:2;jenkins-hbase4:36481] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-17 11:15:35,245 INFO [RS:1;jenkins-hbase4:44865] regionserver.HRegionServer(3305): Received CLOSE for d77a0022a9c08cb516405b45516f40b4 2023-07-17 11:15:35,245 INFO [RS:2;jenkins-hbase4:36481] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-17 11:15:35,244 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:quota,,1689592533741.2d4a82ccce30e6b66fadfd02201a4e16. 2023-07-17 11:15:35,245 INFO [RS:2;jenkins-hbase4:36481] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-17 11:15:35,245 INFO [RS:2;jenkins-hbase4:36481] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-17 11:15:35,245 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:quota,,1689592533741.2d4a82ccce30e6b66fadfd02201a4e16. 
after waiting 0 ms 2023-07-17 11:15:35,245 INFO [RS:2;jenkins-hbase4:36481] regionserver.HRegionServer(3305): Received CLOSE for 1588230740 2023-07-17 11:15:35,245 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:quota,,1689592533741.2d4a82ccce30e6b66fadfd02201a4e16. 2023-07-17 11:15:35,251 INFO [RS:1;jenkins-hbase4:44865] regionserver.HRegionServer(3305): Received CLOSE for a10b405e356fcaddfcbc67928a39fb55 2023-07-17 11:15:35,251 INFO [RS:1;jenkins-hbase4:44865] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,44865,1689592532436 2023-07-17 11:15:35,251 DEBUG [RS:1;jenkins-hbase4:44865] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x491dfe2d to 127.0.0.1:60132 2023-07-17 11:15:35,251 INFO [RS:2;jenkins-hbase4:36481] regionserver.HRegionServer(1474): Waiting on 2 regions to close 2023-07-17 11:15:35,251 DEBUG [RS:1;jenkins-hbase4:44865] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-17 11:15:35,251 DEBUG [RS:2;jenkins-hbase4:36481] regionserver.HRegionServer(1478): Online Regions={1588230740=hbase:meta,,1.1588230740, 2d4a82ccce30e6b66fadfd02201a4e16=hbase:quota,,1689592533741.2d4a82ccce30e6b66fadfd02201a4e16.} 2023-07-17 11:15:35,252 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing d77a0022a9c08cb516405b45516f40b4, disabling compactions & flushes 2023-07-17 11:15:35,252 INFO [RS:1;jenkins-hbase4:44865] regionserver.HRegionServer(1474): Waiting on 2 regions to close 2023-07-17 11:15:35,253 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1689592533355.d77a0022a9c08cb516405b45516f40b4. 2023-07-17 11:15:35,253 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1689592533355.d77a0022a9c08cb516405b45516f40b4. 2023-07-17 11:15:35,253 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1689592533355.d77a0022a9c08cb516405b45516f40b4. after waiting 0 ms 2023-07-17 11:15:35,253 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1689592533355.d77a0022a9c08cb516405b45516f40b4. 
2023-07-17 11:15:35,253 DEBUG [RS:2;jenkins-hbase4:36481] regionserver.HRegionServer(1504): Waiting on 1588230740, 2d4a82ccce30e6b66fadfd02201a4e16 2023-07-17 11:15:35,253 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing d77a0022a9c08cb516405b45516f40b4 1/1 column families, dataSize=585 B heapSize=1.04 KB 2023-07-17 11:15:35,253 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-17 11:15:35,253 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-17 11:15:35,253 DEBUG [RS:1;jenkins-hbase4:44865] regionserver.HRegionServer(1478): Online Regions={d77a0022a9c08cb516405b45516f40b4=hbase:rsgroup,,1689592533355.d77a0022a9c08cb516405b45516f40b4., a10b405e356fcaddfcbc67928a39fb55=hbase:namespace,,1689592533358.a10b405e356fcaddfcbc67928a39fb55.} 2023-07-17 11:15:35,254 DEBUG [RS:1;jenkins-hbase4:44865] regionserver.HRegionServer(1504): Waiting on a10b405e356fcaddfcbc67928a39fb55, d77a0022a9c08cb516405b45516f40b4 2023-07-17 11:15:35,253 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-17 11:15:35,254 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-17 11:15:35,254 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-17 11:15:35,254 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=5.89 KB heapSize=11.09 KB 2023-07-17 11:15:35,258 DEBUG [RS:0;jenkins-hbase4:35593] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/6a8d1f47-f60c-3ded-892b-406ce2aab504/oldWALs 2023-07-17 11:15:35,258 INFO [RS:0;jenkins-hbase4:35593] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C35593%2C1689592532384:(num 1689592533065) 2023-07-17 11:15:35,258 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:35063/user/jenkins/test-data/6a8d1f47-f60c-3ded-892b-406ce2aab504/data/hbase/quota/2d4a82ccce30e6b66fadfd02201a4e16/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-17 11:15:35,258 DEBUG [RS:0;jenkins-hbase4:35593] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-17 11:15:35,258 INFO [RS:0;jenkins-hbase4:35593] regionserver.LeaseManager(133): Closed leases 2023-07-17 11:15:35,259 INFO [RS:0;jenkins-hbase4:35593] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-07-17 11:15:35,259 INFO [RS:0;jenkins-hbase4:35593] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-17 11:15:35,260 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:quota,,1689592533741.2d4a82ccce30e6b66fadfd02201a4e16. 2023-07-17 11:15:35,259 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 
2023-07-17 11:15:35,260 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 2d4a82ccce30e6b66fadfd02201a4e16: 2023-07-17 11:15:35,260 INFO [RS:0;jenkins-hbase4:35593] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-17 11:15:35,260 INFO [RS:0;jenkins-hbase4:35593] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-17 11:15:35,260 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:quota,,1689592533741.2d4a82ccce30e6b66fadfd02201a4e16. 2023-07-17 11:15:35,261 INFO [RS:0;jenkins-hbase4:35593] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:35593 2023-07-17 11:15:35,270 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=5.26 KB at sequenceid=31 (bloomFilter=false), to=hdfs://localhost:35063/user/jenkins/test-data/6a8d1f47-f60c-3ded-892b-406ce2aab504/data/hbase/meta/1588230740/.tmp/info/c6b8028d20f1492e8627d8514cb9c640 2023-07-17 11:15:35,270 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=585 B at sequenceid=7 (bloomFilter=true), to=hdfs://localhost:35063/user/jenkins/test-data/6a8d1f47-f60c-3ded-892b-406ce2aab504/data/hbase/rsgroup/d77a0022a9c08cb516405b45516f40b4/.tmp/m/e9b53ccc5a2e4abe9f517f7c5ecc9f17 2023-07-17 11:15:35,275 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for c6b8028d20f1492e8627d8514cb9c640 2023-07-17 11:15:35,277 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:35063/user/jenkins/test-data/6a8d1f47-f60c-3ded-892b-406ce2aab504/data/hbase/rsgroup/d77a0022a9c08cb516405b45516f40b4/.tmp/m/e9b53ccc5a2e4abe9f517f7c5ecc9f17 as hdfs://localhost:35063/user/jenkins/test-data/6a8d1f47-f60c-3ded-892b-406ce2aab504/data/hbase/rsgroup/d77a0022a9c08cb516405b45516f40b4/m/e9b53ccc5a2e4abe9f517f7c5ecc9f17 2023-07-17 11:15:35,283 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:35063/user/jenkins/test-data/6a8d1f47-f60c-3ded-892b-406ce2aab504/data/hbase/rsgroup/d77a0022a9c08cb516405b45516f40b4/m/e9b53ccc5a2e4abe9f517f7c5ecc9f17, entries=1, sequenceid=7, filesize=4.9 K 2023-07-17 11:15:35,285 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~585 B/585, heapSize ~1.02 KB/1048, currentSize=0 B/0 for d77a0022a9c08cb516405b45516f40b4 in 32ms, sequenceid=7, compaction requested=false 2023-07-17 11:15:35,285 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:rsgroup' 2023-07-17 11:15:35,289 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=90 B at sequenceid=31 (bloomFilter=false), to=hdfs://localhost:35063/user/jenkins/test-data/6a8d1f47-f60c-3ded-892b-406ce2aab504/data/hbase/meta/1588230740/.tmp/rep_barrier/ce58ca3b1f494daca6b7ee7835863f42 2023-07-17 11:15:35,291 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:35063/user/jenkins/test-data/6a8d1f47-f60c-3ded-892b-406ce2aab504/data/hbase/rsgroup/d77a0022a9c08cb516405b45516f40b4/recovered.edits/10.seqid, 
newMaxSeqId=10, maxSeqId=1 2023-07-17 11:15:35,292 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-17 11:15:35,292 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1689592533355.d77a0022a9c08cb516405b45516f40b4. 2023-07-17 11:15:35,293 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for d77a0022a9c08cb516405b45516f40b4: 2023-07-17 11:15:35,293 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:rsgroup,,1689592533355.d77a0022a9c08cb516405b45516f40b4. 2023-07-17 11:15:35,293 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing a10b405e356fcaddfcbc67928a39fb55, disabling compactions & flushes 2023-07-17 11:15:35,293 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1689592533358.a10b405e356fcaddfcbc67928a39fb55. 2023-07-17 11:15:35,293 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1689592533358.a10b405e356fcaddfcbc67928a39fb55. 2023-07-17 11:15:35,293 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1689592533358.a10b405e356fcaddfcbc67928a39fb55. after waiting 0 ms 2023-07-17 11:15:35,293 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1689592533358.a10b405e356fcaddfcbc67928a39fb55. 2023-07-17 11:15:35,293 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing a10b405e356fcaddfcbc67928a39fb55 1/1 column families, dataSize=215 B heapSize=776 B 2023-07-17 11:15:35,295 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for ce58ca3b1f494daca6b7ee7835863f42 2023-07-17 11:15:35,300 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-17 11:15:35,300 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-17 11:15:35,306 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=215 B at sequenceid=8 (bloomFilter=true), to=hdfs://localhost:35063/user/jenkins/test-data/6a8d1f47-f60c-3ded-892b-406ce2aab504/data/hbase/namespace/a10b405e356fcaddfcbc67928a39fb55/.tmp/info/52c6ca85af3f43eb9d89d0c13aab6593 2023-07-17 11:15:35,307 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=562 B at sequenceid=31 (bloomFilter=false), to=hdfs://localhost:35063/user/jenkins/test-data/6a8d1f47-f60c-3ded-892b-406ce2aab504/data/hbase/meta/1588230740/.tmp/table/32a5ac4435724d068801ff3c29ecc600 2023-07-17 11:15:35,313 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 32a5ac4435724d068801ff3c29ecc600 2023-07-17 11:15:35,313 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 52c6ca85af3f43eb9d89d0c13aab6593 2023-07-17 11:15:35,314 DEBUG 
[RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:35063/user/jenkins/test-data/6a8d1f47-f60c-3ded-892b-406ce2aab504/data/hbase/meta/1588230740/.tmp/info/c6b8028d20f1492e8627d8514cb9c640 as hdfs://localhost:35063/user/jenkins/test-data/6a8d1f47-f60c-3ded-892b-406ce2aab504/data/hbase/meta/1588230740/info/c6b8028d20f1492e8627d8514cb9c640 2023-07-17 11:15:35,314 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:35063/user/jenkins/test-data/6a8d1f47-f60c-3ded-892b-406ce2aab504/data/hbase/namespace/a10b405e356fcaddfcbc67928a39fb55/.tmp/info/52c6ca85af3f43eb9d89d0c13aab6593 as hdfs://localhost:35063/user/jenkins/test-data/6a8d1f47-f60c-3ded-892b-406ce2aab504/data/hbase/namespace/a10b405e356fcaddfcbc67928a39fb55/info/52c6ca85af3f43eb9d89d0c13aab6593 2023-07-17 11:15:35,318 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-17 11:15:35,322 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 52c6ca85af3f43eb9d89d0c13aab6593 2023-07-17 11:15:35,322 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:35063/user/jenkins/test-data/6a8d1f47-f60c-3ded-892b-406ce2aab504/data/hbase/namespace/a10b405e356fcaddfcbc67928a39fb55/info/52c6ca85af3f43eb9d89d0c13aab6593, entries=3, sequenceid=8, filesize=5.0 K 2023-07-17 11:15:35,322 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for c6b8028d20f1492e8627d8514cb9c640 2023-07-17 11:15:35,322 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:35063/user/jenkins/test-data/6a8d1f47-f60c-3ded-892b-406ce2aab504/data/hbase/meta/1588230740/info/c6b8028d20f1492e8627d8514cb9c640, entries=32, sequenceid=31, filesize=8.5 K 2023-07-17 11:15:35,323 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~215 B/215, heapSize ~760 B/760, currentSize=0 B/0 for a10b405e356fcaddfcbc67928a39fb55 in 30ms, sequenceid=8, compaction requested=false 2023-07-17 11:15:35,323 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:namespace' 2023-07-17 11:15:35,323 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:35063/user/jenkins/test-data/6a8d1f47-f60c-3ded-892b-406ce2aab504/data/hbase/meta/1588230740/.tmp/rep_barrier/ce58ca3b1f494daca6b7ee7835863f42 as hdfs://localhost:35063/user/jenkins/test-data/6a8d1f47-f60c-3ded-892b-406ce2aab504/data/hbase/meta/1588230740/rep_barrier/ce58ca3b1f494daca6b7ee7835863f42 2023-07-17 11:15:35,328 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:35063/user/jenkins/test-data/6a8d1f47-f60c-3ded-892b-406ce2aab504/data/hbase/namespace/a10b405e356fcaddfcbc67928a39fb55/recovered.edits/11.seqid, newMaxSeqId=11, maxSeqId=1 2023-07-17 11:15:35,330 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:namespace,,1689592533358.a10b405e356fcaddfcbc67928a39fb55. 
2023-07-17 11:15:35,330 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for a10b405e356fcaddfcbc67928a39fb55: 2023-07-17 11:15:35,330 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:namespace,,1689592533358.a10b405e356fcaddfcbc67928a39fb55. 2023-07-17 11:15:35,331 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for ce58ca3b1f494daca6b7ee7835863f42 2023-07-17 11:15:35,331 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:35063/user/jenkins/test-data/6a8d1f47-f60c-3ded-892b-406ce2aab504/data/hbase/meta/1588230740/rep_barrier/ce58ca3b1f494daca6b7ee7835863f42, entries=1, sequenceid=31, filesize=4.9 K 2023-07-17 11:15:35,332 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:35063/user/jenkins/test-data/6a8d1f47-f60c-3ded-892b-406ce2aab504/data/hbase/meta/1588230740/.tmp/table/32a5ac4435724d068801ff3c29ecc600 as hdfs://localhost:35063/user/jenkins/test-data/6a8d1f47-f60c-3ded-892b-406ce2aab504/data/hbase/meta/1588230740/table/32a5ac4435724d068801ff3c29ecc600 2023-07-17 11:15:35,338 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 32a5ac4435724d068801ff3c29ecc600 2023-07-17 11:15:35,338 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:35063/user/jenkins/test-data/6a8d1f47-f60c-3ded-892b-406ce2aab504/data/hbase/meta/1588230740/table/32a5ac4435724d068801ff3c29ecc600, entries=8, sequenceid=31, filesize=5.2 K 2023-07-17 11:15:35,339 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~5.89 KB/6036, heapSize ~11.05 KB/11312, currentSize=0 B/0 for 1588230740 in 85ms, sequenceid=31, compaction requested=false 2023-07-17 11:15:35,339 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:meta' 2023-07-17 11:15:35,346 DEBUG [Listener at localhost/40211-EventThread] zookeeper.ZKWatcher(600): regionserver:35593-0x10172fe901c0001, quorum=127.0.0.1:60132, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,35593,1689592532384 2023-07-17 11:15:35,346 DEBUG [Listener at localhost/40211-EventThread] zookeeper.ZKWatcher(600): regionserver:44865-0x10172fe901c0002, quorum=127.0.0.1:60132, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,35593,1689592532384 2023-07-17 11:15:35,346 DEBUG [Listener at localhost/40211-EventThread] zookeeper.ZKWatcher(600): master:39741-0x10172fe901c0000, quorum=127.0.0.1:60132, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-17 11:15:35,346 DEBUG [Listener at localhost/40211-EventThread] zookeeper.ZKWatcher(600): regionserver:44865-0x10172fe901c0002, quorum=127.0.0.1:60132, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-17 11:15:35,346 DEBUG [Listener at localhost/40211-EventThread] zookeeper.ZKWatcher(600): regionserver:35593-0x10172fe901c0001, quorum=127.0.0.1:60132, baseZNode=/hbase 
Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-17 11:15:35,346 DEBUG [Listener at localhost/40211-EventThread] zookeeper.ZKWatcher(600): regionserver:36481-0x10172fe901c0003, quorum=127.0.0.1:60132, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,35593,1689592532384 2023-07-17 11:15:35,346 DEBUG [Listener at localhost/40211-EventThread] zookeeper.ZKWatcher(600): regionserver:36481-0x10172fe901c0003, quorum=127.0.0.1:60132, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-17 11:15:35,350 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,35593,1689592532384] 2023-07-17 11:15:35,350 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,35593,1689592532384; numProcessing=1 2023-07-17 11:15:35,353 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,35593,1689592532384 already deleted, retry=false 2023-07-17 11:15:35,353 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,35593,1689592532384 expired; onlineServers=2 2023-07-17 11:15:35,356 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:35063/user/jenkins/test-data/6a8d1f47-f60c-3ded-892b-406ce2aab504/data/hbase/meta/1588230740/recovered.edits/34.seqid, newMaxSeqId=34, maxSeqId=1 2023-07-17 11:15:35,357 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-17 11:15:35,357 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-17 11:15:35,357 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-17 11:15:35,357 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:meta,,1.1588230740 2023-07-17 11:15:35,453 INFO [RS:2;jenkins-hbase4:36481] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,36481,1689592532488; all regions closed. 2023-07-17 11:15:35,453 DEBUG [RS:2;jenkins-hbase4:36481] quotas.QuotaCache(100): Stopping QuotaRefresherChore chore. 2023-07-17 11:15:35,454 INFO [RS:1;jenkins-hbase4:44865] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,44865,1689592532436; all regions closed. 2023-07-17 11:15:35,454 DEBUG [RS:1;jenkins-hbase4:44865] quotas.QuotaCache(100): Stopping QuotaRefresherChore chore. 
2023-07-17 11:15:35,461 WARN [Close-WAL-Writer-0] asyncfs.FanOutOneBlockAsyncDFSOutputHelper(641): complete file /user/jenkins/test-data/6a8d1f47-f60c-3ded-892b-406ce2aab504/WALs/jenkins-hbase4.apache.org,36481,1689592532488/jenkins-hbase4.apache.org%2C36481%2C1689592532488.meta.1689592533296.meta not finished, retry = 0 2023-07-17 11:15:35,463 DEBUG [RS:1;jenkins-hbase4:44865] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/6a8d1f47-f60c-3ded-892b-406ce2aab504/oldWALs 2023-07-17 11:15:35,463 INFO [RS:1;jenkins-hbase4:44865] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C44865%2C1689592532436:(num 1689592533066) 2023-07-17 11:15:35,464 DEBUG [RS:1;jenkins-hbase4:44865] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-17 11:15:35,464 INFO [RS:1;jenkins-hbase4:44865] regionserver.LeaseManager(133): Closed leases 2023-07-17 11:15:35,464 INFO [RS:1;jenkins-hbase4:44865] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-07-17 11:15:35,464 INFO [RS:1;jenkins-hbase4:44865] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-17 11:15:35,464 INFO [RS:1;jenkins-hbase4:44865] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-17 11:15:35,464 INFO [RS:1;jenkins-hbase4:44865] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-17 11:15:35,464 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-17 11:15:35,465 INFO [RS:1;jenkins-hbase4:44865] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:44865 2023-07-17 11:15:35,469 DEBUG [Listener at localhost/40211-EventThread] zookeeper.ZKWatcher(600): regionserver:44865-0x10172fe901c0002, quorum=127.0.0.1:60132, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,44865,1689592532436 2023-07-17 11:15:35,469 DEBUG [Listener at localhost/40211-EventThread] zookeeper.ZKWatcher(600): regionserver:36481-0x10172fe901c0003, quorum=127.0.0.1:60132, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,44865,1689592532436 2023-07-17 11:15:35,469 DEBUG [Listener at localhost/40211-EventThread] zookeeper.ZKWatcher(600): master:39741-0x10172fe901c0000, quorum=127.0.0.1:60132, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-17 11:15:35,471 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,44865,1689592532436] 2023-07-17 11:15:35,471 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,44865,1689592532436; numProcessing=2 2023-07-17 11:15:35,472 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,44865,1689592532436 already deleted, retry=false 2023-07-17 11:15:35,472 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,44865,1689592532436 expired; onlineServers=1 2023-07-17 11:15:35,563 DEBUG [RS:2;jenkins-hbase4:36481] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to 
/user/jenkins/test-data/6a8d1f47-f60c-3ded-892b-406ce2aab504/oldWALs 2023-07-17 11:15:35,563 INFO [RS:2;jenkins-hbase4:36481] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C36481%2C1689592532488.meta:.meta(num 1689592533296) 2023-07-17 11:15:35,568 DEBUG [RS:2;jenkins-hbase4:36481] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/6a8d1f47-f60c-3ded-892b-406ce2aab504/oldWALs 2023-07-17 11:15:35,568 INFO [RS:2;jenkins-hbase4:36481] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C36481%2C1689592532488:(num 1689592533047) 2023-07-17 11:15:35,568 DEBUG [RS:2;jenkins-hbase4:36481] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-17 11:15:35,568 INFO [RS:2;jenkins-hbase4:36481] regionserver.LeaseManager(133): Closed leases 2023-07-17 11:15:35,569 INFO [RS:2;jenkins-hbase4:36481] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-07-17 11:15:35,569 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-17 11:15:35,570 INFO [RS:2;jenkins-hbase4:36481] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:36481 2023-07-17 11:15:35,574 DEBUG [Listener at localhost/40211-EventThread] zookeeper.ZKWatcher(600): regionserver:36481-0x10172fe901c0003, quorum=127.0.0.1:60132, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,36481,1689592532488 2023-07-17 11:15:35,574 DEBUG [Listener at localhost/40211-EventThread] zookeeper.ZKWatcher(600): master:39741-0x10172fe901c0000, quorum=127.0.0.1:60132, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-17 11:15:35,574 ERROR [Listener at localhost/40211-EventThread] zookeeper.ClientCnxn$EventThread(537): Error while calling watcher java.util.concurrent.RejectedExecutionException: Task java.util.concurrent.FutureTask@33275f4b rejected from java.util.concurrent.ThreadPoolExecutor@3718ddf7[Terminated, pool size = 0, active threads = 0, queued tasks = 0, completed tasks = 7] at java.util.concurrent.ThreadPoolExecutor$AbortPolicy.rejectedExecution(ThreadPoolExecutor.java:2063) at java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:830) at java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1379) at java.util.concurrent.AbstractExecutorService.submit(AbstractExecutorService.java:112) at java.util.concurrent.Executors$DelegatedExecutorService.submit(Executors.java:678) at org.apache.hadoop.hbase.zookeeper.ZKWatcher.process(ZKWatcher.java:603) at org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:535) at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:510) 2023-07-17 11:15:35,575 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,36481,1689592532488] 2023-07-17 11:15:35,575 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,36481,1689592532488; numProcessing=3 2023-07-17 11:15:35,576 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,36481,1689592532488 already deleted, retry=false 2023-07-17 11:15:35,576 INFO [RegionServerTracker-0] 
master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,36481,1689592532488 expired; onlineServers=0 2023-07-17 11:15:35,576 INFO [RegionServerTracker-0] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,39741,1689592532293' ***** 2023-07-17 11:15:35,576 INFO [RegionServerTracker-0] regionserver.HRegionServer(2311): STOPPED: Cluster shutdown set; onlineServer=0 2023-07-17 11:15:35,577 DEBUG [M:0;jenkins-hbase4:39741] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@69640896, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-17 11:15:35,577 INFO [M:0;jenkins-hbase4:39741] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-17 11:15:35,578 DEBUG [Listener at localhost/40211-EventThread] zookeeper.ZKWatcher(600): master:39741-0x10172fe901c0000, quorum=127.0.0.1:60132, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/master 2023-07-17 11:15:35,578 DEBUG [Listener at localhost/40211-EventThread] zookeeper.ZKWatcher(600): master:39741-0x10172fe901c0000, quorum=127.0.0.1:60132, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-17 11:15:35,578 INFO [M:0;jenkins-hbase4:39741] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@67a95324{master,/,null,STOPPED}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/master} 2023-07-17 11:15:35,579 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:39741-0x10172fe901c0000, quorum=127.0.0.1:60132, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-17 11:15:35,579 INFO [M:0;jenkins-hbase4:39741] server.AbstractConnector(383): Stopped ServerConnector@31bc4bc7{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-17 11:15:35,579 INFO [M:0;jenkins-hbase4:39741] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-17 11:15:35,579 INFO [M:0;jenkins-hbase4:39741] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@68104050{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,STOPPED} 2023-07-17 11:15:35,579 INFO [M:0;jenkins-hbase4:39741] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@681a326f{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9ec773e1-25e3-37c1-332c-cd1fc3805ad2/hadoop.log.dir/,STOPPED} 2023-07-17 11:15:35,580 INFO [M:0;jenkins-hbase4:39741] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,39741,1689592532293 2023-07-17 11:15:35,580 INFO [M:0;jenkins-hbase4:39741] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,39741,1689592532293; all regions closed. 
2023-07-17 11:15:35,580 DEBUG [M:0;jenkins-hbase4:39741] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-17 11:15:35,580 INFO [M:0;jenkins-hbase4:39741] master.HMaster(1491): Stopping master jetty server 2023-07-17 11:15:35,581 INFO [M:0;jenkins-hbase4:39741] server.AbstractConnector(383): Stopped ServerConnector@fb521b{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-17 11:15:35,581 DEBUG [M:0;jenkins-hbase4:39741] cleaner.LogCleaner(198): Cancelling LogCleaner 2023-07-17 11:15:35,581 WARN [OldWALsCleaner-0] cleaner.LogCleaner(186): Interrupted while cleaning old WALs, will try to clean it next round. Exiting. 2023-07-17 11:15:35,581 DEBUG [M:0;jenkins-hbase4:39741] cleaner.HFileCleaner(317): Stopping file delete threads 2023-07-17 11:15:35,581 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1689592532821] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1689592532821,5,FailOnTimeoutGroup] 2023-07-17 11:15:35,582 INFO [M:0;jenkins-hbase4:39741] master.MasterMobCompactionThread(168): Waiting for Mob Compaction Thread to finish... 2023-07-17 11:15:35,581 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1689592532825] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1689592532825,5,FailOnTimeoutGroup] 2023-07-17 11:15:35,582 INFO [M:0;jenkins-hbase4:39741] master.MasterMobCompactionThread(168): Waiting for Region Server Mob Compaction Thread to finish... 2023-07-17 11:15:35,583 INFO [M:0;jenkins-hbase4:39741] hbase.ChoreService(369): Chore service for: master/jenkins-hbase4:0 had [ScheduledChore name=QuotaObserverChore, period=60000, unit=MILLISECONDS] on shutdown 2023-07-17 11:15:35,583 DEBUG [M:0;jenkins-hbase4:39741] master.HMaster(1512): Stopping service threads 2023-07-17 11:15:35,583 INFO [M:0;jenkins-hbase4:39741] procedure2.RemoteProcedureDispatcher(119): Stopping procedure remote dispatcher 2023-07-17 11:15:35,583 ERROR [M:0;jenkins-hbase4:39741] procedure2.ProcedureExecutor(653): ThreadGroup java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] contains running threads; null: See STDOUT java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] 2023-07-17 11:15:35,584 INFO [M:0;jenkins-hbase4:39741] region.RegionProcedureStore(113): Stopping the Region Procedure Store, isAbort=false 2023-07-17 11:15:35,584 DEBUG [normalizer-worker-0] normalizer.RegionNormalizerWorker(174): interrupt detected. terminating. 
2023-07-17 11:15:35,584 DEBUG [M:0;jenkins-hbase4:39741] zookeeper.ZKUtil(398): master:39741-0x10172fe901c0000, quorum=127.0.0.1:60132, baseZNode=/hbase Unable to get data of znode /hbase/master because node does not exist (not an error) 2023-07-17 11:15:35,584 WARN [M:0;jenkins-hbase4:39741] master.ActiveMasterManager(326): Failed get of master address: java.io.IOException: Can't get master address from ZooKeeper; znode data == null 2023-07-17 11:15:35,584 INFO [M:0;jenkins-hbase4:39741] assignment.AssignmentManager(315): Stopping assignment manager 2023-07-17 11:15:35,584 INFO [M:0;jenkins-hbase4:39741] region.MasterRegion(167): Closing local region {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, isAbort=false 2023-07-17 11:15:35,585 DEBUG [M:0;jenkins-hbase4:39741] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-07-17 11:15:35,585 INFO [M:0;jenkins-hbase4:39741] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-17 11:15:35,585 DEBUG [M:0;jenkins-hbase4:39741] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-17 11:15:35,585 DEBUG [M:0;jenkins-hbase4:39741] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-07-17 11:15:35,585 DEBUG [M:0;jenkins-hbase4:39741] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-17 11:15:35,585 INFO [M:0;jenkins-hbase4:39741] regionserver.HRegion(2745): Flushing 1595e783b53d99cd5eef43b6debb2682 1/1 column families, dataSize=93.00 KB heapSize=109.16 KB 2023-07-17 11:15:35,598 INFO [M:0;jenkins-hbase4:39741] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=93.00 KB at sequenceid=194 (bloomFilter=true), to=hdfs://localhost:35063/user/jenkins/test-data/6a8d1f47-f60c-3ded-892b-406ce2aab504/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/60a63cf6fef543ee9d72208676760e2a 2023-07-17 11:15:35,603 DEBUG [M:0;jenkins-hbase4:39741] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:35063/user/jenkins/test-data/6a8d1f47-f60c-3ded-892b-406ce2aab504/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/60a63cf6fef543ee9d72208676760e2a as hdfs://localhost:35063/user/jenkins/test-data/6a8d1f47-f60c-3ded-892b-406ce2aab504/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/60a63cf6fef543ee9d72208676760e2a 2023-07-17 11:15:35,608 INFO [M:0;jenkins-hbase4:39741] regionserver.HStore(1080): Added hdfs://localhost:35063/user/jenkins/test-data/6a8d1f47-f60c-3ded-892b-406ce2aab504/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/60a63cf6fef543ee9d72208676760e2a, entries=24, sequenceid=194, filesize=12.4 K 2023-07-17 11:15:35,609 INFO [M:0;jenkins-hbase4:39741] regionserver.HRegion(2948): Finished flush of dataSize ~93.00 KB/95228, heapSize ~109.14 KB/111760, currentSize=0 B/0 for 1595e783b53d99cd5eef43b6debb2682 in 24ms, sequenceid=194, compaction requested=false 2023-07-17 11:15:35,611 INFO [M:0;jenkins-hbase4:39741] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
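The flush recorded above follows the usual write-then-commit pattern: the memstore is written to a file under the region's .tmp directory and only afterwards moved into the visible store directory ("Committing ... as ..."). Below is a generic HDFS sketch of that pattern; the paths and file name are illustrative, not the ones from the log, and HBase's HRegionFileSystem does considerably more than this, so treat it only as an outline of the idea.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    // Generic write-to-.tmp-then-rename sketch; paths are illustrative, not the
    // store/flush paths from the log above.
    public class TmpThenCommit {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);

        Path tmp = new Path("/user/example/store/.tmp/flush-000001");
        Path committed = new Path("/user/example/store/flush-000001");

        // 1. Write the new file somewhere readers never look.
        try (FSDataOutputStream out = fs.create(tmp)) {
          out.writeBytes("flushed cells would go here");
        }
        // 2. Move it into the visible store directory in a single rename.
        if (!fs.rename(tmp, committed)) {
          throw new java.io.IOException("commit rename failed for " + tmp);
        }
      }
    }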
2023-07-17 11:15:35,611 DEBUG [M:0;jenkins-hbase4:39741] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-17 11:15:35,614 INFO [M:0;jenkins-hbase4:39741] flush.MasterFlushTableProcedureManager(83): stop: server shutting down. 2023-07-17 11:15:35,614 INFO [master:store-WAL-Roller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-17 11:15:35,615 INFO [M:0;jenkins-hbase4:39741] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:39741 2023-07-17 11:15:35,620 DEBUG [Listener at localhost/40211-EventThread] zookeeper.ZKWatcher(600): regionserver:44865-0x10172fe901c0002, quorum=127.0.0.1:60132, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-17 11:15:35,620 DEBUG [Listener at localhost/40211-EventThread] zookeeper.ZKWatcher(600): regionserver:44865-0x10172fe901c0002, quorum=127.0.0.1:60132, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-17 11:15:35,620 INFO [RS:1;jenkins-hbase4:44865] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,44865,1689592532436; zookeeper connection closed. 2023-07-17 11:15:35,621 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@581527cd] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@581527cd 2023-07-17 11:15:35,621 DEBUG [M:0;jenkins-hbase4:39741] zookeeper.RecoverableZooKeeper(172): Node /hbase/rs/jenkins-hbase4.apache.org,39741,1689592532293 already deleted, retry=false 2023-07-17 11:15:35,720 DEBUG [Listener at localhost/40211-EventThread] zookeeper.ZKWatcher(600): regionserver:35593-0x10172fe901c0001, quorum=127.0.0.1:60132, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-17 11:15:35,720 INFO [RS:0;jenkins-hbase4:35593] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,35593,1689592532384; zookeeper connection closed. 2023-07-17 11:15:35,720 DEBUG [Listener at localhost/40211-EventThread] zookeeper.ZKWatcher(600): regionserver:35593-0x10172fe901c0001, quorum=127.0.0.1:60132, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-17 11:15:35,722 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@75b42618] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@75b42618 2023-07-17 11:15:35,921 DEBUG [Listener at localhost/40211-EventThread] zookeeper.ZKWatcher(600): master:39741-0x10172fe901c0000, quorum=127.0.0.1:60132, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-17 11:15:35,921 DEBUG [Listener at localhost/40211-EventThread] zookeeper.ZKWatcher(600): master:39741-0x10172fe901c0000, quorum=127.0.0.1:60132, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-17 11:15:35,921 INFO [M:0;jenkins-hbase4:39741] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,39741,1689592532293; zookeeper connection closed. 2023-07-17 11:15:36,021 DEBUG [Listener at localhost/40211-EventThread] zookeeper.ZKWatcher(600): regionserver:36481-0x10172fe901c0003, quorum=127.0.0.1:60132, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-17 11:15:36,021 INFO [RS:2;jenkins-hbase4:36481] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,36481,1689592532488; zookeeper connection closed. 
2023-07-17 11:15:36,021 DEBUG [Listener at localhost/40211-EventThread] zookeeper.ZKWatcher(600): regionserver:36481-0x10172fe901c0003, quorum=127.0.0.1:60132, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-17 11:15:36,021 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@74485b1d] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@74485b1d 2023-07-17 11:15:36,022 INFO [Listener at localhost/40211] util.JVMClusterUtil(335): Shutdown of 1 master(s) and 3 regionserver(s) complete 2023-07-17 11:15:36,022 WARN [Listener at localhost/40211] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-17 11:15:36,031 INFO [Listener at localhost/40211] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-17 11:15:36,139 WARN [BP-1169258222-172.31.14.131-1689592531384 heartbeating to localhost/127.0.0.1:35063] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-17 11:15:36,139 WARN [BP-1169258222-172.31.14.131-1689592531384 heartbeating to localhost/127.0.0.1:35063] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1169258222-172.31.14.131-1689592531384 (Datanode Uuid 47173409-90b9-4afb-99c9-4b078d9f2377) service to localhost/127.0.0.1:35063 2023-07-17 11:15:36,141 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9ec773e1-25e3-37c1-332c-cd1fc3805ad2/cluster_b50245d1-747f-84ca-9fff-4598d59fad4e/dfs/data/data6/current/BP-1169258222-172.31.14.131-1689592531384] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-17 11:15:36,142 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9ec773e1-25e3-37c1-332c-cd1fc3805ad2/cluster_b50245d1-747f-84ca-9fff-4598d59fad4e/dfs/data/data5/current/BP-1169258222-172.31.14.131-1689592531384] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-17 11:15:36,144 WARN [Listener at localhost/40211] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-17 11:15:36,181 INFO [Listener at localhost/40211] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-17 11:15:36,287 WARN [BP-1169258222-172.31.14.131-1689592531384 heartbeating to localhost/127.0.0.1:35063] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-17 11:15:36,288 WARN [BP-1169258222-172.31.14.131-1689592531384 heartbeating to localhost/127.0.0.1:35063] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1169258222-172.31.14.131-1689592531384 (Datanode Uuid 0d4290b9-ba5a-4798-beb0-42d71536f5b6) service to localhost/127.0.0.1:35063 2023-07-17 11:15:36,289 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9ec773e1-25e3-37c1-332c-cd1fc3805ad2/cluster_b50245d1-747f-84ca-9fff-4598d59fad4e/dfs/data/data3/current/BP-1169258222-172.31.14.131-1689592531384] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-17 11:15:36,290 WARN 
[refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9ec773e1-25e3-37c1-332c-cd1fc3805ad2/cluster_b50245d1-747f-84ca-9fff-4598d59fad4e/dfs/data/data4/current/BP-1169258222-172.31.14.131-1689592531384] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-17 11:15:36,292 WARN [Listener at localhost/40211] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-17 11:15:36,297 INFO [Listener at localhost/40211] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-17 11:15:36,401 WARN [BP-1169258222-172.31.14.131-1689592531384 heartbeating to localhost/127.0.0.1:35063] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-17 11:15:36,401 WARN [BP-1169258222-172.31.14.131-1689592531384 heartbeating to localhost/127.0.0.1:35063] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1169258222-172.31.14.131-1689592531384 (Datanode Uuid b3edc182-dd96-4c03-9a1d-664ad1190729) service to localhost/127.0.0.1:35063 2023-07-17 11:15:36,402 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9ec773e1-25e3-37c1-332c-cd1fc3805ad2/cluster_b50245d1-747f-84ca-9fff-4598d59fad4e/dfs/data/data1/current/BP-1169258222-172.31.14.131-1689592531384] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-17 11:15:36,402 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9ec773e1-25e3-37c1-332c-cd1fc3805ad2/cluster_b50245d1-747f-84ca-9fff-4598d59fad4e/dfs/data/data2/current/BP-1169258222-172.31.14.131-1689592531384] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-17 11:15:36,416 INFO [Listener at localhost/40211] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-17 11:15:36,436 INFO [Listener at localhost/40211] zookeeper.MiniZooKeeperCluster(344): Shutdown MiniZK cluster with all ZK servers 2023-07-17 11:15:36,470 INFO [Listener at localhost/40211] hbase.HBaseTestingUtility(1293): Minicluster is down 2023-07-17 11:15:36,470 INFO [Listener at localhost/40211] hbase.HBaseTestingUtility(1068): Starting up minicluster with option: StartMiniClusterOption{numMasters=1, masterClass=null, numRegionServers=3, rsPorts=, rsClass=null, numDataNodes=3, dataNodeHosts=null, numZkServers=1, createRootDir=false, createWALDir=false} 2023-07-17 11:15:36,470 INFO [Listener at localhost/40211] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.log.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9ec773e1-25e3-37c1-332c-cd1fc3805ad2/hadoop.log.dir so I do NOT create it in target/test-data/e4f4a903-c5b5-3e76-c76e-62a71f0612cc 2023-07-17 11:15:36,470 INFO [Listener at localhost/40211] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.tmp.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9ec773e1-25e3-37c1-332c-cd1fc3805ad2/hadoop.tmp.dir so I do NOT create it in target/test-data/e4f4a903-c5b5-3e76-c76e-62a71f0612cc 2023-07-17 11:15:36,470 INFO [Listener at localhost/40211] hbase.HBaseZKTestingUtility(82): Created new 
mini-cluster data directory: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/e4f4a903-c5b5-3e76-c76e-62a71f0612cc/cluster_7f27a7fe-51d2-c427-02ab-5b6cae838d2a, deleteOnExit=true 2023-07-17 11:15:36,471 INFO [Listener at localhost/40211] hbase.HBaseTestingUtility(1082): STARTING DFS 2023-07-17 11:15:36,471 INFO [Listener at localhost/40211] hbase.HBaseTestingUtility(772): Setting test.cache.data to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/e4f4a903-c5b5-3e76-c76e-62a71f0612cc/test.cache.data in system properties and HBase conf 2023-07-17 11:15:36,471 INFO [Listener at localhost/40211] hbase.HBaseTestingUtility(772): Setting hadoop.tmp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/e4f4a903-c5b5-3e76-c76e-62a71f0612cc/hadoop.tmp.dir in system properties and HBase conf 2023-07-17 11:15:36,471 INFO [Listener at localhost/40211] hbase.HBaseTestingUtility(772): Setting hadoop.log.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/e4f4a903-c5b5-3e76-c76e-62a71f0612cc/hadoop.log.dir in system properties and HBase conf 2023-07-17 11:15:36,471 INFO [Listener at localhost/40211] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.local.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/e4f4a903-c5b5-3e76-c76e-62a71f0612cc/mapreduce.cluster.local.dir in system properties and HBase conf 2023-07-17 11:15:36,472 INFO [Listener at localhost/40211] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.temp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/e4f4a903-c5b5-3e76-c76e-62a71f0612cc/mapreduce.cluster.temp.dir in system properties and HBase conf 2023-07-17 11:15:36,472 INFO [Listener at localhost/40211] hbase.HBaseTestingUtility(759): read short circuit is OFF 2023-07-17 11:15:36,472 DEBUG [Listener at localhost/40211] fs.HFileSystem(308): The file system is not a DistributedFileSystem. 
Skipping on block location reordering 2023-07-17 11:15:36,472 INFO [Listener at localhost/40211] hbase.HBaseTestingUtility(772): Setting yarn.node-labels.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/e4f4a903-c5b5-3e76-c76e-62a71f0612cc/yarn.node-labels.fs-store.root-dir in system properties and HBase conf 2023-07-17 11:15:36,472 INFO [Listener at localhost/40211] hbase.HBaseTestingUtility(772): Setting yarn.node-attribute.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/e4f4a903-c5b5-3e76-c76e-62a71f0612cc/yarn.node-attribute.fs-store.root-dir in system properties and HBase conf 2023-07-17 11:15:36,472 INFO [Listener at localhost/40211] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.log-dirs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/e4f4a903-c5b5-3e76-c76e-62a71f0612cc/yarn.nodemanager.log-dirs in system properties and HBase conf 2023-07-17 11:15:36,473 INFO [Listener at localhost/40211] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/e4f4a903-c5b5-3e76-c76e-62a71f0612cc/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-07-17 11:15:36,473 INFO [Listener at localhost/40211] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.active-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/e4f4a903-c5b5-3e76-c76e-62a71f0612cc/yarn.timeline-service.entity-group-fs-store.active-dir in system properties and HBase conf 2023-07-17 11:15:36,473 INFO [Listener at localhost/40211] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.done-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/e4f4a903-c5b5-3e76-c76e-62a71f0612cc/yarn.timeline-service.entity-group-fs-store.done-dir in system properties and HBase conf 2023-07-17 11:15:36,473 INFO [Listener at localhost/40211] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/e4f4a903-c5b5-3e76-c76e-62a71f0612cc/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-07-17 11:15:36,473 INFO [Listener at localhost/40211] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/e4f4a903-c5b5-3e76-c76e-62a71f0612cc/dfs.journalnode.edits.dir in system properties and HBase conf 2023-07-17 11:15:36,473 INFO [Listener at localhost/40211] hbase.HBaseTestingUtility(772): Setting dfs.datanode.shared.file.descriptor.paths to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/e4f4a903-c5b5-3e76-c76e-62a71f0612cc/dfs.datanode.shared.file.descriptor.paths in system properties and HBase conf 2023-07-17 11:15:36,474 INFO [Listener at localhost/40211] hbase.HBaseTestingUtility(772): Setting nfs.dump.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/e4f4a903-c5b5-3e76-c76e-62a71f0612cc/nfs.dump.dir in system properties and HBase conf 2023-07-17 11:15:36,474 INFO [Listener at localhost/40211] 
hbase.HBaseTestingUtility(772): Setting java.io.tmpdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/e4f4a903-c5b5-3e76-c76e-62a71f0612cc/java.io.tmpdir in system properties and HBase conf 2023-07-17 11:15:36,474 INFO [Listener at localhost/40211] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/e4f4a903-c5b5-3e76-c76e-62a71f0612cc/dfs.journalnode.edits.dir in system properties and HBase conf 2023-07-17 11:15:36,474 INFO [Listener at localhost/40211] hbase.HBaseTestingUtility(772): Setting dfs.provided.aliasmap.inmemory.leveldb.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/e4f4a903-c5b5-3e76-c76e-62a71f0612cc/dfs.provided.aliasmap.inmemory.leveldb.dir in system properties and HBase conf 2023-07-17 11:15:36,474 INFO [Listener at localhost/40211] hbase.HBaseTestingUtility(772): Setting fs.s3a.committer.staging.tmp.path to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/e4f4a903-c5b5-3e76-c76e-62a71f0612cc/fs.s3a.committer.staging.tmp.path in system properties and HBase conf Formatting using clusterid: testClusterID 2023-07-17 11:15:36,478 WARN [Listener at localhost/40211] conf.Configuration(1701): No unit for dfs.heartbeat.interval(3) assuming SECONDS 2023-07-17 11:15:36,478 WARN [Listener at localhost/40211] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-07-17 11:15:36,534 DEBUG [Listener at localhost/40211-EventThread] zookeeper.ZKWatcher(600): VerifyingRSGroupAdminClient-0x10172fe901c000a, quorum=127.0.0.1:60132, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Disconnected, path=null 2023-07-17 11:15:36,534 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(630): VerifyingRSGroupAdminClient-0x10172fe901c000a, quorum=127.0.0.1:60132, baseZNode=/hbase Received Disconnected from ZooKeeper, ignoring 2023-07-17 11:15:36,534 WARN [Listener at localhost/40211] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-17 11:15:36,537 INFO [Listener at localhost/40211] log.Slf4jLog(67): jetty-6.1.26 2023-07-17 11:15:36,553 INFO [Listener at localhost/40211] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/hdfs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/e4f4a903-c5b5-3e76-c76e-62a71f0612cc/java.io.tmpdir/Jetty_localhost_45433_hdfs____.l0bkrn/webapp 2023-07-17 11:15:36,657 INFO [Listener at localhost/40211] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:45433 2023-07-17 11:15:36,662 WARN [Listener at localhost/40211] conf.Configuration(1701): No unit for dfs.heartbeat.interval(3) assuming SECONDS 2023-07-17 11:15:36,662 WARN [Listener at localhost/40211] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-07-17 11:15:36,711 WARN [Listener at localhost/35473] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-17 11:15:36,732 WARN [Listener at localhost/35473] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-17 11:15:36,735 WARN [Listener 
at localhost/35473] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-17 11:15:36,736 INFO [Listener at localhost/35473] log.Slf4jLog(67): jetty-6.1.26 2023-07-17 11:15:36,743 INFO [Listener at localhost/35473] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/e4f4a903-c5b5-3e76-c76e-62a71f0612cc/java.io.tmpdir/Jetty_localhost_35987_datanode____11n9kt/webapp 2023-07-17 11:15:36,836 INFO [Listener at localhost/35473] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:35987 2023-07-17 11:15:36,844 WARN [Listener at localhost/40361] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-17 11:15:36,868 WARN [Listener at localhost/40361] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-17 11:15:36,870 WARN [Listener at localhost/40361] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-17 11:15:36,871 INFO [Listener at localhost/40361] log.Slf4jLog(67): jetty-6.1.26 2023-07-17 11:15:36,875 INFO [Listener at localhost/40361] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/e4f4a903-c5b5-3e76-c76e-62a71f0612cc/java.io.tmpdir/Jetty_localhost_43129_datanode____ndnyrk/webapp 2023-07-17 11:15:36,975 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x2c263874a2d627cf: Processing first storage report for DS-35b57e0d-606a-455a-9013-50414e4940ce from datanode 659054a7-3fdf-4d87-9a5c-31929022026e 2023-07-17 11:15:36,975 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x2c263874a2d627cf: from storage DS-35b57e0d-606a-455a-9013-50414e4940ce node DatanodeRegistration(127.0.0.1:34879, datanodeUuid=659054a7-3fdf-4d87-9a5c-31929022026e, infoPort=40013, infoSecurePort=0, ipcPort=40361, storageInfo=lv=-57;cid=testClusterID;nsid=550068019;c=1689592536481), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-17 11:15:36,975 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x2c263874a2d627cf: Processing first storage report for DS-a1388865-9a8a-4ba9-802d-5779868ee90f from datanode 659054a7-3fdf-4d87-9a5c-31929022026e 2023-07-17 11:15:36,975 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x2c263874a2d627cf: from storage DS-a1388865-9a8a-4ba9-802d-5779868ee90f node DatanodeRegistration(127.0.0.1:34879, datanodeUuid=659054a7-3fdf-4d87-9a5c-31929022026e, infoPort=40013, infoSecurePort=0, ipcPort=40361, storageInfo=lv=-57;cid=testClusterID;nsid=550068019;c=1689592536481), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-17 11:15:37,003 INFO [Listener at localhost/40361] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:43129 2023-07-17 11:15:37,015 WARN [Listener at localhost/33427] common.MetricsLoggerTask(153): Metrics logging will not be async 
since the logger is not log4j 2023-07-17 11:15:37,040 WARN [Listener at localhost/33427] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-17 11:15:37,050 WARN [Listener at localhost/33427] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-17 11:15:37,052 INFO [Listener at localhost/33427] log.Slf4jLog(67): jetty-6.1.26 2023-07-17 11:15:37,065 INFO [Listener at localhost/33427] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/e4f4a903-c5b5-3e76-c76e-62a71f0612cc/java.io.tmpdir/Jetty_localhost_43699_datanode____ihd466/webapp 2023-07-17 11:15:37,143 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x439daaa214cdff98: Processing first storage report for DS-2bbda707-8935-4d1f-b0ec-3fa0f32b8627 from datanode f5374cce-db2a-4335-95cf-460dc7ce1306 2023-07-17 11:15:37,143 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x439daaa214cdff98: from storage DS-2bbda707-8935-4d1f-b0ec-3fa0f32b8627 node DatanodeRegistration(127.0.0.1:42893, datanodeUuid=f5374cce-db2a-4335-95cf-460dc7ce1306, infoPort=35785, infoSecurePort=0, ipcPort=33427, storageInfo=lv=-57;cid=testClusterID;nsid=550068019;c=1689592536481), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-17 11:15:37,143 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x439daaa214cdff98: Processing first storage report for DS-8a034513-5c14-4060-ac1f-2b9c1a3e1a7f from datanode f5374cce-db2a-4335-95cf-460dc7ce1306 2023-07-17 11:15:37,143 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x439daaa214cdff98: from storage DS-8a034513-5c14-4060-ac1f-2b9c1a3e1a7f node DatanodeRegistration(127.0.0.1:42893, datanodeUuid=f5374cce-db2a-4335-95cf-460dc7ce1306, infoPort=35785, infoSecurePort=0, ipcPort=33427, storageInfo=lv=-57;cid=testClusterID;nsid=550068019;c=1689592536481), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-17 11:15:37,195 INFO [Listener at localhost/33427] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:43699 2023-07-17 11:15:37,202 WARN [Listener at localhost/33721] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-17 11:15:37,295 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x685e36ba74e10d7e: Processing first storage report for DS-059260ba-dc4a-47f2-a714-cdfcaeec5081 from datanode 5c71ddd5-7b68-4d56-b9ea-2cad1c8ea6e6 2023-07-17 11:15:37,295 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x685e36ba74e10d7e: from storage DS-059260ba-dc4a-47f2-a714-cdfcaeec5081 node DatanodeRegistration(127.0.0.1:39583, datanodeUuid=5c71ddd5-7b68-4d56-b9ea-2cad1c8ea6e6, infoPort=42545, infoSecurePort=0, ipcPort=33721, storageInfo=lv=-57;cid=testClusterID;nsid=550068019;c=1689592536481), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-17 11:15:37,295 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x685e36ba74e10d7e: Processing first storage 
report for DS-b2d04608-da56-4322-9274-9f574bf493f5 from datanode 5c71ddd5-7b68-4d56-b9ea-2cad1c8ea6e6 2023-07-17 11:15:37,296 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x685e36ba74e10d7e: from storage DS-b2d04608-da56-4322-9274-9f574bf493f5 node DatanodeRegistration(127.0.0.1:39583, datanodeUuid=5c71ddd5-7b68-4d56-b9ea-2cad1c8ea6e6, infoPort=42545, infoSecurePort=0, ipcPort=33721, storageInfo=lv=-57;cid=testClusterID;nsid=550068019;c=1689592536481), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-17 11:15:37,311 DEBUG [Listener at localhost/33721] hbase.HBaseTestingUtility(649): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/e4f4a903-c5b5-3e76-c76e-62a71f0612cc 2023-07-17 11:15:37,313 INFO [Listener at localhost/33721] zookeeper.MiniZooKeeperCluster(258): Started connectionTimeout=30000, dir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/e4f4a903-c5b5-3e76-c76e-62a71f0612cc/cluster_7f27a7fe-51d2-c427-02ab-5b6cae838d2a/zookeeper_0, clientPort=57231, secureClientPort=-1, dataDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/e4f4a903-c5b5-3e76-c76e-62a71f0612cc/cluster_7f27a7fe-51d2-c427-02ab-5b6cae838d2a/zookeeper_0/version-2, dataDirSize=424 dataLogDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/e4f4a903-c5b5-3e76-c76e-62a71f0612cc/cluster_7f27a7fe-51d2-c427-02ab-5b6cae838d2a/zookeeper_0/version-2, dataLogSize=424 tickTime=2000, maxClientCnxns=300, minSessionTimeout=4000, maxSessionTimeout=40000, serverId=0 2023-07-17 11:15:37,314 INFO [Listener at localhost/33721] zookeeper.MiniZooKeeperCluster(283): Started MiniZooKeeperCluster and ran 'stat' on client port=57231 2023-07-17 11:15:37,314 INFO [Listener at localhost/33721] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-17 11:15:37,315 INFO [Listener at localhost/33721] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-17 11:15:37,328 INFO [Listener at localhost/33721] util.FSUtils(471): Created version file at hdfs://localhost:35473/user/jenkins/test-data/1c20e0e4-e5a0-9ede-ceba-95bc6751880b with version=8 2023-07-17 11:15:37,328 INFO [Listener at localhost/33721] hbase.HBaseTestingUtility(1408): The hbase.fs.tmp.dir is set to hdfs://localhost:41739/user/jenkins/test-data/b3d7251a-f842-58c9-9380-652368b0df5e/hbase-staging 2023-07-17 11:15:37,329 DEBUG [Listener at localhost/33721] hbase.LocalHBaseCluster(134): Setting Master Port to random. 2023-07-17 11:15:37,329 DEBUG [Listener at localhost/33721] hbase.LocalHBaseCluster(141): Setting RegionServer Port to random. 2023-07-17 11:15:37,329 DEBUG [Listener at localhost/33721] hbase.LocalHBaseCluster(151): Setting RS InfoServer Port to random. 2023-07-17 11:15:37,329 DEBUG [Listener at localhost/33721] hbase.LocalHBaseCluster(159): Setting Master InfoServer Port to random. 
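The teardown and fresh startup recorded between "Minicluster is down" and the LocalHBaseCluster port assignments above correspond, roughly, to the test-utility calls sketched below. The StartMiniClusterOption builder method names follow the HBase 2.x testing API as I understand it, so this is an approximation of what the test harness does, not its actual code.

    import org.apache.hadoop.hbase.HBaseTestingUtility;
    import org.apache.hadoop.hbase.StartMiniClusterOption;

    // Rough sketch of the shutdown/startup cycle seen above: 1 master, 3 region
    // servers, 3 datanodes, 1 ZK server. Treat builder method names as assumptions.
    public class MiniClusterCycle {
      public static void main(String[] args) throws Exception {
        HBaseTestingUtility util = new HBaseTestingUtility();
        StartMiniClusterOption option = StartMiniClusterOption.builder()
            .numMasters(1)
            .numRegionServers(3)
            .numDataNodes(3)
            .numZkServers(1)
            .build();
        util.startMiniCluster(option);   // "Starting up minicluster with option: ..."
        try {
          // ... run test assertions against util.getAdmin(), util.getConnection(), etc.
        } finally {
          util.shutdownMiniCluster();    // "Minicluster is down"
        }
      }
    }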
2023-07-17 11:15:37,330 INFO [Listener at localhost/33721] client.ConnectionUtils(127): master/jenkins-hbase4:0 server-side Connection retries=45 2023-07-17 11:15:37,330 INFO [Listener at localhost/33721] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-17 11:15:37,330 INFO [Listener at localhost/33721] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-17 11:15:37,330 INFO [Listener at localhost/33721] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-17 11:15:37,330 INFO [Listener at localhost/33721] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-17 11:15:37,330 INFO [Listener at localhost/33721] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-17 11:15:37,330 INFO [Listener at localhost/33721] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.MasterService, hbase.pb.RegionServerStatusService, hbase.pb.LockService, hbase.pb.HbckService, hbase.pb.ClientMetaService, hbase.pb.ClientService, hbase.pb.AdminService 2023-07-17 11:15:37,331 INFO [Listener at localhost/33721] ipc.NettyRpcServer(120): Bind to /172.31.14.131:35293 2023-07-17 11:15:37,331 INFO [Listener at localhost/33721] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-17 11:15:37,332 INFO [Listener at localhost/33721] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-17 11:15:37,333 INFO [Listener at localhost/33721] zookeeper.RecoverableZooKeeper(93): Process identifier=master:35293 connecting to ZooKeeper ensemble=127.0.0.1:57231 2023-07-17 11:15:37,339 DEBUG [Listener at localhost/33721-EventThread] zookeeper.ZKWatcher(600): master:352930x0, quorum=127.0.0.1:57231, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-17 11:15:37,340 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): master:35293-0x10172fea3e40000 connected 2023-07-17 11:15:37,355 DEBUG [Listener at localhost/33721] zookeeper.ZKUtil(164): master:35293-0x10172fea3e40000, quorum=127.0.0.1:57231, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-17 11:15:37,355 DEBUG [Listener at localhost/33721] zookeeper.ZKUtil(164): master:35293-0x10172fea3e40000, quorum=127.0.0.1:57231, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-17 11:15:37,356 DEBUG [Listener at localhost/33721] zookeeper.ZKUtil(164): master:35293-0x10172fea3e40000, quorum=127.0.0.1:57231, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-17 11:15:37,357 DEBUG [Listener at localhost/33721] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=35293 2023-07-17 11:15:37,357 DEBUG [Listener at localhost/33721] 
ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=35293 2023-07-17 11:15:37,357 DEBUG [Listener at localhost/33721] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=35293 2023-07-17 11:15:37,357 DEBUG [Listener at localhost/33721] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=35293 2023-07-17 11:15:37,357 DEBUG [Listener at localhost/33721] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=35293 2023-07-17 11:15:37,359 INFO [Listener at localhost/33721] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-17 11:15:37,359 INFO [Listener at localhost/33721] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-17 11:15:37,359 INFO [Listener at localhost/33721] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-17 11:15:37,360 INFO [Listener at localhost/33721] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context master 2023-07-17 11:15:37,360 INFO [Listener at localhost/33721] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-17 11:15:37,360 INFO [Listener at localhost/33721] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-17 11:15:37,360 INFO [Listener at localhost/33721] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
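The small handlerCount and queue figures in the RpcExecutor entries come from a deliberately reduced test configuration. As a rough illustration only: the RPC handler pool size is normally governed by hbase.regionserver.handler.count, which a test can shrink before starting the cluster; whether that single key accounts for every number shown here is an assumption on my part, not something the log states.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    // Illustrative only: shrink the RPC handler pool the way a mini-cluster test might.
    // "hbase.regionserver.handler.count" is the standard handler-count key; mapping it
    // to the exact handlerCount=3 above is an assumption, not read from the log.
    public class SmallRpcConfig {
      public static void main(String[] args) {
        Configuration conf = HBaseConfiguration.create();
        conf.setInt("hbase.regionserver.handler.count", 3);
        System.out.println(conf.getInt("hbase.regionserver.handler.count", 30));
      }
    }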
2023-07-17 11:15:37,361 INFO [Listener at localhost/33721] http.HttpServer(1146): Jetty bound to port 33089 2023-07-17 11:15:37,361 INFO [Listener at localhost/33721] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-17 11:15:37,362 INFO [Listener at localhost/33721] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-17 11:15:37,362 INFO [Listener at localhost/33721] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@5e814646{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/e4f4a903-c5b5-3e76-c76e-62a71f0612cc/hadoop.log.dir/,AVAILABLE} 2023-07-17 11:15:37,362 INFO [Listener at localhost/33721] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-17 11:15:37,363 INFO [Listener at localhost/33721] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@7d808172{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,AVAILABLE} 2023-07-17 11:15:37,368 INFO [Listener at localhost/33721] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-17 11:15:37,369 INFO [Listener at localhost/33721] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-17 11:15:37,369 INFO [Listener at localhost/33721] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-17 11:15:37,369 INFO [Listener at localhost/33721] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-17 11:15:37,370 INFO [Listener at localhost/33721] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-17 11:15:37,371 INFO [Listener at localhost/33721] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@26f18ca9{master,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/master/,AVAILABLE}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/master} 2023-07-17 11:15:37,372 INFO [Listener at localhost/33721] server.AbstractConnector(333): Started ServerConnector@1c50d6e9{HTTP/1.1, (http/1.1)}{0.0.0.0:33089} 2023-07-17 11:15:37,372 INFO [Listener at localhost/33721] server.Server(415): Started @39771ms 2023-07-17 11:15:37,372 INFO [Listener at localhost/33721] master.HMaster(444): hbase.rootdir=hdfs://localhost:35473/user/jenkins/test-data/1c20e0e4-e5a0-9ede-ceba-95bc6751880b, hbase.cluster.distributed=false 2023-07-17 11:15:37,385 INFO [Listener at localhost/33721] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-17 11:15:37,385 INFO [Listener at localhost/33721] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-17 11:15:37,385 INFO [Listener at localhost/33721] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-17 11:15:37,385 INFO [Listener at localhost/33721] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-17 
11:15:37,385 INFO [Listener at localhost/33721] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-17 11:15:37,385 INFO [Listener at localhost/33721] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-17 11:15:37,385 INFO [Listener at localhost/33721] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-17 11:15:37,386 INFO [Listener at localhost/33721] ipc.NettyRpcServer(120): Bind to /172.31.14.131:40865 2023-07-17 11:15:37,386 INFO [Listener at localhost/33721] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-17 11:15:37,387 DEBUG [Listener at localhost/33721] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-17 11:15:37,388 INFO [Listener at localhost/33721] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-17 11:15:37,389 INFO [Listener at localhost/33721] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-17 11:15:37,389 INFO [Listener at localhost/33721] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:40865 connecting to ZooKeeper ensemble=127.0.0.1:57231 2023-07-17 11:15:37,393 DEBUG [Listener at localhost/33721-EventThread] zookeeper.ZKWatcher(600): regionserver:408650x0, quorum=127.0.0.1:57231, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-17 11:15:37,395 DEBUG [Listener at localhost/33721] zookeeper.ZKUtil(164): regionserver:408650x0, quorum=127.0.0.1:57231, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-17 11:15:37,395 DEBUG [Listener at localhost/33721] zookeeper.ZKUtil(164): regionserver:408650x0, quorum=127.0.0.1:57231, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-17 11:15:37,396 DEBUG [Listener at localhost/33721] zookeeper.ZKUtil(164): regionserver:408650x0, quorum=127.0.0.1:57231, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-17 11:15:37,399 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:40865-0x10172fea3e40001 connected 2023-07-17 11:15:37,399 DEBUG [Listener at localhost/33721] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=40865 2023-07-17 11:15:37,399 DEBUG [Listener at localhost/33721] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=40865 2023-07-17 11:15:37,399 DEBUG [Listener at localhost/33721] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=40865 2023-07-17 11:15:37,400 DEBUG [Listener at localhost/33721] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=40865 2023-07-17 11:15:37,400 DEBUG [Listener at localhost/33721] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, 
port=40865 2023-07-17 11:15:37,402 INFO [Listener at localhost/33721] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-17 11:15:37,402 INFO [Listener at localhost/33721] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-17 11:15:37,402 INFO [Listener at localhost/33721] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-17 11:15:37,402 INFO [Listener at localhost/33721] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-17 11:15:37,402 INFO [Listener at localhost/33721] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-17 11:15:37,402 INFO [Listener at localhost/33721] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-17 11:15:37,403 INFO [Listener at localhost/33721] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-17 11:15:37,403 INFO [Listener at localhost/33721] http.HttpServer(1146): Jetty bound to port 35573 2023-07-17 11:15:37,403 INFO [Listener at localhost/33721] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-17 11:15:37,404 INFO [Listener at localhost/33721] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-17 11:15:37,404 INFO [Listener at localhost/33721] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@585d37a7{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/e4f4a903-c5b5-3e76-c76e-62a71f0612cc/hadoop.log.dir/,AVAILABLE} 2023-07-17 11:15:37,405 INFO [Listener at localhost/33721] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-17 11:15:37,405 INFO [Listener at localhost/33721] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@6c9b85fb{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,AVAILABLE} 2023-07-17 11:15:37,409 INFO [Listener at localhost/33721] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-17 11:15:37,410 INFO [Listener at localhost/33721] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-17 11:15:37,410 INFO [Listener at localhost/33721] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-17 11:15:37,410 INFO [Listener at localhost/33721] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-17 11:15:37,412 INFO [Listener at localhost/33721] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-17 11:15:37,412 INFO [Listener at localhost/33721] handler.ContextHandler(921): Started 
o.a.h.t.o.e.j.w.WebAppContext@6433591e{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver/,AVAILABLE}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver} 2023-07-17 11:15:37,413 INFO [Listener at localhost/33721] server.AbstractConnector(333): Started ServerConnector@75805bd3{HTTP/1.1, (http/1.1)}{0.0.0.0:35573} 2023-07-17 11:15:37,413 INFO [Listener at localhost/33721] server.Server(415): Started @39812ms 2023-07-17 11:15:37,425 INFO [Listener at localhost/33721] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-17 11:15:37,426 INFO [Listener at localhost/33721] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-17 11:15:37,426 INFO [Listener at localhost/33721] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-17 11:15:37,426 INFO [Listener at localhost/33721] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-17 11:15:37,426 INFO [Listener at localhost/33721] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-17 11:15:37,426 INFO [Listener at localhost/33721] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-17 11:15:37,426 INFO [Listener at localhost/33721] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-17 11:15:37,427 INFO [Listener at localhost/33721] ipc.NettyRpcServer(120): Bind to /172.31.14.131:32841 2023-07-17 11:15:37,427 INFO [Listener at localhost/33721] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-17 11:15:37,428 DEBUG [Listener at localhost/33721] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-17 11:15:37,428 INFO [Listener at localhost/33721] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-17 11:15:37,429 INFO [Listener at localhost/33721] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-17 11:15:37,430 INFO [Listener at localhost/33721] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:32841 connecting to ZooKeeper ensemble=127.0.0.1:57231 2023-07-17 11:15:37,433 DEBUG [Listener at localhost/33721-EventThread] zookeeper.ZKWatcher(600): regionserver:328410x0, quorum=127.0.0.1:57231, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-17 11:15:37,434 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:32841-0x10172fea3e40002 connected 2023-07-17 11:15:37,435 DEBUG [Listener at localhost/33721] zookeeper.ZKUtil(164): 
regionserver:32841-0x10172fea3e40002, quorum=127.0.0.1:57231, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-17 11:15:37,435 DEBUG [Listener at localhost/33721] zookeeper.ZKUtil(164): regionserver:32841-0x10172fea3e40002, quorum=127.0.0.1:57231, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-17 11:15:37,435 DEBUG [Listener at localhost/33721] zookeeper.ZKUtil(164): regionserver:32841-0x10172fea3e40002, quorum=127.0.0.1:57231, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-17 11:15:37,436 DEBUG [Listener at localhost/33721] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=32841 2023-07-17 11:15:37,436 DEBUG [Listener at localhost/33721] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=32841 2023-07-17 11:15:37,436 DEBUG [Listener at localhost/33721] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=32841 2023-07-17 11:15:37,436 DEBUG [Listener at localhost/33721] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=32841 2023-07-17 11:15:37,437 DEBUG [Listener at localhost/33721] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=32841 2023-07-17 11:15:37,438 INFO [Listener at localhost/33721] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-17 11:15:37,438 INFO [Listener at localhost/33721] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-17 11:15:37,438 INFO [Listener at localhost/33721] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-17 11:15:37,439 INFO [Listener at localhost/33721] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-17 11:15:37,439 INFO [Listener at localhost/33721] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-17 11:15:37,439 INFO [Listener at localhost/33721] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-17 11:15:37,439 INFO [Listener at localhost/33721] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
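The repeated "Set watcher on znode that does not yet exist" entries reflect standard ZooKeeper semantics: exists() on a missing path returns null but still registers the watch, so the watcher fires once /hbase/master (or /hbase/running, /hbase/acl) is later created. A plain-ZooKeeper sketch of that behavior follows; the quorum address is illustrative, not the ephemeral test port from the log.

    import org.apache.zookeeper.WatchedEvent;
    import org.apache.zookeeper.Watcher;
    import org.apache.zookeeper.ZooKeeper;
    import org.apache.zookeeper.data.Stat;

    // Sketch: register a watch on a znode that may not exist yet. exists() returns
    // null for a missing node but still arms the watch for its creation.
    public class WatchMissingZnode {
      public static void main(String[] args) throws Exception {
        Watcher watcher = (WatchedEvent event) ->
            System.out.println("event: " + event.getType() + " on " + event.getPath());
        ZooKeeper zk = new ZooKeeper("127.0.0.1:2181", 30000, watcher); // address is illustrative

        Stat stat = zk.exists("/hbase/master", watcher);
        System.out.println(stat == null
            ? "znode absent; watch registered for creation"
            : "znode present: " + stat);
        zk.close();
      }
    }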
2023-07-17 11:15:37,439 INFO [Listener at localhost/33721] http.HttpServer(1146): Jetty bound to port 37393 2023-07-17 11:15:37,439 INFO [Listener at localhost/33721] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-17 11:15:37,441 INFO [Listener at localhost/33721] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-17 11:15:37,441 INFO [Listener at localhost/33721] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@319792b3{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/e4f4a903-c5b5-3e76-c76e-62a71f0612cc/hadoop.log.dir/,AVAILABLE} 2023-07-17 11:15:37,441 INFO [Listener at localhost/33721] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-17 11:15:37,441 INFO [Listener at localhost/33721] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@631c36f1{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,AVAILABLE} 2023-07-17 11:15:37,445 INFO [Listener at localhost/33721] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-17 11:15:37,446 INFO [Listener at localhost/33721] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-17 11:15:37,446 INFO [Listener at localhost/33721] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-17 11:15:37,446 INFO [Listener at localhost/33721] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-17 11:15:37,447 INFO [Listener at localhost/33721] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-17 11:15:37,447 INFO [Listener at localhost/33721] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@4ed58f27{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver/,AVAILABLE}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver} 2023-07-17 11:15:37,449 INFO [Listener at localhost/33721] server.AbstractConnector(333): Started ServerConnector@2cbc5b97{HTTP/1.1, (http/1.1)}{0.0.0.0:37393} 2023-07-17 11:15:37,449 INFO [Listener at localhost/33721] server.Server(415): Started @39848ms 2023-07-17 11:15:37,460 INFO [Listener at localhost/33721] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-17 11:15:37,460 INFO [Listener at localhost/33721] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-17 11:15:37,460 INFO [Listener at localhost/33721] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-17 11:15:37,461 INFO [Listener at localhost/33721] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-17 11:15:37,461 INFO [Listener at localhost/33721] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, 
maxQueueLength=30, handlerCount=3 2023-07-17 11:15:37,461 INFO [Listener at localhost/33721] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-17 11:15:37,461 INFO [Listener at localhost/33721] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-17 11:15:37,461 INFO [Listener at localhost/33721] ipc.NettyRpcServer(120): Bind to /172.31.14.131:32847 2023-07-17 11:15:37,462 INFO [Listener at localhost/33721] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-17 11:15:37,463 DEBUG [Listener at localhost/33721] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-17 11:15:37,463 INFO [Listener at localhost/33721] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-17 11:15:37,464 INFO [Listener at localhost/33721] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-17 11:15:37,465 INFO [Listener at localhost/33721] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:32847 connecting to ZooKeeper ensemble=127.0.0.1:57231 2023-07-17 11:15:37,468 DEBUG [Listener at localhost/33721-EventThread] zookeeper.ZKWatcher(600): regionserver:328470x0, quorum=127.0.0.1:57231, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-17 11:15:37,469 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:32847-0x10172fea3e40003 connected 2023-07-17 11:15:37,469 DEBUG [Listener at localhost/33721] zookeeper.ZKUtil(164): regionserver:328470x0, quorum=127.0.0.1:57231, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-17 11:15:37,470 DEBUG [Listener at localhost/33721] zookeeper.ZKUtil(164): regionserver:32847-0x10172fea3e40003, quorum=127.0.0.1:57231, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-17 11:15:37,470 DEBUG [Listener at localhost/33721] zookeeper.ZKUtil(164): regionserver:32847-0x10172fea3e40003, quorum=127.0.0.1:57231, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-17 11:15:37,470 DEBUG [Listener at localhost/33721] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=32847 2023-07-17 11:15:37,471 DEBUG [Listener at localhost/33721] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=32847 2023-07-17 11:15:37,471 DEBUG [Listener at localhost/33721] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=32847 2023-07-17 11:15:37,471 DEBUG [Listener at localhost/33721] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=32847 2023-07-17 11:15:37,471 DEBUG [Listener at localhost/33721] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=32847 2023-07-17 11:15:37,473 INFO [Listener at localhost/33721] http.HttpServer(900): Added global filter 'safety' 
(class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-17 11:15:37,473 INFO [Listener at localhost/33721] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-17 11:15:37,473 INFO [Listener at localhost/33721] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-17 11:15:37,474 INFO [Listener at localhost/33721] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-17 11:15:37,474 INFO [Listener at localhost/33721] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-17 11:15:37,474 INFO [Listener at localhost/33721] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-17 11:15:37,474 INFO [Listener at localhost/33721] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-17 11:15:37,474 INFO [Listener at localhost/33721] http.HttpServer(1146): Jetty bound to port 42279 2023-07-17 11:15:37,474 INFO [Listener at localhost/33721] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-17 11:15:37,476 INFO [Listener at localhost/33721] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-17 11:15:37,476 INFO [Listener at localhost/33721] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@2d0d8e68{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/e4f4a903-c5b5-3e76-c76e-62a71f0612cc/hadoop.log.dir/,AVAILABLE} 2023-07-17 11:15:37,476 INFO [Listener at localhost/33721] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-17 11:15:37,476 INFO [Listener at localhost/33721] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@2b102eb4{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,AVAILABLE} 2023-07-17 11:15:37,481 INFO [Listener at localhost/33721] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-17 11:15:37,482 INFO [Listener at localhost/33721] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-17 11:15:37,482 INFO [Listener at localhost/33721] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-17 11:15:37,482 INFO [Listener at localhost/33721] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-17 11:15:37,483 INFO [Listener at localhost/33721] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-17 11:15:37,483 INFO [Listener at localhost/33721] handler.ContextHandler(921): Started 
o.a.h.t.o.e.j.w.WebAppContext@30b69b2e{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver/,AVAILABLE}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver} 2023-07-17 11:15:37,485 INFO [Listener at localhost/33721] server.AbstractConnector(333): Started ServerConnector@32b1a200{HTTP/1.1, (http/1.1)}{0.0.0.0:42279} 2023-07-17 11:15:37,485 INFO [Listener at localhost/33721] server.Server(415): Started @39884ms 2023-07-17 11:15:37,487 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-17 11:15:37,493 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.AbstractConnector(333): Started ServerConnector@25b5d85a{HTTP/1.1, (http/1.1)}{0.0.0.0:38385} 2023-07-17 11:15:37,494 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.Server(415): Started @39892ms 2023-07-17 11:15:37,494 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2168): Adding backup master ZNode /hbase/backup-masters/jenkins-hbase4.apache.org,35293,1689592537329 2023-07-17 11:15:37,495 DEBUG [Listener at localhost/33721-EventThread] zookeeper.ZKWatcher(600): master:35293-0x10172fea3e40000, quorum=127.0.0.1:57231, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-07-17 11:15:37,495 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:35293-0x10172fea3e40000, quorum=127.0.0.1:57231, baseZNode=/hbase Set watcher on existing znode=/hbase/backup-masters/jenkins-hbase4.apache.org,35293,1689592537329 2023-07-17 11:15:37,497 DEBUG [Listener at localhost/33721-EventThread] zookeeper.ZKWatcher(600): regionserver:40865-0x10172fea3e40001, quorum=127.0.0.1:57231, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-17 11:15:37,497 DEBUG [Listener at localhost/33721-EventThread] zookeeper.ZKWatcher(600): regionserver:32847-0x10172fea3e40003, quorum=127.0.0.1:57231, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-17 11:15:37,497 DEBUG [Listener at localhost/33721-EventThread] zookeeper.ZKWatcher(600): master:35293-0x10172fea3e40000, quorum=127.0.0.1:57231, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-17 11:15:37,498 DEBUG [Listener at localhost/33721-EventThread] zookeeper.ZKWatcher(600): regionserver:32841-0x10172fea3e40002, quorum=127.0.0.1:57231, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-17 11:15:37,498 DEBUG [Listener at localhost/33721-EventThread] zookeeper.ZKWatcher(600): master:35293-0x10172fea3e40000, quorum=127.0.0.1:57231, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-17 11:15:37,500 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:35293-0x10172fea3e40000, quorum=127.0.0.1:57231, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-17 11:15:37,501 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ActiveMasterManager(227): Deleting ZNode for /hbase/backup-masters/jenkins-hbase4.apache.org,35293,1689592537329 from backup master directory 2023-07-17 
11:15:37,501 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): master:35293-0x10172fea3e40000, quorum=127.0.0.1:57231, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-17 11:15:37,502 DEBUG [Listener at localhost/33721-EventThread] zookeeper.ZKWatcher(600): master:35293-0x10172fea3e40000, quorum=127.0.0.1:57231, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/backup-masters/jenkins-hbase4.apache.org,35293,1689592537329 2023-07-17 11:15:37,502 DEBUG [Listener at localhost/33721-EventThread] zookeeper.ZKWatcher(600): master:35293-0x10172fea3e40000, quorum=127.0.0.1:57231, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-07-17 11:15:37,502 WARN [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-17 11:15:37,503 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ActiveMasterManager(237): Registered as active master=jenkins-hbase4.apache.org,35293,1689592537329 2023-07-17 11:15:37,517 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] util.FSUtils(620): Created cluster ID file at hdfs://localhost:35473/user/jenkins/test-data/1c20e0e4-e5a0-9ede-ceba-95bc6751880b/hbase.id with ID: 324e1712-be1d-4769-b000-883702e9bf9e 2023-07-17 11:15:37,526 INFO [master/jenkins-hbase4:0:becomeActiveMaster] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-17 11:15:37,530 DEBUG [Listener at localhost/33721-EventThread] zookeeper.ZKWatcher(600): master:35293-0x10172fea3e40000, quorum=127.0.0.1:57231, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-17 11:15:37,541 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ReadOnlyZKClient(139): Connect 0x658fd9e7 to 127.0.0.1:57231 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-17 11:15:37,545 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@4e1d6eac, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-17 11:15:37,545 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(309): Create or load local region for table 'master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-17 11:15:37,546 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(132): Injected flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000 2023-07-17 11:15:37,546 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-17 11:15:37,547 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7693): Creating {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, 
tableDescriptor='master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, under table dir hdfs://localhost:35473/user/jenkins/test-data/1c20e0e4-e5a0-9ede-ceba-95bc6751880b/MasterData/data/master/store-tmp 2023-07-17 11:15:37,555 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-17 11:15:37,555 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-07-17 11:15:37,555 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-17 11:15:37,555 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-17 11:15:37,555 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-07-17 11:15:37,555 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-17 11:15:37,555 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
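Note: the master logs the master:store descriptor with its single 'proc' family (VERSIONS=1, BLOOMFILTER=ROW, BLOCKSIZE=65536, IN_MEMORY=false). Below is a hedged sketch of building an equivalent descriptor with the HBase 2.x client builders; the table name "demo:store_like" is hypothetical, and the attribute values are simply copied from the log line.

```java
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptor;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.TableDescriptor;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
import org.apache.hadoop.hbase.regionserver.BloomType;
import org.apache.hadoop.hbase.util.Bytes;

public class StoreDescriptorSketch {
    public static void main(String[] args) {
        // Column family mirroring the 'proc' attributes printed in the log.
        ColumnFamilyDescriptor proc = ColumnFamilyDescriptorBuilder
                .newBuilder(Bytes.toBytes("proc"))
                .setMaxVersions(1)                 // VERSIONS => '1'
                .setBloomFilterType(BloomType.ROW) // BLOOMFILTER => 'ROW'
                .setBlocksize(65536)               // BLOCKSIZE => '65536'
                .setInMemory(false)                // IN_MEMORY => 'false'
                .build();

        // Hypothetical table name; the real region here is the internal master:store.
        TableDescriptor td = TableDescriptorBuilder
                .newBuilder(TableName.valueOf("demo:store_like"))
                .setColumnFamily(proc)
                .build();

        System.out.println(td);
    }
}
```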
2023-07-17 11:15:37,555 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-17 11:15:37,556 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(191): WALDir=hdfs://localhost:35473/user/jenkins/test-data/1c20e0e4-e5a0-9ede-ceba-95bc6751880b/MasterData/WALs/jenkins-hbase4.apache.org,35293,1689592537329 2023-07-17 11:15:37,558 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C35293%2C1689592537329, suffix=, logDir=hdfs://localhost:35473/user/jenkins/test-data/1c20e0e4-e5a0-9ede-ceba-95bc6751880b/MasterData/WALs/jenkins-hbase4.apache.org,35293,1689592537329, archiveDir=hdfs://localhost:35473/user/jenkins/test-data/1c20e0e4-e5a0-9ede-ceba-95bc6751880b/MasterData/oldWALs, maxLogs=10 2023-07-17 11:15:37,573 DEBUG [RS-EventLoopGroup-15-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:34879,DS-35b57e0d-606a-455a-9013-50414e4940ce,DISK] 2023-07-17 11:15:37,573 DEBUG [RS-EventLoopGroup-15-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:42893,DS-2bbda707-8935-4d1f-b0ec-3fa0f32b8627,DISK] 2023-07-17 11:15:37,573 DEBUG [RS-EventLoopGroup-15-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:39583,DS-059260ba-dc4a-47f2-a714-cdfcaeec5081,DISK] 2023-07-17 11:15:37,575 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/1c20e0e4-e5a0-9ede-ceba-95bc6751880b/MasterData/WALs/jenkins-hbase4.apache.org,35293,1689592537329/jenkins-hbase4.apache.org%2C35293%2C1689592537329.1689592537558 2023-07-17 11:15:37,575 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:34879,DS-35b57e0d-606a-455a-9013-50414e4940ce,DISK], DatanodeInfoWithStorage[127.0.0.1:42893,DS-2bbda707-8935-4d1f-b0ec-3fa0f32b8627,DISK], DatanodeInfoWithStorage[127.0.0.1:39583,DS-059260ba-dc4a-47f2-a714-cdfcaeec5081,DISK]] 2023-07-17 11:15:37,575 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7854): Opening region: {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''} 2023-07-17 11:15:37,575 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-17 11:15:37,576 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7894): checking encryption for 1595e783b53d99cd5eef43b6debb2682 2023-07-17 11:15:37,576 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7897): checking classloading for 1595e783b53d99cd5eef43b6debb2682 2023-07-17 11:15:37,577 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, 
cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family proc of region 1595e783b53d99cd5eef43b6debb2682 2023-07-17 11:15:37,578 DEBUG [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:35473/user/jenkins/test-data/1c20e0e4-e5a0-9ede-ceba-95bc6751880b/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc 2023-07-17 11:15:37,579 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1595e783b53d99cd5eef43b6debb2682 columnFamilyName proc 2023-07-17 11:15:37,579 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(310): Store=1595e783b53d99cd5eef43b6debb2682/proc, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-17 11:15:37,580 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:35473/user/jenkins/test-data/1c20e0e4-e5a0-9ede-ceba-95bc6751880b/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-07-17 11:15:37,580 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:35473/user/jenkins/test-data/1c20e0e4-e5a0-9ede-ceba-95bc6751880b/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-07-17 11:15:37,583 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1055): writing seq id for 1595e783b53d99cd5eef43b6debb2682 2023-07-17 11:15:37,585 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:35473/user/jenkins/test-data/1c20e0e4-e5a0-9ede-ceba-95bc6751880b/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-17 11:15:37,585 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1072): Opened 1595e783b53d99cd5eef43b6debb2682; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10576130080, jitterRate=-0.015021130442619324}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-17 11:15:37,585 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(965): Region open journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-17 11:15:37,588 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(122): Constructor flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000, compactMin=4 2023-07-17 11:15:37,590 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(104): Starting the Region Procedure Store, number threads=5 2023-07-17 11:15:37,590 INFO 
[master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(562): Starting 5 core workers (bigger of cpus/4 or 16) with max (burst) worker count=50 2023-07-17 11:15:37,590 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(255): Starting Region Procedure Store lease recovery... 2023-07-17 11:15:37,590 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(582): Recovered RegionProcedureStore lease in 0 msec 2023-07-17 11:15:37,590 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(596): Loaded RegionProcedureStore in 0 msec 2023-07-17 11:15:37,590 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.RemoteProcedureDispatcher(96): Instantiated, coreThreads=3 (allowCoreThreadTimeOut=true), queueMaxSize=32, operationDelay=150 2023-07-17 11:15:37,591 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(253): hbase:meta replica znodes: [] 2023-07-17 11:15:37,592 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.RegionServerTracker(124): Starting RegionServerTracker; 0 have existing ServerCrashProcedures, 0 possibly 'live' servers, and 0 'splitting'. 2023-07-17 11:15:37,593 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:35293-0x10172fea3e40000, quorum=127.0.0.1:57231, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/balancer 2023-07-17 11:15:37,593 INFO [master/jenkins-hbase4:0:becomeActiveMaster] normalizer.RegionNormalizerWorker(118): Normalizer rate limit set to unlimited 2023-07-17 11:15:37,593 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:35293-0x10172fea3e40000, quorum=127.0.0.1:57231, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/normalizer 2023-07-17 11:15:37,597 DEBUG [Listener at localhost/33721-EventThread] zookeeper.ZKWatcher(600): master:35293-0x10172fea3e40000, quorum=127.0.0.1:57231, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-17 11:15:37,597 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:35293-0x10172fea3e40000, quorum=127.0.0.1:57231, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/split 2023-07-17 11:15:37,598 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:35293-0x10172fea3e40000, quorum=127.0.0.1:57231, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/merge 2023-07-17 11:15:37,599 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:35293-0x10172fea3e40000, quorum=127.0.0.1:57231, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/snapshot-cleanup 2023-07-17 11:15:37,600 DEBUG [Listener at localhost/33721-EventThread] zookeeper.ZKWatcher(600): master:35293-0x10172fea3e40000, quorum=127.0.0.1:57231, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-17 11:15:37,600 DEBUG [Listener at localhost/33721-EventThread] zookeeper.ZKWatcher(600): regionserver:32847-0x10172fea3e40003, quorum=127.0.0.1:57231, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-17 11:15:37,600 DEBUG [Listener at localhost/33721-EventThread] zookeeper.ZKWatcher(600): master:35293-0x10172fea3e40000, quorum=127.0.0.1:57231, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, 
path=/hbase 2023-07-17 11:15:37,600 DEBUG [Listener at localhost/33721-EventThread] zookeeper.ZKWatcher(600): regionserver:40865-0x10172fea3e40001, quorum=127.0.0.1:57231, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-17 11:15:37,604 DEBUG [Listener at localhost/33721-EventThread] zookeeper.ZKWatcher(600): regionserver:32841-0x10172fea3e40002, quorum=127.0.0.1:57231, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-17 11:15:37,604 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(744): Active/primary master=jenkins-hbase4.apache.org,35293,1689592537329, sessionid=0x10172fea3e40000, setting cluster-up flag (Was=false) 2023-07-17 11:15:37,610 DEBUG [Listener at localhost/33721-EventThread] zookeeper.ZKWatcher(600): master:35293-0x10172fea3e40000, quorum=127.0.0.1:57231, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-17 11:15:37,615 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/flush-table-proc/acquired, /hbase/flush-table-proc/reached, /hbase/flush-table-proc/abort 2023-07-17 11:15:37,616 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,35293,1689592537329 2023-07-17 11:15:37,619 DEBUG [Listener at localhost/33721-EventThread] zookeeper.ZKWatcher(600): master:35293-0x10172fea3e40000, quorum=127.0.0.1:57231, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-17 11:15:37,624 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/online-snapshot/acquired, /hbase/online-snapshot/reached, /hbase/online-snapshot/abort 2023-07-17 11:15:37,624 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,35293,1689592537329 2023-07-17 11:15:37,625 WARN [master/jenkins-hbase4:0:becomeActiveMaster] snapshot.SnapshotManager(302): Couldn't delete working snapshot directory: hdfs://localhost:35473/user/jenkins/test-data/1c20e0e4-e5a0-9ede-ceba-95bc6751880b/.hbase-snapshot/.tmp 2023-07-17 11:15:37,626 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2930): Registered master coprocessor service: service=RSGroupAdminService 2023-07-17 11:15:37,626 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] rsgroup.RSGroupInfoManagerImpl(537): Refreshing in Offline mode. 2023-07-17 11:15:37,627 INFO [master/jenkins-hbase4:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint loaded, priority=536870911. 2023-07-17 11:15:37,627 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,35293,1689592537329] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-17 11:15:37,628 INFO [master/jenkins-hbase4:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver loaded, priority=536870912. 
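Note: several lines above show ZKUtil setting watchers on znodes that do not yet exist (/hbase/balancer, /hbase/normalizer, /hbase/switch/split) followed by NodeCreated events for /hbase/running. That pattern relies on ZooKeeper.exists() registering a watch even when the node is absent. A minimal sketch with the plain ZooKeeper client, assuming a reachable ensemble; the address and port below are illustrative, not taken from this cluster.

```java
import org.apache.zookeeper.WatchedEvent;
import org.apache.zookeeper.ZooKeeper;
import org.apache.zookeeper.data.Stat;

public class ZNodeWatchSketch {
    public static void main(String[] args) throws Exception {
        // Illustrative ensemble address; the test uses its own mini ZK quorum.
        ZooKeeper zk = new ZooKeeper("127.0.0.1:2181", 90_000,
                (WatchedEvent event) ->
                        System.out.println("event=" + event.getType()
                                + " state=" + event.getState()
                                + " path=" + event.getPath()));

        // exists() arms a watch even if the znode is missing, so the caller is
        // notified with NodeCreated once something later creates /hbase/running.
        Stat stat = zk.exists("/hbase/running", true);
        if (stat == null) {
            System.out.println("znode absent; watch set, waiting for NodeCreated");
        }

        Thread.sleep(5_000); // keep the session open long enough to observe events
        zk.close();
    }
}
```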
2023-07-17 11:15:37,630 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT; InitMetaProcedure table=hbase:meta 2023-07-17 11:15:37,644 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-17 11:15:37,644 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 2023-07-17 11:15:37,645 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-17 11:15:37,645 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 2023-07-17 11:15:37,645 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_OPEN_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-17 11:15:37,645 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_CLOSE_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-17 11:15:37,645 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-17 11:15:37,645 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_META_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-17 11:15:37,645 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=M_LOG_REPLAY_OPS-master/jenkins-hbase4:0, corePoolSize=10, maxPoolSize=10 2023-07-17 11:15:37,645 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-17 11:15:37,645 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_MERGE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-17 11:15:37,645 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_TABLE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-17 11:15:37,648 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, 
state=WAITING_TIMEOUT; org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner; timeout=30000, timestamp=1689592567648 2023-07-17 11:15:37,648 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): log_cleaner Cleaner pool size is 1 2023-07-17 11:15:37,648 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner 2023-07-17 11:15:37,648 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner 2023-07-17 11:15:37,648 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreWALCleaner 2023-07-17 11:15:37,648 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveProcedureWALCleaner 2023-07-17 11:15:37,649 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.LogCleaner(148): Creating 1 old WALs cleaner threads 2023-07-17 11:15:37,649 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT, locked=true; InitMetaProcedure table=hbase:meta 2023-07-17 11:15:37,649 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=LogsCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-07-17 11:15:37,649 INFO [PEWorker-1] procedure.InitMetaProcedure(71): BOOTSTRAP: creating hbase:meta region 2023-07-17 11:15:37,649 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): hfile_cleaner Cleaner pool size is 2 2023-07-17 11:15:37,650 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreHFileCleaner 2023-07-17 11:15:37,650 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.HFileLinkCleaner 2023-07-17 11:15:37,650 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.snapshot.SnapshotHFileCleaner 2023-07-17 11:15:37,650 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner 2023-07-17 11:15:37,650 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(242): Starting for large file=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1689592537650,5,FailOnTimeoutGroup] 2023-07-17 11:15:37,650 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(257): Starting for small files=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1689592537650,5,FailOnTimeoutGroup] 2023-07-17 11:15:37,650 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HFileCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-07-17 11:15:37,650 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1461): Reopening regions with very high storeFileRefCount is disabled. Provide threshold value > 0 for hbase.regions.recovery.store.file.ref.count to enable it. 
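Note: the ChoreService lines report periodic chores such as LogsCleaner (period=600000 ms, MILLISECONDS) and HFileCleaner. As a rough plain-Java analogue, not HBase's ChoreService, a fixed-rate task on a ScheduledExecutorService behaves the same way; the period is taken from the log, the task body is a placeholder.

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class PeriodicCleanerSketch {
    public static void main(String[] args) throws InterruptedException {
        ScheduledExecutorService chorePool = Executors.newSingleThreadScheduledExecutor();

        long periodMs = 600_000; // mirrors period=600000, unit=MILLISECONDS in the log
        chorePool.scheduleAtFixedRate(
                () -> System.out.println("cleaner chore tick: scan and delete old files"),
                0, periodMs, TimeUnit.MILLISECONDS);

        Thread.sleep(1_000);     // let the first tick run
        chorePool.shutdownNow(); // a real service keeps running for the cluster's lifetime
    }
}
```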
2023-07-17 11:15:37,650 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=ReplicationBarrierCleaner, period=43200000, unit=MILLISECONDS is enabled. 2023-07-17 11:15:37,650 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=SnapshotCleaner, period=1800000, unit=MILLISECONDS is enabled. 2023-07-17 11:15:37,651 INFO [PEWorker-1] util.FSTableDescriptors(128): Creating new hbase:meta table descriptor 'hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-07-17 11:15:37,661 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost:35473/user/jenkins/test-data/1c20e0e4-e5a0-9ede-ceba-95bc6751880b/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-07-17 11:15:37,662 INFO [PEWorker-1] util.FSTableDescriptors(135): Updated hbase:meta table descriptor to hdfs://localhost:35473/user/jenkins/test-data/1c20e0e4-e5a0-9ede-ceba-95bc6751880b/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-07-17 11:15:37,662 INFO [PEWorker-1] regionserver.HRegion(7675): creating {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:35473/user/jenkins/test-data/1c20e0e4-e5a0-9ede-ceba-95bc6751880b 2023-07-17 11:15:37,671 DEBUG [PEWorker-1] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-17 11:15:37,672 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, 
cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-17 11:15:37,674 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:35473/user/jenkins/test-data/1c20e0e4-e5a0-9ede-ceba-95bc6751880b/data/hbase/meta/1588230740/info 2023-07-17 11:15:37,674 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-17 11:15:37,675 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-17 11:15:37,675 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-17 11:15:37,676 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:35473/user/jenkins/test-data/1c20e0e4-e5a0-9ede-ceba-95bc6751880b/data/hbase/meta/1588230740/rep_barrier 2023-07-17 11:15:37,676 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-17 11:15:37,677 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-17 11:15:37,677 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-17 11:15:37,678 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:35473/user/jenkins/test-data/1c20e0e4-e5a0-9ede-ceba-95bc6751880b/data/hbase/meta/1588230740/table 2023-07-17 
11:15:37,678 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-17 11:15:37,679 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-17 11:15:37,679 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:35473/user/jenkins/test-data/1c20e0e4-e5a0-9ede-ceba-95bc6751880b/data/hbase/meta/1588230740 2023-07-17 11:15:37,680 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:35473/user/jenkins/test-data/1c20e0e4-e5a0-9ede-ceba-95bc6751880b/data/hbase/meta/1588230740 2023-07-17 11:15:37,682 DEBUG [PEWorker-1] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 2023-07-17 11:15:37,683 DEBUG [PEWorker-1] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-17 11:15:37,684 DEBUG [PEWorker-1] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:35473/user/jenkins/test-data/1c20e0e4-e5a0-9ede-ceba-95bc6751880b/data/hbase/meta/1588230740/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-17 11:15:37,685 INFO [PEWorker-1] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10709109600, jitterRate=-0.002636447548866272}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-17 11:15:37,685 DEBUG [PEWorker-1] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-17 11:15:37,685 DEBUG [PEWorker-1] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-17 11:15:37,685 INFO [PEWorker-1] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-17 11:15:37,685 DEBUG [PEWorker-1] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-17 11:15:37,685 DEBUG [PEWorker-1] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-17 11:15:37,685 DEBUG [PEWorker-1] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-17 11:15:37,685 INFO [PEWorker-1] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-17 11:15:37,685 DEBUG [PEWorker-1] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-17 11:15:37,687 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_ASSIGN_META, locked=true; InitMetaProcedure table=hbase:meta 2023-07-17 11:15:37,687 INFO [PEWorker-1] 
procedure.InitMetaProcedure(103): Going to assign meta 2023-07-17 11:15:37,688 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN}] 2023-07-17 11:15:37,688 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN 2023-07-17 11:15:37,690 INFO [RS:0;jenkins-hbase4:40865] regionserver.HRegionServer(951): ClusterId : 324e1712-be1d-4769-b000-883702e9bf9e 2023-07-17 11:15:37,690 DEBUG [RS:0;jenkins-hbase4:40865] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-17 11:15:37,690 INFO [RS:1;jenkins-hbase4:32841] regionserver.HRegionServer(951): ClusterId : 324e1712-be1d-4769-b000-883702e9bf9e 2023-07-17 11:15:37,690 DEBUG [RS:1;jenkins-hbase4:32841] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-17 11:15:37,690 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN; state=OFFLINE, location=null; forceNewPlan=false, retain=false 2023-07-17 11:15:37,691 INFO [RS:2;jenkins-hbase4:32847] regionserver.HRegionServer(951): ClusterId : 324e1712-be1d-4769-b000-883702e9bf9e 2023-07-17 11:15:37,691 DEBUG [RS:2;jenkins-hbase4:32847] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-17 11:15:37,693 DEBUG [RS:0;jenkins-hbase4:40865] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-17 11:15:37,693 DEBUG [RS:0;jenkins-hbase4:40865] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-17 11:15:37,693 DEBUG [RS:1;jenkins-hbase4:32841] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-17 11:15:37,693 DEBUG [RS:1;jenkins-hbase4:32841] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-17 11:15:37,694 DEBUG [RS:2;jenkins-hbase4:32847] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-17 11:15:37,694 DEBUG [RS:2;jenkins-hbase4:32847] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-17 11:15:37,695 DEBUG [RS:0;jenkins-hbase4:40865] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-17 11:15:37,697 DEBUG [RS:1;jenkins-hbase4:32841] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-17 11:15:37,697 DEBUG [RS:0;jenkins-hbase4:40865] zookeeper.ReadOnlyZKClient(139): Connect 0x5b87e223 to 127.0.0.1:57231 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-17 11:15:37,698 DEBUG [RS:2;jenkins-hbase4:32847] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-17 11:15:37,701 DEBUG [RS:1;jenkins-hbase4:32841] zookeeper.ReadOnlyZKClient(139): Connect 0x66f06976 to 127.0.0.1:57231 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-17 11:15:37,701 DEBUG [RS:2;jenkins-hbase4:32847] zookeeper.ReadOnlyZKClient(139): 
Connect 0x6facd029 to 127.0.0.1:57231 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-17 11:15:37,710 DEBUG [RS:0;jenkins-hbase4:40865] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@55400723, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-17 11:15:37,711 DEBUG [RS:0;jenkins-hbase4:40865] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@517f8953, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-17 11:15:37,711 DEBUG [RS:1;jenkins-hbase4:32841] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@1b47c7cc, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-17 11:15:37,711 DEBUG [RS:1;jenkins-hbase4:32841] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@575c08e2, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-17 11:15:37,711 DEBUG [RS:2;jenkins-hbase4:32847] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@2aae53b8, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-17 11:15:37,712 DEBUG [RS:2;jenkins-hbase4:32847] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@6a60abc7, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-17 11:15:37,720 DEBUG [RS:2;jenkins-hbase4:32847] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:2;jenkins-hbase4:32847 2023-07-17 11:15:37,720 DEBUG [RS:0;jenkins-hbase4:40865] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:0;jenkins-hbase4:40865 2023-07-17 11:15:37,720 INFO [RS:2;jenkins-hbase4:32847] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-17 11:15:37,720 INFO [RS:2;jenkins-hbase4:32847] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-17 11:15:37,720 INFO [RS:0;jenkins-hbase4:40865] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-17 11:15:37,720 INFO [RS:0;jenkins-hbase4:40865] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-17 11:15:37,720 DEBUG [RS:2;jenkins-hbase4:32847] regionserver.HRegionServer(1022): About to register with Master. 2023-07-17 11:15:37,720 DEBUG [RS:0;jenkins-hbase4:40865] regionserver.HRegionServer(1022): About to register with Master. 
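Note: with the three region servers about to register with the master, a client can reach this mini cluster through the same ZooKeeper ensemble the log shows (127.0.0.1:57231). A hedged sketch of the standard HBase client bootstrap; the quorum host and client port are mirrored from the log and would normally come from hbase-site.xml rather than being hard-coded.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class MiniClusterClientSketch {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        // Values mirrored from the log; in a real deployment they come from config files.
        conf.set("hbase.zookeeper.quorum", "127.0.0.1");
        conf.set("hbase.zookeeper.property.clientPort", "57231");

        try (Connection connection = ConnectionFactory.createConnection(conf);
             Admin admin = connection.getAdmin()) {
            // listTableNames() round-trips through the active master registered above.
            for (TableName name : admin.listTableNames()) {
                System.out.println("table: " + name.getNameAsString());
            }
        }
    }
}
```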
2023-07-17 11:15:37,720 DEBUG [RS:1;jenkins-hbase4:32841] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:1;jenkins-hbase4:32841 2023-07-17 11:15:37,721 INFO [RS:1;jenkins-hbase4:32841] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-17 11:15:37,721 INFO [RS:1;jenkins-hbase4:32841] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-17 11:15:37,721 DEBUG [RS:1;jenkins-hbase4:32841] regionserver.HRegionServer(1022): About to register with Master. 2023-07-17 11:15:37,721 INFO [RS:0;jenkins-hbase4:40865] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,35293,1689592537329 with isa=jenkins-hbase4.apache.org/172.31.14.131:40865, startcode=1689592537384 2023-07-17 11:15:37,721 INFO [RS:2;jenkins-hbase4:32847] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,35293,1689592537329 with isa=jenkins-hbase4.apache.org/172.31.14.131:32847, startcode=1689592537460 2023-07-17 11:15:37,721 DEBUG [RS:0;jenkins-hbase4:40865] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-17 11:15:37,721 DEBUG [RS:2;jenkins-hbase4:32847] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-17 11:15:37,721 INFO [RS:1;jenkins-hbase4:32841] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,35293,1689592537329 with isa=jenkins-hbase4.apache.org/172.31.14.131:32841, startcode=1689592537425 2023-07-17 11:15:37,721 DEBUG [RS:1;jenkins-hbase4:32841] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-17 11:15:37,723 INFO [RS-EventLoopGroup-12-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:43317, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.9 (auth:SIMPLE), service=RegionServerStatusService 2023-07-17 11:15:37,723 INFO [RS-EventLoopGroup-12-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:56617, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.7 (auth:SIMPLE), service=RegionServerStatusService 2023-07-17 11:15:37,723 INFO [RS-EventLoopGroup-12-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:55695, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.8 (auth:SIMPLE), service=RegionServerStatusService 2023-07-17 11:15:37,725 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=35293] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,32847,1689592537460 2023-07-17 11:15:37,725 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,35293,1689592537329] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
2023-07-17 11:15:37,725 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,35293,1689592537329] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 1 2023-07-17 11:15:37,725 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=35293] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,40865,1689592537384 2023-07-17 11:15:37,725 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,35293,1689592537329] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-17 11:15:37,725 DEBUG [RS:2;jenkins-hbase4:32847] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:35473/user/jenkins/test-data/1c20e0e4-e5a0-9ede-ceba-95bc6751880b 2023-07-17 11:15:37,725 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,35293,1689592537329] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 2 2023-07-17 11:15:37,725 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=35293] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,32841,1689592537425 2023-07-17 11:15:37,725 DEBUG [RS:2;jenkins-hbase4:32847] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:35473 2023-07-17 11:15:37,726 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,35293,1689592537329] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-17 11:15:37,726 DEBUG [RS:2;jenkins-hbase4:32847] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=33089 2023-07-17 11:15:37,726 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,35293,1689592537329] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 3 2023-07-17 11:15:37,726 DEBUG [RS:0;jenkins-hbase4:40865] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:35473/user/jenkins/test-data/1c20e0e4-e5a0-9ede-ceba-95bc6751880b 2023-07-17 11:15:37,726 DEBUG [RS:1;jenkins-hbase4:32841] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:35473/user/jenkins/test-data/1c20e0e4-e5a0-9ede-ceba-95bc6751880b 2023-07-17 11:15:37,726 DEBUG [RS:0;jenkins-hbase4:40865] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:35473 2023-07-17 11:15:37,726 DEBUG [RS:1;jenkins-hbase4:32841] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:35473 2023-07-17 11:15:37,726 DEBUG [RS:0;jenkins-hbase4:40865] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=33089 2023-07-17 11:15:37,726 DEBUG [RS:1;jenkins-hbase4:32841] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=33089 2023-07-17 11:15:37,727 DEBUG [Listener at localhost/33721-EventThread] zookeeper.ZKWatcher(600): master:35293-0x10172fea3e40000, quorum=127.0.0.1:57231, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-17 11:15:37,733 DEBUG [RS:2;jenkins-hbase4:32847] zookeeper.ZKUtil(162): regionserver:32847-0x10172fea3e40003, quorum=127.0.0.1:57231, baseZNode=/hbase Set watcher on 
existing znode=/hbase/rs/jenkins-hbase4.apache.org,32847,1689592537460 2023-07-17 11:15:37,733 WARN [RS:2;jenkins-hbase4:32847] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-17 11:15:37,733 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,32841,1689592537425] 2023-07-17 11:15:37,733 INFO [RS:2;jenkins-hbase4:32847] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-17 11:15:37,733 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,32847,1689592537460] 2023-07-17 11:15:37,733 DEBUG [RS:1;jenkins-hbase4:32841] zookeeper.ZKUtil(162): regionserver:32841-0x10172fea3e40002, quorum=127.0.0.1:57231, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,32841,1689592537425 2023-07-17 11:15:37,733 DEBUG [RS:2;jenkins-hbase4:32847] regionserver.HRegionServer(1948): logDir=hdfs://localhost:35473/user/jenkins/test-data/1c20e0e4-e5a0-9ede-ceba-95bc6751880b/WALs/jenkins-hbase4.apache.org,32847,1689592537460 2023-07-17 11:15:37,733 DEBUG [RS:0;jenkins-hbase4:40865] zookeeper.ZKUtil(162): regionserver:40865-0x10172fea3e40001, quorum=127.0.0.1:57231, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,40865,1689592537384 2023-07-17 11:15:37,733 WARN [RS:1;jenkins-hbase4:32841] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-17 11:15:37,733 WARN [RS:0;jenkins-hbase4:40865] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
2023-07-17 11:15:37,733 INFO [RS:1;jenkins-hbase4:32841] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-17 11:15:37,734 INFO [RS:0;jenkins-hbase4:40865] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-17 11:15:37,734 DEBUG [RS:1;jenkins-hbase4:32841] regionserver.HRegionServer(1948): logDir=hdfs://localhost:35473/user/jenkins/test-data/1c20e0e4-e5a0-9ede-ceba-95bc6751880b/WALs/jenkins-hbase4.apache.org,32841,1689592537425 2023-07-17 11:15:37,733 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,40865,1689592537384] 2023-07-17 11:15:37,734 DEBUG [RS:0;jenkins-hbase4:40865] regionserver.HRegionServer(1948): logDir=hdfs://localhost:35473/user/jenkins/test-data/1c20e0e4-e5a0-9ede-ceba-95bc6751880b/WALs/jenkins-hbase4.apache.org,40865,1689592537384 2023-07-17 11:15:37,745 DEBUG [RS:1;jenkins-hbase4:32841] zookeeper.ZKUtil(162): regionserver:32841-0x10172fea3e40002, quorum=127.0.0.1:57231, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,32847,1689592537460 2023-07-17 11:15:37,745 DEBUG [RS:2;jenkins-hbase4:32847] zookeeper.ZKUtil(162): regionserver:32847-0x10172fea3e40003, quorum=127.0.0.1:57231, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,32847,1689592537460 2023-07-17 11:15:37,745 DEBUG [RS:1;jenkins-hbase4:32841] zookeeper.ZKUtil(162): regionserver:32841-0x10172fea3e40002, quorum=127.0.0.1:57231, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,32841,1689592537425 2023-07-17 11:15:37,745 DEBUG [RS:2;jenkins-hbase4:32847] zookeeper.ZKUtil(162): regionserver:32847-0x10172fea3e40003, quorum=127.0.0.1:57231, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,32841,1689592537425 2023-07-17 11:15:37,745 DEBUG [RS:0;jenkins-hbase4:40865] zookeeper.ZKUtil(162): regionserver:40865-0x10172fea3e40001, quorum=127.0.0.1:57231, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,32847,1689592537460 2023-07-17 11:15:37,745 DEBUG [RS:1;jenkins-hbase4:32841] zookeeper.ZKUtil(162): regionserver:32841-0x10172fea3e40002, quorum=127.0.0.1:57231, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,40865,1689592537384 2023-07-17 11:15:37,745 DEBUG [RS:2;jenkins-hbase4:32847] zookeeper.ZKUtil(162): regionserver:32847-0x10172fea3e40003, quorum=127.0.0.1:57231, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,40865,1689592537384 2023-07-17 11:15:37,746 DEBUG [RS:0;jenkins-hbase4:40865] zookeeper.ZKUtil(162): regionserver:40865-0x10172fea3e40001, quorum=127.0.0.1:57231, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,32841,1689592537425 2023-07-17 11:15:37,746 DEBUG [RS:0;jenkins-hbase4:40865] zookeeper.ZKUtil(162): regionserver:40865-0x10172fea3e40001, quorum=127.0.0.1:57231, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,40865,1689592537384 2023-07-17 11:15:37,746 DEBUG [RS:2;jenkins-hbase4:32847] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-17 11:15:37,746 DEBUG [RS:1;jenkins-hbase4:32841] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-17 11:15:37,746 INFO [RS:1;jenkins-hbase4:32841] 
regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-17 11:15:37,746 INFO [RS:2;jenkins-hbase4:32847] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-17 11:15:37,747 DEBUG [RS:0;jenkins-hbase4:40865] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-17 11:15:37,748 INFO [RS:0;jenkins-hbase4:40865] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-17 11:15:37,748 INFO [RS:1;jenkins-hbase4:32841] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-17 11:15:37,750 INFO [RS:1;jenkins-hbase4:32841] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-17 11:15:37,750 INFO [RS:0;jenkins-hbase4:40865] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-17 11:15:37,750 INFO [RS:1;jenkins-hbase4:32841] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-17 11:15:37,755 INFO [RS:0;jenkins-hbase4:40865] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-17 11:15:37,755 INFO [RS:2;jenkins-hbase4:32847] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-17 11:15:37,755 INFO [RS:0;jenkins-hbase4:40865] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-17 11:15:37,755 INFO [RS:1;jenkins-hbase4:32841] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-17 11:15:37,756 INFO [RS:2;jenkins-hbase4:32847] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-17 11:15:37,756 INFO [RS:0;jenkins-hbase4:40865] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-17 11:15:37,756 INFO [RS:2;jenkins-hbase4:32847] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-17 11:15:37,757 INFO [RS:2;jenkins-hbase4:32847] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-17 11:15:37,758 INFO [RS:1;jenkins-hbase4:32841] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-17 11:15:37,758 INFO [RS:0;jenkins-hbase4:40865] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
2023-07-17 11:15:37,759 DEBUG [RS:1;jenkins-hbase4:32841] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-17 11:15:37,759 DEBUG [RS:0;jenkins-hbase4:40865] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-17 11:15:37,759 DEBUG [RS:1;jenkins-hbase4:32841] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-17 11:15:37,759 DEBUG [RS:0;jenkins-hbase4:40865] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-17 11:15:37,759 DEBUG [RS:1;jenkins-hbase4:32841] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-17 11:15:37,759 INFO [RS:2;jenkins-hbase4:32847] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-17 11:15:37,759 DEBUG [RS:1;jenkins-hbase4:32841] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-17 11:15:37,759 DEBUG [RS:0;jenkins-hbase4:40865] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-17 11:15:37,759 DEBUG [RS:2;jenkins-hbase4:32847] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-17 11:15:37,759 DEBUG [RS:1;jenkins-hbase4:32841] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-17 11:15:37,759 DEBUG [RS:2;jenkins-hbase4:32847] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-17 11:15:37,759 DEBUG [RS:1;jenkins-hbase4:32841] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-17 11:15:37,759 DEBUG [RS:2;jenkins-hbase4:32847] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-17 11:15:37,759 DEBUG [RS:0;jenkins-hbase4:40865] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-17 11:15:37,759 DEBUG [RS:2;jenkins-hbase4:32847] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-17 11:15:37,759 DEBUG [RS:0;jenkins-hbase4:40865] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-17 11:15:37,759 DEBUG [RS:2;jenkins-hbase4:32847] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-17 11:15:37,760 DEBUG [RS:0;jenkins-hbase4:40865] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-17 11:15:37,760 DEBUG 
[RS:2;jenkins-hbase4:32847] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-17 11:15:37,759 DEBUG [RS:1;jenkins-hbase4:32841] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-17 11:15:37,760 DEBUG [RS:2;jenkins-hbase4:32847] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-17 11:15:37,760 DEBUG [RS:1;jenkins-hbase4:32841] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-17 11:15:37,760 DEBUG [RS:2;jenkins-hbase4:32847] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-17 11:15:37,760 DEBUG [RS:1;jenkins-hbase4:32841] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-17 11:15:37,760 DEBUG [RS:2;jenkins-hbase4:32847] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-17 11:15:37,760 DEBUG [RS:1;jenkins-hbase4:32841] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-17 11:15:37,760 DEBUG [RS:2;jenkins-hbase4:32847] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-17 11:15:37,760 DEBUG [RS:0;jenkins-hbase4:40865] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-17 11:15:37,760 DEBUG [RS:0;jenkins-hbase4:40865] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-17 11:15:37,760 DEBUG [RS:0;jenkins-hbase4:40865] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-17 11:15:37,760 DEBUG [RS:0;jenkins-hbase4:40865] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-17 11:15:37,764 INFO [RS:1;jenkins-hbase4:32841] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-17 11:15:37,764 INFO [RS:1;jenkins-hbase4:32841] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-17 11:15:37,764 INFO [RS:1;jenkins-hbase4:32841] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-17 11:15:37,764 INFO [RS:2;jenkins-hbase4:32847] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-17 11:15:37,765 INFO [RS:2;jenkins-hbase4:32847] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 
2023-07-17 11:15:37,765 INFO [RS:0;jenkins-hbase4:40865] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-17 11:15:37,766 INFO [RS:2;jenkins-hbase4:32847] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-17 11:15:37,766 INFO [RS:0;jenkins-hbase4:40865] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-17 11:15:37,766 INFO [RS:0;jenkins-hbase4:40865] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-17 11:15:37,777 INFO [RS:2;jenkins-hbase4:32847] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-17 11:15:37,777 INFO [RS:2;jenkins-hbase4:32847] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,32847,1689592537460-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-17 11:15:37,777 INFO [RS:0;jenkins-hbase4:40865] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-17 11:15:37,777 INFO [RS:0;jenkins-hbase4:40865] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,40865,1689592537384-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-17 11:15:37,779 INFO [RS:1;jenkins-hbase4:32841] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-17 11:15:37,779 INFO [RS:1;jenkins-hbase4:32841] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,32841,1689592537425-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-17 11:15:37,788 INFO [RS:2;jenkins-hbase4:32847] regionserver.Replication(203): jenkins-hbase4.apache.org,32847,1689592537460 started 2023-07-17 11:15:37,788 INFO [RS:2;jenkins-hbase4:32847] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,32847,1689592537460, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:32847, sessionid=0x10172fea3e40003 2023-07-17 11:15:37,788 DEBUG [RS:2;jenkins-hbase4:32847] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-17 11:15:37,788 DEBUG [RS:2;jenkins-hbase4:32847] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,32847,1689592537460 2023-07-17 11:15:37,788 DEBUG [RS:2;jenkins-hbase4:32847] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,32847,1689592537460' 2023-07-17 11:15:37,788 DEBUG [RS:2;jenkins-hbase4:32847] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-17 11:15:37,788 INFO [RS:0;jenkins-hbase4:40865] regionserver.Replication(203): jenkins-hbase4.apache.org,40865,1689592537384 started 2023-07-17 11:15:37,788 INFO [RS:0;jenkins-hbase4:40865] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,40865,1689592537384, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:40865, sessionid=0x10172fea3e40001 2023-07-17 11:15:37,789 DEBUG [RS:0;jenkins-hbase4:40865] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-17 11:15:37,789 DEBUG [RS:2;jenkins-hbase4:32847] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-17 11:15:37,789 DEBUG [RS:0;jenkins-hbase4:40865] flush.RegionServerFlushTableProcedureManager(106): Start 
region server flush procedure manager jenkins-hbase4.apache.org,40865,1689592537384 2023-07-17 11:15:37,789 DEBUG [RS:0;jenkins-hbase4:40865] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,40865,1689592537384' 2023-07-17 11:15:37,789 DEBUG [RS:0;jenkins-hbase4:40865] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-17 11:15:37,789 DEBUG [RS:2;jenkins-hbase4:32847] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-17 11:15:37,789 DEBUG [RS:0;jenkins-hbase4:40865] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-17 11:15:37,789 DEBUG [RS:2;jenkins-hbase4:32847] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-17 11:15:37,790 DEBUG [RS:2;jenkins-hbase4:32847] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,32847,1689592537460 2023-07-17 11:15:37,790 DEBUG [RS:2;jenkins-hbase4:32847] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,32847,1689592537460' 2023-07-17 11:15:37,790 DEBUG [RS:2;jenkins-hbase4:32847] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-17 11:15:37,790 DEBUG [RS:0;jenkins-hbase4:40865] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-17 11:15:37,790 DEBUG [RS:0;jenkins-hbase4:40865] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-17 11:15:37,790 DEBUG [RS:0;jenkins-hbase4:40865] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,40865,1689592537384 2023-07-17 11:15:37,790 DEBUG [RS:2;jenkins-hbase4:32847] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-17 11:15:37,790 DEBUG [RS:0;jenkins-hbase4:40865] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,40865,1689592537384' 2023-07-17 11:15:37,790 DEBUG [RS:0;jenkins-hbase4:40865] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-17 11:15:37,790 DEBUG [RS:2;jenkins-hbase4:32847] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-17 11:15:37,790 DEBUG [RS:0;jenkins-hbase4:40865] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-17 11:15:37,790 INFO [RS:2;jenkins-hbase4:32847] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-17 11:15:37,790 INFO [RS:2;jenkins-hbase4:32847] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 2023-07-17 11:15:37,791 DEBUG [RS:0;jenkins-hbase4:40865] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-17 11:15:37,791 INFO [RS:0;jenkins-hbase4:40865] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-17 11:15:37,791 INFO [RS:0;jenkins-hbase4:40865] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
2023-07-17 11:15:37,795 INFO [RS:1;jenkins-hbase4:32841] regionserver.Replication(203): jenkins-hbase4.apache.org,32841,1689592537425 started 2023-07-17 11:15:37,795 INFO [RS:1;jenkins-hbase4:32841] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,32841,1689592537425, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:32841, sessionid=0x10172fea3e40002 2023-07-17 11:15:37,795 DEBUG [RS:1;jenkins-hbase4:32841] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-17 11:15:37,795 DEBUG [RS:1;jenkins-hbase4:32841] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,32841,1689592537425 2023-07-17 11:15:37,795 DEBUG [RS:1;jenkins-hbase4:32841] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,32841,1689592537425' 2023-07-17 11:15:37,795 DEBUG [RS:1;jenkins-hbase4:32841] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-17 11:15:37,796 DEBUG [RS:1;jenkins-hbase4:32841] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-17 11:15:37,796 DEBUG [RS:1;jenkins-hbase4:32841] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-17 11:15:37,796 DEBUG [RS:1;jenkins-hbase4:32841] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-17 11:15:37,796 DEBUG [RS:1;jenkins-hbase4:32841] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,32841,1689592537425 2023-07-17 11:15:37,796 DEBUG [RS:1;jenkins-hbase4:32841] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,32841,1689592537425' 2023-07-17 11:15:37,796 DEBUG [RS:1;jenkins-hbase4:32841] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-17 11:15:37,797 DEBUG [RS:1;jenkins-hbase4:32841] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-17 11:15:37,797 DEBUG [RS:1;jenkins-hbase4:32841] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-17 11:15:37,797 INFO [RS:1;jenkins-hbase4:32841] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-17 11:15:37,797 INFO [RS:1;jenkins-hbase4:32841] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
2023-07-17 11:15:37,840 DEBUG [jenkins-hbase4:35293] assignment.AssignmentManager(2176): Processing assignQueue; systemServersCount=3, allServersCount=3 2023-07-17 11:15:37,841 DEBUG [jenkins-hbase4:35293] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-17 11:15:37,841 DEBUG [jenkins-hbase4:35293] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-17 11:15:37,841 DEBUG [jenkins-hbase4:35293] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-17 11:15:37,841 DEBUG [jenkins-hbase4:35293] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-17 11:15:37,841 DEBUG [jenkins-hbase4:35293] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-17 11:15:37,842 INFO [PEWorker-3] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,32841,1689592537425, state=OPENING 2023-07-17 11:15:37,844 DEBUG [PEWorker-3] zookeeper.MetaTableLocator(240): hbase:meta region location doesn't exist, create it 2023-07-17 11:15:37,845 DEBUG [Listener at localhost/33721-EventThread] zookeeper.ZKWatcher(600): master:35293-0x10172fea3e40000, quorum=127.0.0.1:57231, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-17 11:15:37,845 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-17 11:15:37,845 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=3, ppid=2, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,32841,1689592537425}] 2023-07-17 11:15:37,892 INFO [RS:2;jenkins-hbase4:32847] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C32847%2C1689592537460, suffix=, logDir=hdfs://localhost:35473/user/jenkins/test-data/1c20e0e4-e5a0-9ede-ceba-95bc6751880b/WALs/jenkins-hbase4.apache.org,32847,1689592537460, archiveDir=hdfs://localhost:35473/user/jenkins/test-data/1c20e0e4-e5a0-9ede-ceba-95bc6751880b/oldWALs, maxLogs=32 2023-07-17 11:15:37,892 INFO [RS:0;jenkins-hbase4:40865] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C40865%2C1689592537384, suffix=, logDir=hdfs://localhost:35473/user/jenkins/test-data/1c20e0e4-e5a0-9ede-ceba-95bc6751880b/WALs/jenkins-hbase4.apache.org,40865,1689592537384, archiveDir=hdfs://localhost:35473/user/jenkins/test-data/1c20e0e4-e5a0-9ede-ceba-95bc6751880b/oldWALs, maxLogs=32 2023-07-17 11:15:37,899 INFO [RS:1;jenkins-hbase4:32841] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C32841%2C1689592537425, suffix=, logDir=hdfs://localhost:35473/user/jenkins/test-data/1c20e0e4-e5a0-9ede-ceba-95bc6751880b/WALs/jenkins-hbase4.apache.org,32841,1689592537425, archiveDir=hdfs://localhost:35473/user/jenkins/test-data/1c20e0e4-e5a0-9ede-ceba-95bc6751880b/oldWALs, maxLogs=32 2023-07-17 11:15:37,912 DEBUG [RS-EventLoopGroup-15-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:34879,DS-35b57e0d-606a-455a-9013-50414e4940ce,DISK] 2023-07-17 11:15:37,913 DEBUG [RS-EventLoopGroup-15-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL 
client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:39583,DS-059260ba-dc4a-47f2-a714-cdfcaeec5081,DISK] 2023-07-17 11:15:37,913 DEBUG [RS-EventLoopGroup-15-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:42893,DS-2bbda707-8935-4d1f-b0ec-3fa0f32b8627,DISK] 2023-07-17 11:15:37,920 DEBUG [RS-EventLoopGroup-15-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:34879,DS-35b57e0d-606a-455a-9013-50414e4940ce,DISK] 2023-07-17 11:15:37,920 DEBUG [RS-EventLoopGroup-15-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:42893,DS-2bbda707-8935-4d1f-b0ec-3fa0f32b8627,DISK] 2023-07-17 11:15:37,921 DEBUG [RS-EventLoopGroup-15-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:39583,DS-059260ba-dc4a-47f2-a714-cdfcaeec5081,DISK] 2023-07-17 11:15:37,925 DEBUG [RS-EventLoopGroup-15-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:34879,DS-35b57e0d-606a-455a-9013-50414e4940ce,DISK] 2023-07-17 11:15:37,925 DEBUG [RS-EventLoopGroup-15-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:39583,DS-059260ba-dc4a-47f2-a714-cdfcaeec5081,DISK] 2023-07-17 11:15:37,925 DEBUG [RS-EventLoopGroup-15-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:42893,DS-2bbda707-8935-4d1f-b0ec-3fa0f32b8627,DISK] 2023-07-17 11:15:37,930 INFO [RS:0;jenkins-hbase4:40865] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/1c20e0e4-e5a0-9ede-ceba-95bc6751880b/WALs/jenkins-hbase4.apache.org,40865,1689592537384/jenkins-hbase4.apache.org%2C40865%2C1689592537384.1689592537895 2023-07-17 11:15:37,930 INFO [RS:2;jenkins-hbase4:32847] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/1c20e0e4-e5a0-9ede-ceba-95bc6751880b/WALs/jenkins-hbase4.apache.org,32847,1689592537460/jenkins-hbase4.apache.org%2C32847%2C1689592537460.1689592537895 2023-07-17 11:15:37,930 DEBUG [RS:0;jenkins-hbase4:40865] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:34879,DS-35b57e0d-606a-455a-9013-50414e4940ce,DISK], DatanodeInfoWithStorage[127.0.0.1:39583,DS-059260ba-dc4a-47f2-a714-cdfcaeec5081,DISK], DatanodeInfoWithStorage[127.0.0.1:42893,DS-2bbda707-8935-4d1f-b0ec-3fa0f32b8627,DISK]] 2023-07-17 11:15:37,931 DEBUG [RS:2;jenkins-hbase4:32847] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:42893,DS-2bbda707-8935-4d1f-b0ec-3fa0f32b8627,DISK], DatanodeInfoWithStorage[127.0.0.1:39583,DS-059260ba-dc4a-47f2-a714-cdfcaeec5081,DISK], 
DatanodeInfoWithStorage[127.0.0.1:34879,DS-35b57e0d-606a-455a-9013-50414e4940ce,DISK]] 2023-07-17 11:15:37,931 INFO [RS:1;jenkins-hbase4:32841] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/1c20e0e4-e5a0-9ede-ceba-95bc6751880b/WALs/jenkins-hbase4.apache.org,32841,1689592537425/jenkins-hbase4.apache.org%2C32841%2C1689592537425.1689592537899 2023-07-17 11:15:37,932 DEBUG [RS:1;jenkins-hbase4:32841] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:34879,DS-35b57e0d-606a-455a-9013-50414e4940ce,DISK], DatanodeInfoWithStorage[127.0.0.1:42893,DS-2bbda707-8935-4d1f-b0ec-3fa0f32b8627,DISK], DatanodeInfoWithStorage[127.0.0.1:39583,DS-059260ba-dc4a-47f2-a714-cdfcaeec5081,DISK]] 2023-07-17 11:15:37,940 WARN [ReadOnlyZKClient-127.0.0.1:57231@0x658fd9e7] client.ZKConnectionRegistry(168): Meta region is in state OPENING 2023-07-17 11:15:37,940 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,35293,1689592537329] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-17 11:15:37,942 INFO [RS-EventLoopGroup-14-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:33974, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-17 11:15:37,942 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=32841] ipc.CallRunner(144): callId: 0 service: ClientService methodName: Get size: 88 connection: 172.31.14.131:33974 deadline: 1689592597942, exception=org.apache.hadoop.hbase.NotServingRegionException: hbase:meta,,1 is not online on jenkins-hbase4.apache.org,32841,1689592537425 2023-07-17 11:15:38,000 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,32841,1689592537425 2023-07-17 11:15:38,002 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-17 11:15:38,003 INFO [RS-EventLoopGroup-14-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:33988, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-17 11:15:38,007 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740 2023-07-17 11:15:38,007 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-17 11:15:38,009 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C32841%2C1689592537425.meta, suffix=.meta, logDir=hdfs://localhost:35473/user/jenkins/test-data/1c20e0e4-e5a0-9ede-ceba-95bc6751880b/WALs/jenkins-hbase4.apache.org,32841,1689592537425, archiveDir=hdfs://localhost:35473/user/jenkins/test-data/1c20e0e4-e5a0-9ede-ceba-95bc6751880b/oldWALs, maxLogs=32 2023-07-17 11:15:38,022 DEBUG [RS-EventLoopGroup-15-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:34879,DS-35b57e0d-606a-455a-9013-50414e4940ce,DISK] 2023-07-17 11:15:38,022 DEBUG [RS-EventLoopGroup-15-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = 
DatanodeInfoWithStorage[127.0.0.1:39583,DS-059260ba-dc4a-47f2-a714-cdfcaeec5081,DISK] 2023-07-17 11:15:38,022 DEBUG [RS-EventLoopGroup-15-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:42893,DS-2bbda707-8935-4d1f-b0ec-3fa0f32b8627,DISK] 2023-07-17 11:15:38,025 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/1c20e0e4-e5a0-9ede-ceba-95bc6751880b/WALs/jenkins-hbase4.apache.org,32841,1689592537425/jenkins-hbase4.apache.org%2C32841%2C1689592537425.meta.1689592538009.meta 2023-07-17 11:15:38,025 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:34879,DS-35b57e0d-606a-455a-9013-50414e4940ce,DISK], DatanodeInfoWithStorage[127.0.0.1:42893,DS-2bbda707-8935-4d1f-b0ec-3fa0f32b8627,DISK], DatanodeInfoWithStorage[127.0.0.1:39583,DS-059260ba-dc4a-47f2-a714-cdfcaeec5081,DISK]] 2023-07-17 11:15:38,025 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-07-17 11:15:38,025 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-17 11:15:38,026 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2023-07-17 11:15:38,026 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 
2023-07-17 11:15:38,026 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-07-17 11:15:38,026 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-17 11:15:38,026 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 1588230740 2023-07-17 11:15:38,026 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 1588230740 2023-07-17 11:15:38,035 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-17 11:15:38,036 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:35473/user/jenkins/test-data/1c20e0e4-e5a0-9ede-ceba-95bc6751880b/data/hbase/meta/1588230740/info 2023-07-17 11:15:38,036 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:35473/user/jenkins/test-data/1c20e0e4-e5a0-9ede-ceba-95bc6751880b/data/hbase/meta/1588230740/info 2023-07-17 11:15:38,036 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-17 11:15:38,037 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-17 11:15:38,037 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-17 11:15:38,038 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:35473/user/jenkins/test-data/1c20e0e4-e5a0-9ede-ceba-95bc6751880b/data/hbase/meta/1588230740/rep_barrier 2023-07-17 11:15:38,038 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:35473/user/jenkins/test-data/1c20e0e4-e5a0-9ede-ceba-95bc6751880b/data/hbase/meta/1588230740/rep_barrier 2023-07-17 11:15:38,038 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; 
off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-17 11:15:38,039 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-17 11:15:38,039 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-17 11:15:38,040 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:35473/user/jenkins/test-data/1c20e0e4-e5a0-9ede-ceba-95bc6751880b/data/hbase/meta/1588230740/table 2023-07-17 11:15:38,040 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:35473/user/jenkins/test-data/1c20e0e4-e5a0-9ede-ceba-95bc6751880b/data/hbase/meta/1588230740/table 2023-07-17 11:15:38,040 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-17 11:15:38,040 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-17 11:15:38,041 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:35473/user/jenkins/test-data/1c20e0e4-e5a0-9ede-ceba-95bc6751880b/data/hbase/meta/1588230740 2023-07-17 11:15:38,042 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:35473/user/jenkins/test-data/1c20e0e4-e5a0-9ede-ceba-95bc6751880b/data/hbase/meta/1588230740 2023-07-17 11:15:38,045 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 
2023-07-17 11:15:38,047 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-17 11:15:38,048 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9547197440, jitterRate=-0.11084794998168945}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-17 11:15:38,048 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-17 11:15:38,049 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:meta,,1.1588230740, pid=3, masterSystemTime=1689592538000 2023-07-17 11:15:38,056 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:meta,,1.1588230740 2023-07-17 11:15:38,057 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 2023-07-17 11:15:38,057 INFO [PEWorker-5] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,32841,1689592537425, state=OPEN 2023-07-17 11:15:38,058 DEBUG [Listener at localhost/33721-EventThread] zookeeper.ZKWatcher(600): master:35293-0x10172fea3e40000, quorum=127.0.0.1:57231, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-17 11:15:38,058 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-17 11:15:38,064 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=3, resume processing ppid=2 2023-07-17 11:15:38,064 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=3, ppid=2, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,32841,1689592537425 in 213 msec 2023-07-17 11:15:38,066 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=2, resume processing ppid=1 2023-07-17 11:15:38,066 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=2, ppid=1, state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN in 376 msec 2023-07-17 11:15:38,068 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=1, state=SUCCESS; InitMetaProcedure table=hbase:meta in 439 msec 2023-07-17 11:15:38,068 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(953): Wait for region servers to report in: status=null, state=RUNNING, startTime=1689592538068, completionTime=-1 2023-07-17 11:15:38,068 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ServerManager(821): Finished waiting on RegionServer count=3; waited=0ms, expected min=3 server(s), max=3 server(s), master is running 2023-07-17 11:15:38,068 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1517): Joining cluster... 
2023-07-17 11:15:38,073 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1529): Number of RegionServers=3 2023-07-17 11:15:38,073 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$RegionInTransitionChore; timeout=60000, timestamp=1689592598073 2023-07-17 11:15:38,073 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$DeadServerMetricRegionChore; timeout=120000, timestamp=1689592658073 2023-07-17 11:15:38,073 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1536): Joined the cluster in 5 msec 2023-07-17 11:15:38,083 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,35293,1689592537329-ClusterStatusChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-17 11:15:38,083 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,35293,1689592537329-BalancerChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-17 11:15:38,083 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,35293,1689592537329-RegionNormalizerChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-17 11:15:38,083 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=CatalogJanitor-jenkins-hbase4:35293, period=300000, unit=MILLISECONDS is enabled. 2023-07-17 11:15:38,083 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HbckChore-, period=3600000, unit=MILLISECONDS is enabled. 2023-07-17 11:15:38,083 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.TableNamespaceManager(92): Namespace table not found. Creating... 
2023-07-17 11:15:38,084 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2148): Client=null/null create 'hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-07-17 11:15:38,084 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:namespace 2023-07-17 11:15:38,086 DEBUG [master/jenkins-hbase4:0.Chore.1] janitor.CatalogJanitor(175): 2023-07-17 11:15:38,086 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_PRE_OPERATION 2023-07-17 11:15:38,087 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-17 11:15:38,088 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:35473/user/jenkins/test-data/1c20e0e4-e5a0-9ede-ceba-95bc6751880b/.tmp/data/hbase/namespace/0d62c7aed3c64e14669f4471870b3501 2023-07-17 11:15:38,088 DEBUG [HFileArchiver-4] backup.HFileArchiver(153): Directory hdfs://localhost:35473/user/jenkins/test-data/1c20e0e4-e5a0-9ede-ceba-95bc6751880b/.tmp/data/hbase/namespace/0d62c7aed3c64e14669f4471870b3501 empty. 2023-07-17 11:15:38,089 DEBUG [HFileArchiver-4] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:35473/user/jenkins/test-data/1c20e0e4-e5a0-9ede-ceba-95bc6751880b/.tmp/data/hbase/namespace/0d62c7aed3c64e14669f4471870b3501 2023-07-17 11:15:38,089 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived hbase:namespace regions 2023-07-17 11:15:38,103 DEBUG [PEWorker-3] util.FSTableDescriptors(570): Wrote into hdfs://localhost:35473/user/jenkins/test-data/1c20e0e4-e5a0-9ede-ceba-95bc6751880b/.tmp/data/hbase/namespace/.tabledesc/.tableinfo.0000000001 2023-07-17 11:15:38,104 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(7675): creating {ENCODED => 0d62c7aed3c64e14669f4471870b3501, NAME => 'hbase:namespace,,1689592538083.0d62c7aed3c64e14669f4471870b3501.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:35473/user/jenkins/test-data/1c20e0e4-e5a0-9ede-ceba-95bc6751880b/.tmp 2023-07-17 11:15:38,118 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1689592538083.0d62c7aed3c64e14669f4471870b3501.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-17 11:15:38,118 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1604): Closing 0d62c7aed3c64e14669f4471870b3501, disabling compactions & flushes 2023-07-17 11:15:38,118 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1689592538083.0d62c7aed3c64e14669f4471870b3501. 
2023-07-17 11:15:38,118 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1689592538083.0d62c7aed3c64e14669f4471870b3501. 2023-07-17 11:15:38,118 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1689592538083.0d62c7aed3c64e14669f4471870b3501. after waiting 0 ms 2023-07-17 11:15:38,118 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1689592538083.0d62c7aed3c64e14669f4471870b3501. 2023-07-17 11:15:38,118 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1838): Closed hbase:namespace,,1689592538083.0d62c7aed3c64e14669f4471870b3501. 2023-07-17 11:15:38,118 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1558): Region close journal for 0d62c7aed3c64e14669f4471870b3501: 2023-07-17 11:15:38,121 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ADD_TO_META 2023-07-17 11:15:38,122 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:namespace,,1689592538083.0d62c7aed3c64e14669f4471870b3501.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689592538122"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689592538122"}]},"ts":"1689592538122"} 2023-07-17 11:15:38,125 INFO [PEWorker-3] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-17 11:15:38,125 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-17 11:15:38,126 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689592538125"}]},"ts":"1689592538125"} 2023-07-17 11:15:38,127 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLING in hbase:meta 2023-07-17 11:15:38,130 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-17 11:15:38,130 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-17 11:15:38,130 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-17 11:15:38,130 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-17 11:15:38,130 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-17 11:15:38,131 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=0d62c7aed3c64e14669f4471870b3501, ASSIGN}] 2023-07-17 11:15:38,132 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=0d62c7aed3c64e14669f4471870b3501, ASSIGN 2023-07-17 11:15:38,133 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, 
locked=true; TransitRegionStateProcedure table=hbase:namespace, region=0d62c7aed3c64e14669f4471870b3501, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,40865,1689592537384; forceNewPlan=false, retain=false 2023-07-17 11:15:38,246 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,35293,1689592537329] master.HMaster(2148): Client=null/null create 'hbase:rsgroup', {TABLE_ATTRIBUTES => {coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|', METADATA => {'SPLIT_POLICY' => 'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy'}}}, {NAME => 'm', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-17 11:15:38,248 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,35293,1689592537329] procedure2.ProcedureExecutor(1029): Stored pid=6, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:rsgroup 2023-07-17 11:15:38,249 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_PRE_OPERATION 2023-07-17 11:15:38,250 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-17 11:15:38,252 DEBUG [HFileArchiver-8] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:35473/user/jenkins/test-data/1c20e0e4-e5a0-9ede-ceba-95bc6751880b/.tmp/data/hbase/rsgroup/2b4777bce162636fdf4ee754f19471f7 2023-07-17 11:15:38,252 DEBUG [HFileArchiver-8] backup.HFileArchiver(153): Directory hdfs://localhost:35473/user/jenkins/test-data/1c20e0e4-e5a0-9ede-ceba-95bc6751880b/.tmp/data/hbase/rsgroup/2b4777bce162636fdf4ee754f19471f7 empty. 
2023-07-17 11:15:38,253 DEBUG [HFileArchiver-8] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:35473/user/jenkins/test-data/1c20e0e4-e5a0-9ede-ceba-95bc6751880b/.tmp/data/hbase/rsgroup/2b4777bce162636fdf4ee754f19471f7 2023-07-17 11:15:38,253 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(328): Archived hbase:rsgroup regions 2023-07-17 11:15:38,268 DEBUG [PEWorker-5] util.FSTableDescriptors(570): Wrote into hdfs://localhost:35473/user/jenkins/test-data/1c20e0e4-e5a0-9ede-ceba-95bc6751880b/.tmp/data/hbase/rsgroup/.tabledesc/.tableinfo.0000000001 2023-07-17 11:15:38,269 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(7675): creating {ENCODED => 2b4777bce162636fdf4ee754f19471f7, NAME => 'hbase:rsgroup,,1689592538246.2b4777bce162636fdf4ee754f19471f7.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:rsgroup', {TABLE_ATTRIBUTES => {coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|', METADATA => {'SPLIT_POLICY' => 'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy'}}}, {NAME => 'm', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:35473/user/jenkins/test-data/1c20e0e4-e5a0-9ede-ceba-95bc6751880b/.tmp 2023-07-17 11:15:38,284 INFO [jenkins-hbase4:35293] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 2023-07-17 11:15:38,285 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=0d62c7aed3c64e14669f4471870b3501, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,40865,1689592537384 2023-07-17 11:15:38,285 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:namespace,,1689592538083.0d62c7aed3c64e14669f4471870b3501.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689592538285"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689592538285"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689592538285"}]},"ts":"1689592538285"} 2023-07-17 11:15:38,286 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1689592538246.2b4777bce162636fdf4ee754f19471f7.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-17 11:15:38,287 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1604): Closing 2b4777bce162636fdf4ee754f19471f7, disabling compactions & flushes 2023-07-17 11:15:38,287 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1689592538246.2b4777bce162636fdf4ee754f19471f7. 2023-07-17 11:15:38,287 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1689592538246.2b4777bce162636fdf4ee754f19471f7. 2023-07-17 11:15:38,287 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1689592538246.2b4777bce162636fdf4ee754f19471f7. after waiting 0 ms 2023-07-17 11:15:38,287 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1689592538246.2b4777bce162636fdf4ee754f19471f7. 
2023-07-17 11:15:38,287 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1689592538246.2b4777bce162636fdf4ee754f19471f7. 2023-07-17 11:15:38,287 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1558): Region close journal for 2b4777bce162636fdf4ee754f19471f7: 2023-07-17 11:15:38,288 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=7, ppid=5, state=RUNNABLE; OpenRegionProcedure 0d62c7aed3c64e14669f4471870b3501, server=jenkins-hbase4.apache.org,40865,1689592537384}] 2023-07-17 11:15:38,289 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_ADD_TO_META 2023-07-17 11:15:38,290 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:rsgroup,,1689592538246.2b4777bce162636fdf4ee754f19471f7.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689592538290"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689592538290"}]},"ts":"1689592538290"} 2023-07-17 11:15:38,291 INFO [PEWorker-5] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-17 11:15:38,292 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-17 11:15:38,292 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:rsgroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689592538292"}]},"ts":"1689592538292"} 2023-07-17 11:15:38,293 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=hbase:rsgroup, state=ENABLING in hbase:meta 2023-07-17 11:15:38,300 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-17 11:15:38,301 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-17 11:15:38,301 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-17 11:15:38,301 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-17 11:15:38,301 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-17 11:15:38,301 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=8, ppid=6, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:rsgroup, region=2b4777bce162636fdf4ee754f19471f7, ASSIGN}] 2023-07-17 11:15:38,303 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=8, ppid=6, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:rsgroup, region=2b4777bce162636fdf4ee754f19471f7, ASSIGN 2023-07-17 11:15:38,307 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=8, ppid=6, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:rsgroup, region=2b4777bce162636fdf4ee754f19471f7, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,32847,1689592537460; forceNewPlan=false, retain=false 2023-07-17 11:15:38,440 DEBUG [RSProcedureDispatcher-pool-1] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,40865,1689592537384 2023-07-17 
11:15:38,441 DEBUG [RSProcedureDispatcher-pool-1] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-17 11:15:38,442 INFO [RS-EventLoopGroup-13-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:51338, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-17 11:15:38,446 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1689592538083.0d62c7aed3c64e14669f4471870b3501. 2023-07-17 11:15:38,446 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 0d62c7aed3c64e14669f4471870b3501, NAME => 'hbase:namespace,,1689592538083.0d62c7aed3c64e14669f4471870b3501.', STARTKEY => '', ENDKEY => ''} 2023-07-17 11:15:38,446 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace 0d62c7aed3c64e14669f4471870b3501 2023-07-17 11:15:38,446 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1689592538083.0d62c7aed3c64e14669f4471870b3501.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-17 11:15:38,446 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 0d62c7aed3c64e14669f4471870b3501 2023-07-17 11:15:38,446 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 0d62c7aed3c64e14669f4471870b3501 2023-07-17 11:15:38,447 INFO [StoreOpener-0d62c7aed3c64e14669f4471870b3501-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 0d62c7aed3c64e14669f4471870b3501 2023-07-17 11:15:38,449 DEBUG [StoreOpener-0d62c7aed3c64e14669f4471870b3501-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:35473/user/jenkins/test-data/1c20e0e4-e5a0-9ede-ceba-95bc6751880b/data/hbase/namespace/0d62c7aed3c64e14669f4471870b3501/info 2023-07-17 11:15:38,449 DEBUG [StoreOpener-0d62c7aed3c64e14669f4471870b3501-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:35473/user/jenkins/test-data/1c20e0e4-e5a0-9ede-ceba-95bc6751880b/data/hbase/namespace/0d62c7aed3c64e14669f4471870b3501/info 2023-07-17 11:15:38,449 INFO [StoreOpener-0d62c7aed3c64e14669f4471870b3501-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 0d62c7aed3c64e14669f4471870b3501 columnFamilyName info 2023-07-17 11:15:38,450 INFO [StoreOpener-0d62c7aed3c64e14669f4471870b3501-1] regionserver.HStore(310): 
Store=0d62c7aed3c64e14669f4471870b3501/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-17 11:15:38,450 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:35473/user/jenkins/test-data/1c20e0e4-e5a0-9ede-ceba-95bc6751880b/data/hbase/namespace/0d62c7aed3c64e14669f4471870b3501 2023-07-17 11:15:38,451 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:35473/user/jenkins/test-data/1c20e0e4-e5a0-9ede-ceba-95bc6751880b/data/hbase/namespace/0d62c7aed3c64e14669f4471870b3501 2023-07-17 11:15:38,453 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 0d62c7aed3c64e14669f4471870b3501 2023-07-17 11:15:38,455 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:35473/user/jenkins/test-data/1c20e0e4-e5a0-9ede-ceba-95bc6751880b/data/hbase/namespace/0d62c7aed3c64e14669f4471870b3501/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-17 11:15:38,455 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 0d62c7aed3c64e14669f4471870b3501; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11470323200, jitterRate=0.06825709342956543}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-17 11:15:38,455 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 0d62c7aed3c64e14669f4471870b3501: 2023-07-17 11:15:38,456 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:namespace,,1689592538083.0d62c7aed3c64e14669f4471870b3501., pid=7, masterSystemTime=1689592538440 2023-07-17 11:15:38,458 INFO [jenkins-hbase4:35293] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 2023-07-17 11:15:38,459 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=8 updating hbase:meta row=2b4777bce162636fdf4ee754f19471f7, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,32847,1689592537460 2023-07-17 11:15:38,459 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:rsgroup,,1689592538246.2b4777bce162636fdf4ee754f19471f7.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689592538459"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689592538459"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689592538459"}]},"ts":"1689592538459"} 2023-07-17 11:15:38,461 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:namespace,,1689592538083.0d62c7aed3c64e14669f4471870b3501. 2023-07-17 11:15:38,461 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1689592538083.0d62c7aed3c64e14669f4471870b3501. 
2023-07-17 11:15:38,462 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=0d62c7aed3c64e14669f4471870b3501, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,40865,1689592537384 2023-07-17 11:15:38,462 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1689592538083.0d62c7aed3c64e14669f4471870b3501.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689592538462"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689592538462"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689592538462"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689592538462"}]},"ts":"1689592538462"} 2023-07-17 11:15:38,462 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=9, ppid=8, state=RUNNABLE; OpenRegionProcedure 2b4777bce162636fdf4ee754f19471f7, server=jenkins-hbase4.apache.org,32847,1689592537460}] 2023-07-17 11:15:38,465 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=7, resume processing ppid=5 2023-07-17 11:15:38,465 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=7, ppid=5, state=SUCCESS; OpenRegionProcedure 0d62c7aed3c64e14669f4471870b3501, server=jenkins-hbase4.apache.org,40865,1689592537384 in 175 msec 2023-07-17 11:15:38,467 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=5, resume processing ppid=4 2023-07-17 11:15:38,467 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=5, ppid=4, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=0d62c7aed3c64e14669f4471870b3501, ASSIGN in 334 msec 2023-07-17 11:15:38,468 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-17 11:15:38,468 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689592538468"}]},"ts":"1689592538468"} 2023-07-17 11:15:38,469 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLED in hbase:meta 2023-07-17 11:15:38,471 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_POST_OPERATION 2023-07-17 11:15:38,472 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=4, state=SUCCESS; CreateTableProcedure table=hbase:namespace in 387 msec 2023-07-17 11:15:38,486 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:35293-0x10172fea3e40000, quorum=127.0.0.1:57231, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/namespace 2023-07-17 11:15:38,487 DEBUG [Listener at localhost/33721-EventThread] zookeeper.ZKWatcher(600): master:35293-0x10172fea3e40000, quorum=127.0.0.1:57231, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/namespace 2023-07-17 11:15:38,487 DEBUG [Listener at localhost/33721-EventThread] zookeeper.ZKWatcher(600): master:35293-0x10172fea3e40000, quorum=127.0.0.1:57231, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-17 11:15:38,489 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] 
ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-17 11:15:38,491 INFO [RS-EventLoopGroup-13-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:51340, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-17 11:15:38,493 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=10, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=default 2023-07-17 11:15:38,508 DEBUG [Listener at localhost/33721-EventThread] zookeeper.ZKWatcher(600): master:35293-0x10172fea3e40000, quorum=127.0.0.1:57231, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-17 11:15:38,510 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=10, state=SUCCESS; CreateNamespaceProcedure, namespace=default in 16 msec 2023-07-17 11:15:38,516 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=11, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=hbase 2023-07-17 11:15:38,520 DEBUG [PEWorker-2] procedure.MasterProcedureScheduler(526): NAMESPACE 'hbase', shared lock count=1 2023-07-17 11:15:38,520 DEBUG [PEWorker-2] procedure2.ProcedureExecutor(1400): LOCK_EVENT_WAIT pid=11, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=hbase 2023-07-17 11:15:38,615 DEBUG [RSProcedureDispatcher-pool-2] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,32847,1689592537460 2023-07-17 11:15:38,616 DEBUG [RSProcedureDispatcher-pool-2] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-17 11:15:38,617 INFO [RS-EventLoopGroup-15-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:37868, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-17 11:15:38,621 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:rsgroup,,1689592538246.2b4777bce162636fdf4ee754f19471f7. 2023-07-17 11:15:38,621 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 2b4777bce162636fdf4ee754f19471f7, NAME => 'hbase:rsgroup,,1689592538246.2b4777bce162636fdf4ee754f19471f7.', STARTKEY => '', ENDKEY => ''} 2023-07-17 11:15:38,621 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-17 11:15:38,622 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:rsgroup,,1689592538246.2b4777bce162636fdf4ee754f19471f7. service=MultiRowMutationService 2023-07-17 11:15:38,622 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:rsgroup successfully. 
2023-07-17 11:15:38,622 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table rsgroup 2b4777bce162636fdf4ee754f19471f7 2023-07-17 11:15:38,622 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1689592538246.2b4777bce162636fdf4ee754f19471f7.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-17 11:15:38,622 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 2b4777bce162636fdf4ee754f19471f7 2023-07-17 11:15:38,622 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 2b4777bce162636fdf4ee754f19471f7 2023-07-17 11:15:38,623 INFO [StoreOpener-2b4777bce162636fdf4ee754f19471f7-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family m of region 2b4777bce162636fdf4ee754f19471f7 2023-07-17 11:15:38,624 DEBUG [StoreOpener-2b4777bce162636fdf4ee754f19471f7-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:35473/user/jenkins/test-data/1c20e0e4-e5a0-9ede-ceba-95bc6751880b/data/hbase/rsgroup/2b4777bce162636fdf4ee754f19471f7/m 2023-07-17 11:15:38,624 DEBUG [StoreOpener-2b4777bce162636fdf4ee754f19471f7-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:35473/user/jenkins/test-data/1c20e0e4-e5a0-9ede-ceba-95bc6751880b/data/hbase/rsgroup/2b4777bce162636fdf4ee754f19471f7/m 2023-07-17 11:15:38,625 INFO [StoreOpener-2b4777bce162636fdf4ee754f19471f7-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 2b4777bce162636fdf4ee754f19471f7 columnFamilyName m 2023-07-17 11:15:38,625 INFO [StoreOpener-2b4777bce162636fdf4ee754f19471f7-1] regionserver.HStore(310): Store=2b4777bce162636fdf4ee754f19471f7/m, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-17 11:15:38,626 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:35473/user/jenkins/test-data/1c20e0e4-e5a0-9ede-ceba-95bc6751880b/data/hbase/rsgroup/2b4777bce162636fdf4ee754f19471f7 2023-07-17 11:15:38,626 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:35473/user/jenkins/test-data/1c20e0e4-e5a0-9ede-ceba-95bc6751880b/data/hbase/rsgroup/2b4777bce162636fdf4ee754f19471f7 2023-07-17 11:15:38,629 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] 
regionserver.HRegion(1055): writing seq id for 2b4777bce162636fdf4ee754f19471f7 2023-07-17 11:15:38,631 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:35473/user/jenkins/test-data/1c20e0e4-e5a0-9ede-ceba-95bc6751880b/data/hbase/rsgroup/2b4777bce162636fdf4ee754f19471f7/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-17 11:15:38,631 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 2b4777bce162636fdf4ee754f19471f7; next sequenceid=2; org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy@49c56f94, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-17 11:15:38,631 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 2b4777bce162636fdf4ee754f19471f7: 2023-07-17 11:15:38,632 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:rsgroup,,1689592538246.2b4777bce162636fdf4ee754f19471f7., pid=9, masterSystemTime=1689592538615 2023-07-17 11:15:38,635 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:rsgroup,,1689592538246.2b4777bce162636fdf4ee754f19471f7. 2023-07-17 11:15:38,636 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:rsgroup,,1689592538246.2b4777bce162636fdf4ee754f19471f7. 2023-07-17 11:15:38,636 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=8 updating hbase:meta row=2b4777bce162636fdf4ee754f19471f7, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,32847,1689592537460 2023-07-17 11:15:38,636 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:rsgroup,,1689592538246.2b4777bce162636fdf4ee754f19471f7.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689592538636"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689592538636"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689592538636"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689592538636"}]},"ts":"1689592538636"} 2023-07-17 11:15:38,639 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=9, resume processing ppid=8 2023-07-17 11:15:38,639 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=9, ppid=8, state=SUCCESS; OpenRegionProcedure 2b4777bce162636fdf4ee754f19471f7, server=jenkins-hbase4.apache.org,32847,1689592537460 in 175 msec 2023-07-17 11:15:38,641 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=8, resume processing ppid=6 2023-07-17 11:15:38,641 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=8, ppid=6, state=SUCCESS; TransitRegionStateProcedure table=hbase:rsgroup, region=2b4777bce162636fdf4ee754f19471f7, ASSIGN in 338 msec 2023-07-17 11:15:38,647 DEBUG [Listener at localhost/33721-EventThread] zookeeper.ZKWatcher(600): master:35293-0x10172fea3e40000, quorum=127.0.0.1:57231, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-17 11:15:38,650 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=11, state=SUCCESS; CreateNamespaceProcedure, namespace=hbase in 133 msec 2023-07-17 11:15:38,651 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, 
locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-17 11:15:38,651 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:rsgroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689592538651"}]},"ts":"1689592538651"} 2023-07-17 11:15:38,652 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=hbase:rsgroup, state=ENABLED in hbase:meta 2023-07-17 11:15:38,656 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_POST_OPERATION 2023-07-17 11:15:38,657 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=6, state=SUCCESS; CreateTableProcedure table=hbase:rsgroup in 410 msec 2023-07-17 11:15:38,661 DEBUG [Listener at localhost/33721-EventThread] zookeeper.ZKWatcher(600): master:35293-0x10172fea3e40000, quorum=127.0.0.1:57231, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/default 2023-07-17 11:15:38,664 DEBUG [Listener at localhost/33721-EventThread] zookeeper.ZKWatcher(600): master:35293-0x10172fea3e40000, quorum=127.0.0.1:57231, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/hbase 2023-07-17 11:15:38,664 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1083): Master has completed initialization 1.161sec 2023-07-17 11:15:38,664 INFO [master/jenkins-hbase4:0:becomeActiveMaster] quotas.MasterQuotaManager(97): Quota support disabled 2023-07-17 11:15:38,664 INFO [master/jenkins-hbase4:0:becomeActiveMaster] slowlog.SlowLogMasterService(57): Slow/Large requests logging to system table hbase:slowlog is disabled. Quitting. 2023-07-17 11:15:38,664 INFO [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKWatcher(269): not a secure deployment, proceeding 2023-07-17 11:15:38,664 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,35293,1689592537329-ExpiredMobFileCleanerChore, period=86400, unit=SECONDS is enabled. 2023-07-17 11:15:38,664 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,35293,1689592537329-MobCompactionChore, period=604800, unit=SECONDS is enabled. 
2023-07-17 11:15:38,666 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1175): Balancer post startup initialization complete, took 0 seconds 2023-07-17 11:15:38,689 DEBUG [Listener at localhost/33721] zookeeper.ReadOnlyZKClient(139): Connect 0x10a31e76 to 127.0.0.1:57231 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-17 11:15:38,695 DEBUG [Listener at localhost/33721] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@7d87758a, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-17 11:15:38,697 DEBUG [hconnection-0x796cfeff-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-17 11:15:38,699 INFO [RS-EventLoopGroup-14-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:33994, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-17 11:15:38,701 INFO [Listener at localhost/33721] hbase.HBaseTestingUtility(1145): Minicluster is up; activeMaster=jenkins-hbase4.apache.org,35293,1689592537329 2023-07-17 11:15:38,701 INFO [Listener at localhost/33721] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-17 11:15:38,751 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,35293,1689592537329] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-17 11:15:38,752 INFO [RS-EventLoopGroup-15-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:37884, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-17 11:15:38,755 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,35293,1689592537329] rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(840): RSGroup table=hbase:rsgroup is online, refreshing cached information 2023-07-17 11:15:38,755 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,35293,1689592537329] rsgroup.RSGroupInfoManagerImpl(534): Refreshing in Online mode. 
2023-07-17 11:15:38,761 DEBUG [Listener at localhost/33721-EventThread] zookeeper.ZKWatcher(600): master:35293-0x10172fea3e40000, quorum=127.0.0.1:57231, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-17 11:15:38,761 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,35293,1689592537329] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-17 11:15:38,763 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,35293,1689592537329] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-17 11:15:38,766 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,35293,1689592537329] rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(823): GroupBasedLoadBalancer is now online 2023-07-17 11:15:38,804 DEBUG [Listener at localhost/33721] ipc.RpcConnection(124): Using SIMPLE authentication for service=MasterService, sasl=false 2023-07-17 11:15:38,806 INFO [RS-EventLoopGroup-12-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:37560, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=MasterService 2023-07-17 11:15:38,809 DEBUG [Listener at localhost/33721-EventThread] zookeeper.ZKWatcher(600): master:35293-0x10172fea3e40000, quorum=127.0.0.1:57231, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/balancer 2023-07-17 11:15:38,809 DEBUG [Listener at localhost/33721-EventThread] zookeeper.ZKWatcher(600): master:35293-0x10172fea3e40000, quorum=127.0.0.1:57231, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-17 11:15:38,809 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35293] master.MasterRpcServices(492): Client=jenkins//172.31.14.131 set balanceSwitch=false 2023-07-17 11:15:38,810 DEBUG [Listener at localhost/33721] zookeeper.ReadOnlyZKClient(139): Connect 0x00cf5170 to 127.0.0.1:57231 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-17 11:15:38,816 DEBUG [Listener at localhost/33721] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@1ecadc90, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-17 11:15:38,816 INFO [Listener at localhost/33721] zookeeper.RecoverableZooKeeper(93): Process identifier=VerifyingRSGroupAdminClient connecting to ZooKeeper ensemble=127.0.0.1:57231 2023-07-17 11:15:38,819 DEBUG [Listener at localhost/33721-EventThread] zookeeper.ZKWatcher(600): VerifyingRSGroupAdminClient0x0, quorum=127.0.0.1:57231, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-17 11:15:38,820 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): VerifyingRSGroupAdminClient-0x10172fea3e4000a connected 2023-07-17 11:15:38,823 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35293] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-17 11:15:38,824 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35293] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 
2023-07-17 11:15:38,827 INFO [Listener at localhost/33721] rsgroup.TestRSGroupsBase(152): Restoring servers: 1 2023-07-17 11:15:38,839 INFO [Listener at localhost/33721] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-17 11:15:38,839 INFO [Listener at localhost/33721] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-17 11:15:38,839 INFO [Listener at localhost/33721] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-17 11:15:38,840 INFO [Listener at localhost/33721] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-17 11:15:38,840 INFO [Listener at localhost/33721] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-17 11:15:38,840 INFO [Listener at localhost/33721] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-17 11:15:38,840 INFO [Listener at localhost/33721] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-17 11:15:38,840 INFO [Listener at localhost/33721] ipc.NettyRpcServer(120): Bind to /172.31.14.131:32937 2023-07-17 11:15:38,841 INFO [Listener at localhost/33721] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-17 11:15:38,842 DEBUG [Listener at localhost/33721] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-17 11:15:38,843 INFO [Listener at localhost/33721] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-17 11:15:38,844 INFO [Listener at localhost/33721] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-17 11:15:38,845 INFO [Listener at localhost/33721] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:32937 connecting to ZooKeeper ensemble=127.0.0.1:57231 2023-07-17 11:15:38,848 DEBUG [Listener at localhost/33721-EventThread] zookeeper.ZKWatcher(600): regionserver:329370x0, quorum=127.0.0.1:57231, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-17 11:15:38,849 DEBUG [Listener at localhost/33721] zookeeper.ZKUtil(162): regionserver:329370x0, quorum=127.0.0.1:57231, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-17 11:15:38,850 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:32937-0x10172fea3e4000b connected 2023-07-17 11:15:38,851 DEBUG [Listener at localhost/33721] zookeeper.ZKUtil(162): regionserver:32937-0x10172fea3e4000b, quorum=127.0.0.1:57231, baseZNode=/hbase Set watcher on existing znode=/hbase/running 2023-07-17 11:15:38,851 DEBUG [Listener at localhost/33721] zookeeper.ZKUtil(164): regionserver:32937-0x10172fea3e4000b, quorum=127.0.0.1:57231, baseZNode=/hbase Set 
watcher on znode that does not yet exist, /hbase/acl 2023-07-17 11:15:38,853 DEBUG [Listener at localhost/33721] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=32937 2023-07-17 11:15:38,853 DEBUG [Listener at localhost/33721] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=32937 2023-07-17 11:15:38,853 DEBUG [Listener at localhost/33721] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=32937 2023-07-17 11:15:38,853 DEBUG [Listener at localhost/33721] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=32937 2023-07-17 11:15:38,853 DEBUG [Listener at localhost/33721] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=32937 2023-07-17 11:15:38,855 INFO [Listener at localhost/33721] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-17 11:15:38,855 INFO [Listener at localhost/33721] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-17 11:15:38,855 INFO [Listener at localhost/33721] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-17 11:15:38,856 INFO [Listener at localhost/33721] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-17 11:15:38,856 INFO [Listener at localhost/33721] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-17 11:15:38,856 INFO [Listener at localhost/33721] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-17 11:15:38,856 INFO [Listener at localhost/33721] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
2023-07-17 11:15:38,856 INFO [Listener at localhost/33721] http.HttpServer(1146): Jetty bound to port 43209 2023-07-17 11:15:38,856 INFO [Listener at localhost/33721] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-17 11:15:38,860 INFO [Listener at localhost/33721] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-17 11:15:38,861 INFO [Listener at localhost/33721] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@13a4782d{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/e4f4a903-c5b5-3e76-c76e-62a71f0612cc/hadoop.log.dir/,AVAILABLE} 2023-07-17 11:15:38,861 INFO [Listener at localhost/33721] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-17 11:15:38,861 INFO [Listener at localhost/33721] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@587bdcb4{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,AVAILABLE} 2023-07-17 11:15:38,866 INFO [Listener at localhost/33721] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-17 11:15:38,867 INFO [Listener at localhost/33721] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-17 11:15:38,867 INFO [Listener at localhost/33721] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-17 11:15:38,867 INFO [Listener at localhost/33721] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-17 11:15:38,868 INFO [Listener at localhost/33721] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-17 11:15:38,868 INFO [Listener at localhost/33721] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@3b37ce6b{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver/,AVAILABLE}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver} 2023-07-17 11:15:38,869 INFO [Listener at localhost/33721] server.AbstractConnector(333): Started ServerConnector@1ae039d1{HTTP/1.1, (http/1.1)}{0.0.0.0:43209} 2023-07-17 11:15:38,870 INFO [Listener at localhost/33721] server.Server(415): Started @41268ms 2023-07-17 11:15:38,872 INFO [RS:3;jenkins-hbase4:32937] regionserver.HRegionServer(951): ClusterId : 324e1712-be1d-4769-b000-883702e9bf9e 2023-07-17 11:15:38,872 DEBUG [RS:3;jenkins-hbase4:32937] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-17 11:15:38,873 DEBUG [RS:3;jenkins-hbase4:32937] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-17 11:15:38,874 DEBUG [RS:3;jenkins-hbase4:32937] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-17 11:15:38,875 DEBUG [RS:3;jenkins-hbase4:32937] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-17 11:15:38,876 DEBUG [RS:3;jenkins-hbase4:32937] zookeeper.ReadOnlyZKClient(139): Connect 0x48bd9ead to 127.0.0.1:57231 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-17 11:15:38,882 DEBUG [RS:3;jenkins-hbase4:32937] 
ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@467a79ca, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-17 11:15:38,882 DEBUG [RS:3;jenkins-hbase4:32937] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@4245f5dd, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-17 11:15:38,890 DEBUG [RS:3;jenkins-hbase4:32937] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:3;jenkins-hbase4:32937 2023-07-17 11:15:38,890 INFO [RS:3;jenkins-hbase4:32937] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-17 11:15:38,890 INFO [RS:3;jenkins-hbase4:32937] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-17 11:15:38,890 DEBUG [RS:3;jenkins-hbase4:32937] regionserver.HRegionServer(1022): About to register with Master. 2023-07-17 11:15:38,891 INFO [RS:3;jenkins-hbase4:32937] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,35293,1689592537329 with isa=jenkins-hbase4.apache.org/172.31.14.131:32937, startcode=1689592538839 2023-07-17 11:15:38,891 DEBUG [RS:3;jenkins-hbase4:32937] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-17 11:15:38,895 INFO [RS-EventLoopGroup-12-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:33243, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.10 (auth:SIMPLE), service=RegionServerStatusService 2023-07-17 11:15:38,895 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=35293] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,32937,1689592538839 2023-07-17 11:15:38,895 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,35293,1689592537329] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
2023-07-17 11:15:38,896 DEBUG [RS:3;jenkins-hbase4:32937] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:35473/user/jenkins/test-data/1c20e0e4-e5a0-9ede-ceba-95bc6751880b 2023-07-17 11:15:38,896 DEBUG [RS:3;jenkins-hbase4:32937] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:35473 2023-07-17 11:15:38,896 DEBUG [RS:3;jenkins-hbase4:32937] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=33089 2023-07-17 11:15:38,900 DEBUG [Listener at localhost/33721-EventThread] zookeeper.ZKWatcher(600): master:35293-0x10172fea3e40000, quorum=127.0.0.1:57231, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-17 11:15:38,900 DEBUG [Listener at localhost/33721-EventThread] zookeeper.ZKWatcher(600): regionserver:32841-0x10172fea3e40002, quorum=127.0.0.1:57231, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-17 11:15:38,900 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,35293,1689592537329] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-17 11:15:38,900 DEBUG [Listener at localhost/33721-EventThread] zookeeper.ZKWatcher(600): regionserver:40865-0x10172fea3e40001, quorum=127.0.0.1:57231, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-17 11:15:38,900 DEBUG [Listener at localhost/33721-EventThread] zookeeper.ZKWatcher(600): regionserver:32847-0x10172fea3e40003, quorum=127.0.0.1:57231, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-17 11:15:38,901 DEBUG [RS:3;jenkins-hbase4:32937] zookeeper.ZKUtil(162): regionserver:32937-0x10172fea3e4000b, quorum=127.0.0.1:57231, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,32937,1689592538839 2023-07-17 11:15:38,901 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,32937,1689592538839] 2023-07-17 11:15:38,901 WARN [RS:3;jenkins-hbase4:32937] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
2023-07-17 11:15:38,901 INFO [RS:3;jenkins-hbase4:32937] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-17 11:15:38,901 DEBUG [RS:3;jenkins-hbase4:32937] regionserver.HRegionServer(1948): logDir=hdfs://localhost:35473/user/jenkins/test-data/1c20e0e4-e5a0-9ede-ceba-95bc6751880b/WALs/jenkins-hbase4.apache.org,32937,1689592538839 2023-07-17 11:15:38,901 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,35293,1689592537329] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-17 11:15:38,901 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:32841-0x10172fea3e40002, quorum=127.0.0.1:57231, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,32937,1689592538839 2023-07-17 11:15:38,901 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:40865-0x10172fea3e40001, quorum=127.0.0.1:57231, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,32937,1689592538839 2023-07-17 11:15:38,901 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:32847-0x10172fea3e40003, quorum=127.0.0.1:57231, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,32937,1689592538839 2023-07-17 11:15:38,902 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:32841-0x10172fea3e40002, quorum=127.0.0.1:57231, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,32847,1689592537460 2023-07-17 11:15:38,905 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:32847-0x10172fea3e40003, quorum=127.0.0.1:57231, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,32847,1689592537460 2023-07-17 11:15:38,905 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:40865-0x10172fea3e40001, quorum=127.0.0.1:57231, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,32847,1689592537460 2023-07-17 11:15:38,905 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,35293,1689592537329] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 4 2023-07-17 11:15:38,907 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:32841-0x10172fea3e40002, quorum=127.0.0.1:57231, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,32841,1689592537425 2023-07-17 11:15:38,908 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:40865-0x10172fea3e40001, quorum=127.0.0.1:57231, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,32841,1689592537425 2023-07-17 11:15:38,908 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:32847-0x10172fea3e40003, quorum=127.0.0.1:57231, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,32841,1689592537425 2023-07-17 11:15:38,908 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:32847-0x10172fea3e40003, quorum=127.0.0.1:57231, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,40865,1689592537384 2023-07-17 11:15:38,909 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:32841-0x10172fea3e40002, quorum=127.0.0.1:57231, baseZNode=/hbase Set watcher on existing 
znode=/hbase/rs/jenkins-hbase4.apache.org,40865,1689592537384 2023-07-17 11:15:38,909 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:40865-0x10172fea3e40001, quorum=127.0.0.1:57231, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,40865,1689592537384 2023-07-17 11:15:38,910 DEBUG [RS:3;jenkins-hbase4:32937] zookeeper.ZKUtil(162): regionserver:32937-0x10172fea3e4000b, quorum=127.0.0.1:57231, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,32937,1689592538839 2023-07-17 11:15:38,910 DEBUG [RS:3;jenkins-hbase4:32937] zookeeper.ZKUtil(162): regionserver:32937-0x10172fea3e4000b, quorum=127.0.0.1:57231, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,32847,1689592537460 2023-07-17 11:15:38,911 DEBUG [RS:3;jenkins-hbase4:32937] zookeeper.ZKUtil(162): regionserver:32937-0x10172fea3e4000b, quorum=127.0.0.1:57231, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,32841,1689592537425 2023-07-17 11:15:38,911 DEBUG [RS:3;jenkins-hbase4:32937] zookeeper.ZKUtil(162): regionserver:32937-0x10172fea3e4000b, quorum=127.0.0.1:57231, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,40865,1689592537384 2023-07-17 11:15:38,912 DEBUG [RS:3;jenkins-hbase4:32937] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-17 11:15:38,912 INFO [RS:3;jenkins-hbase4:32937] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-17 11:15:38,913 INFO [RS:3;jenkins-hbase4:32937] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-17 11:15:38,914 INFO [RS:3;jenkins-hbase4:32937] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-17 11:15:38,914 INFO [RS:3;jenkins-hbase4:32937] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-17 11:15:38,914 INFO [RS:3;jenkins-hbase4:32937] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-17 11:15:38,915 INFO [RS:3;jenkins-hbase4:32937] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
2023-07-17 11:15:38,916 DEBUG [RS:3;jenkins-hbase4:32937] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-17 11:15:38,916 DEBUG [RS:3;jenkins-hbase4:32937] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-17 11:15:38,916 DEBUG [RS:3;jenkins-hbase4:32937] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-17 11:15:38,916 DEBUG [RS:3;jenkins-hbase4:32937] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-17 11:15:38,916 DEBUG [RS:3;jenkins-hbase4:32937] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-17 11:15:38,916 DEBUG [RS:3;jenkins-hbase4:32937] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-17 11:15:38,916 DEBUG [RS:3;jenkins-hbase4:32937] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-17 11:15:38,916 DEBUG [RS:3;jenkins-hbase4:32937] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-17 11:15:38,916 DEBUG [RS:3;jenkins-hbase4:32937] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-17 11:15:38,916 DEBUG [RS:3;jenkins-hbase4:32937] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-17 11:15:38,919 INFO [RS:3;jenkins-hbase4:32937] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-17 11:15:38,919 INFO [RS:3;jenkins-hbase4:32937] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-17 11:15:38,919 INFO [RS:3;jenkins-hbase4:32937] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-17 11:15:38,930 INFO [RS:3;jenkins-hbase4:32937] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-17 11:15:38,930 INFO [RS:3;jenkins-hbase4:32937] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,32937,1689592538839-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
2023-07-17 11:15:38,940 INFO [RS:3;jenkins-hbase4:32937] regionserver.Replication(203): jenkins-hbase4.apache.org,32937,1689592538839 started 2023-07-17 11:15:38,941 INFO [RS:3;jenkins-hbase4:32937] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,32937,1689592538839, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:32937, sessionid=0x10172fea3e4000b 2023-07-17 11:15:38,941 DEBUG [RS:3;jenkins-hbase4:32937] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-17 11:15:38,941 DEBUG [RS:3;jenkins-hbase4:32937] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,32937,1689592538839 2023-07-17 11:15:38,941 DEBUG [RS:3;jenkins-hbase4:32937] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,32937,1689592538839' 2023-07-17 11:15:38,941 DEBUG [RS:3;jenkins-hbase4:32937] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-17 11:15:38,941 DEBUG [RS:3;jenkins-hbase4:32937] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-17 11:15:38,941 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35293] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-17 11:15:38,942 DEBUG [RS:3;jenkins-hbase4:32937] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-17 11:15:38,942 DEBUG [RS:3;jenkins-hbase4:32937] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-17 11:15:38,942 DEBUG [RS:3;jenkins-hbase4:32937] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,32937,1689592538839 2023-07-17 11:15:38,942 DEBUG [RS:3;jenkins-hbase4:32937] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,32937,1689592538839' 2023-07-17 11:15:38,942 DEBUG [RS:3;jenkins-hbase4:32937] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-17 11:15:38,943 DEBUG [RS:3;jenkins-hbase4:32937] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-17 11:15:38,943 DEBUG [RS:3;jenkins-hbase4:32937] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-17 11:15:38,943 INFO [RS:3;jenkins-hbase4:32937] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-17 11:15:38,943 INFO [RS:3;jenkins-hbase4:32937] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
2023-07-17 11:15:38,944 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35293] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-17 11:15:38,944 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35293] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-17 11:15:38,947 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35293] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-17 11:15:38,948 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35293] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-17 11:15:38,949 DEBUG [hconnection-0x52be585-metaLookup-shared--pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-17 11:15:38,953 INFO [RS-EventLoopGroup-14-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:34006, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-17 11:15:38,957 DEBUG [hconnection-0x52be585-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-17 11:15:38,958 INFO [RS-EventLoopGroup-15-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:37890, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-17 11:15:38,960 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35293] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-17 11:15:38,960 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35293] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-17 11:15:38,962 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35293] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:35293] to rsgroup master 2023-07-17 11:15:38,962 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35293] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:35293 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-17 11:15:38,962 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35293] ipc.CallRunner(144): callId: 20 service: MasterService methodName: ExecMasterService size: 118 connection: 172.31.14.131:37560 deadline: 1689593738962, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:35293 is either offline or it does not exist. 2023-07-17 11:15:38,963 WARN [Listener at localhost/33721] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:35293 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:35293 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-17 11:15:38,964 INFO [Listener at localhost/33721] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-17 11:15:38,965 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35293] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-17 11:15:38,965 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35293] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-17 11:15:38,965 INFO [Listener at localhost/33721] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:32841, jenkins-hbase4.apache.org:32847, jenkins-hbase4.apache.org:32937, jenkins-hbase4.apache.org:40865], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-17 11:15:38,966 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35293] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-17 11:15:38,966 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35293] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-17 11:15:39,014 INFO [Listener at localhost/33721] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testRSGroupListDoesNotContainFailedTableCreation Thread=563 (was 504) Potentially hanging thread: RS:1;jenkins-hbase4:32841-longCompactions-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: jenkins-hbase4:40865Replication Statistics #0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1930047144-2226-acceptor-0@c1de33e-ServerConnector@2cbc5b97{HTTP/1.1, (http/1.1)}{0.0.0.0:37393} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:2;jenkins-hbase4:32847 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1092) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:175) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:123) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:159) java.security.AccessController.doPrivileged(Native Method) javax.security.auth.Subject.doAs(Subject.java:360) org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1873) org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:319) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:156) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-10 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:879) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) 
Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:57231@0x658fd9e7-SendThread(127.0.0.1:57231) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: PacketResponder: BP-2000149786-172.31.14.131-1689592536481:blk_1073741829_1005, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x3af88b82-shared-pool-5 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=32847 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/e4f4a903-c5b5-3e76-c76e-62a71f0612cc/cluster_7f27a7fe-51d2-c427-02ab-5b6cae838d2a/dfs/data/data3/current/BP-2000149786-172.31.14.131-1689592536481 java.lang.Thread.sleep(Native Method) org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:179) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: pool-556-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer-27 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) 
java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=40865 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RS-EventLoopGroup-15-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: VolumeScannerThread(/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/e4f4a903-c5b5-3e76-c76e-62a71f0612cc/cluster_7f27a7fe-51d2-c427-02ab-5b6cae838d2a/dfs/data/data2) java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:627) Potentially hanging thread: PacketResponder: BP-2000149786-172.31.14.131-1689592536481:blk_1073741829_1005, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1718258822-2198 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp75035682-2536 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_368177631_17 at /127.0.0.1:37088 [Receiving block BP-2000149786-172.31.14.131-1689592536481:blk_1073741835_1011] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 3 on default port 33427 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: VolumeScannerThread(/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/e4f4a903-c5b5-3e76-c76e-62a71f0612cc/cluster_7f27a7fe-51d2-c427-02ab-5b6cae838d2a/dfs/data/data4) java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:627) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1943377392_17 at /127.0.0.1:37050 [Receiving block BP-2000149786-172.31.14.131-1689592536481:blk_1073741832_1008] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) 
sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.util.JvmPauseMonitor$Monitor@58263a08 java.lang.Thread.sleep(Native Method) org.apache.hadoop.util.JvmPauseMonitor$Monitor.run(JvmPauseMonitor.java:192) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x52be585-shared-pool-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-15-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-10-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) 
org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/33721.LruBlockCache.EvictionThread java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.io.hfile.LruBlockCache$EvictionThread.run(LruBlockCache.java:902) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=32847 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1943377392_17 at /127.0.0.1:60198 [Waiting for operation #4] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer-32 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: PacketResponder: BP-2000149786-172.31.14.131-1689592536481:blk_1073741834_1010, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-13 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) 
org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 4 on default port 35473 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:60132@0x4e7f4242-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: qtp728904516-2269 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/1811768867.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/33721-SendThread(127.0.0.1:57231) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) 
sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: qtp1664994548-2170 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp728904516-2270-acceptor-0@7ba085ac-ServerConnector@25b5d85a{HTTP/1.1, (http/1.1)}{0.0.0.0:38385} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.hdfs.server.datanode.DataXceiverServer@6a3dddad sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:113) org.apache.hadoop.hdfs.net.TcpPeerServer.accept(TcpPeerServer.java:85) org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:145) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (334980569) connection to localhost/127.0.0.1:35473 from jenkins.hfs.10 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:57231@0x48bd9ead-SendThread(127.0.0.1:57231) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=32847 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) 
java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: jenkins-hbase4:35293 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.master.assignment.AssignmentManager.waitOnAssignQueue(AssignmentManager.java:2102) org.apache.hadoop.hbase.master.assignment.AssignmentManager.processAssignQueue(AssignmentManager.java:2124) org.apache.hadoop.hbase.master.assignment.AssignmentManager.access$600(AssignmentManager.java:104) org.apache.hadoop.hbase.master.assignment.AssignmentManager$1.run(AssignmentManager.java:2064) Potentially hanging thread: Timer-31 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=32847 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40865 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: LeaseRenewer:jenkins@localhost:35063 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp728904516-2271 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (334980569) connection to localhost/127.0.0.1:35063 from jenkins.hfs.5 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:60132@0x4e7f4242-SendThread(127.0.0.1:60132) java.lang.Thread.sleep(Native Method) org.apache.zookeeper.client.StaticHostProvider.next(StaticHostProvider.java:369) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1137) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_182075320_17 at /127.0.0.1:38414 [Receiving block BP-2000149786-172.31.14.131-1689592536481:blk_1073741833_1009] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-2000149786-172.31.14.131-1689592536481:blk_1073741834_1010, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 2 on default port 33721 sun.misc.Unsafe.park(Native 
Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: RS-EventLoopGroup-11-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: BP-2000149786-172.31.14.131-1689592536481 heartbeating to localhost/127.0.0.1:35473 java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.IncrementalBlockReportManager.waitTillNextIBR(IncrementalBlockReportManager.java:158) org.apache.hadoop.hdfs.server.datanode.BPServiceActor.offerService(BPServiceActor.java:715) org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:846) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.hdfs.server.namenode.FSNamesystem$LazyPersistFileScrubber@ff264a2 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.server.namenode.FSNamesystem$LazyPersistFileScrubber.run(FSNamesystem.java:3975) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp728904516-2266 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/1811768867.run(Unknown Source) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1664994548-2169 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins.hfs.7@localhost:35473 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: BP-2000149786-172.31.14.131-1689592536481 heartbeating to localhost/127.0.0.1:35473 java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.IncrementalBlockReportManager.waitTillNextIBR(IncrementalBlockReportManager.java:158) org.apache.hadoop.hdfs.server.datanode.BPServiceActor.offerService(BPServiceActor.java:715) org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:846) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_182075320_17 at /127.0.0.1:37054 [Receiving block BP-2000149786-172.31.14.131-1689592536481:blk_1073741833_1009] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) 
org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp728904516-2273 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1718258822-2202 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_973022194_17 at /127.0.0.1:60246 [Receiving block BP-2000149786-172.31.14.131-1689592536481:blk_1073741829_1005] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=35293 sun.misc.Unsafe.park(Native 
Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp1664994548-2171 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1664994548-2168 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 1234586889@qtp-250528231-1 - Acceptor0 HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:43129 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.mortbay.io.nio.SelectorManager$SelectSet.doSelect(SelectorManager.java:498) org.mortbay.io.nio.SelectorManager.doSelect(SelectorManager.java:192) org.mortbay.jetty.nio.SelectChannelConnector.accept(SelectChannelConnector.java:124) org.mortbay.jetty.AbstractConnector$Acceptor.run(AbstractConnector.java:708) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-9 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:879) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) 
org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-16-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-11-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:57231@0x658fd9e7-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: qtp75035682-2539 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:57231@0x00cf5170-SendThread(127.0.0.1:57231) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) 
org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=32937 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: java.util.concurrent.ThreadPoolExecutor$Worker@4a6aeeb6[State = -1, empty queue] sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/33721-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=32841 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: IPC Client (334980569) connection to localhost/127.0.0.1:35063 from jenkins java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: PacketResponder: BP-2000149786-172.31.14.131-1689592536481:blk_1073741832_1008, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) 
org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=32841 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp1718258822-2200 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x3af88b82-shared-pool-3 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1943377392_17 at /127.0.0.1:38400 [Receiving block BP-2000149786-172.31.14.131-1689592536481:blk_1073741832_1008] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) 
org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1664994548-2165-acceptor-0@3b9c342f-ServerConnector@1c50d6e9{HTTP/1.1, (http/1.1)}{0.0.0.0:33089} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1664994548-2164 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/1811768867.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:60132@0x4e7f4242 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/1917705229.run(Unknown 
Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/33721-SendThread(127.0.0.1:57231) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: pool-540-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: M:0;jenkins-hbase4:35293 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) org.apache.hadoop.hbase.master.HMaster.waitForMasterActive(HMaster.java:634) org.apache.hadoop.hbase.regionserver.HRegionServer.initializeZooKeeper(HRegionServer.java:957) org.apache.hadoop.hbase.regionserver.HRegionServer.preRegistrationInitialization(HRegionServer.java:904) org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1006) org.apache.hadoop.hbase.master.HMaster.run(HMaster.java:541) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=32841 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=40865 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) 
org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=40865 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: Listener at localhost/33721-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: qtp1930047144-2229 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.hdfs.server.namenode.FSNamesystem$NameNodeEditLogRoller@3cc0e5fa java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.server.namenode.FSNamesystem$NameNodeEditLogRoller.run(FSNamesystem.java:3884) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp315263978-2257 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:0;jenkins-hbase4:40865 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1092) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:175) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:123) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:159) java.security.AccessController.doPrivileged(Native Method) javax.security.auth.Subject.doAs(Subject.java:360) 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1873) org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:319) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:156) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:57231@0x6facd029-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: globalEventExecutor-1-4 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) io.netty.util.concurrent.GlobalEventExecutor.takeTask(GlobalEventExecutor.java:95) io.netty.util.concurrent.GlobalEventExecutor$TaskRunner.run(GlobalEventExecutor.java:239) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer for 'DataNode' metrics system java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: qtp75035682-2533 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/1811768867.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:2;jenkins-hbase4:32847-longCompactions-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 1 on default port 33427 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=32841 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp1664994548-2166 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=35293 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: PacketResponder: BP-2000149786-172.31.14.131-1689592536481:blk_1073741832_1008, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) 
org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=40865 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp1664994548-2167 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/e4f4a903-c5b5-3e76-c76e-62a71f0612cc/cluster_7f27a7fe-51d2-c427-02ab-5b6cae838d2a/dfs/data/data5/current/BP-2000149786-172.31.14.131-1689592536481 java.lang.Thread.sleep(Native Method) org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:179) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-14-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_182075320_17 at /127.0.0.1:37072 [Waiting for operation #3] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) 
org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-15-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins.hfs.5@localhost:35063 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,39741,1689592532293 java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread.run(RSGroupInfoManagerImpl.java:797) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=40865 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: hconnection-0x52be585-metaLookup-shared--pool-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at 
localhost/33721.LruBlockCache.EvictionThread java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.io.hfile.LruBlockCache$EvictionThread.run(LruBlockCache.java:902) Potentially hanging thread: jenkins-hbase4:32937Replication Statistics #0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=35293 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=32937 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: IPC Client (334980569) connection to localhost/127.0.0.1:35063 from jenkins.hfs.4 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: org.apache.hadoop.util.JvmPauseMonitor$Monitor@54b131c0 java.lang.Thread.sleep(Native Method) org.apache.hadoop.util.JvmPauseMonitor$Monitor.run(JvmPauseMonitor.java:192) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-10-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) 
org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:57231@0x6facd029 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/1917705229.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-16-1 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:803) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:457) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=40865 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: pool-547-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=32937 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) 
java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=32841 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: pool-560-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp315263978-2258 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (334980569) connection to localhost/127.0.0.1:35473 from jenkins java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: RS-EventLoopGroup-13-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) 
org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 361516287@qtp-1617779999-0 java.lang.Object.wait(Native Method) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:626) Potentially hanging thread: Session-HouseKeeper-2f5a8a4b-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/33721-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35293 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: VolumeScannerThread(/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/e4f4a903-c5b5-3e76-c76e-62a71f0612cc/cluster_7f27a7fe-51d2-c427-02ab-5b6cae838d2a/dfs/data/data1) java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:627) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=32937 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) 
org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:57231@0x5b87e223-SendThread(127.0.0.1:57231) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: org.apache.hadoop.hdfs.server.namenode.LeaseManager$Monitor@2fe94bae java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.server.namenode.LeaseManager$Monitor.run(LeaseManager.java:528) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp728904516-2268 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/1811768867.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x3af88b82-shared-pool-4 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp75035682-2537 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost:35473/user/jenkins/test-data/1c20e0e4-e5a0-9ede-ceba-95bc6751880b-prefix:jenkins-hbase4.apache.org,32841,1689592537425 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:57231@0x5b87e223 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/1917705229.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=32937 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: CacheReplicationMonitor(875279797) sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2163) org.apache.hadoop.hdfs.server.blockmanagement.CacheReplicationMonitor.run(CacheReplicationMonitor.java:181) Potentially hanging thread: java.util.concurrent.ThreadPoolExecutor$Worker@187959b1[State = -1, empty queue] sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins.hfs.4@localhost:35063 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer-35 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: Timer-26 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=32937 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: hconnection-0x3af88b82-shared-pool-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32937 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: org.apache.hadoop.hdfs.server.datanode.DataXceiverServer@22be4a0b sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:113) 
org.apache.hadoop.hdfs.net.TcpPeerServer.accept(TcpPeerServer.java:85) org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:145) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-13-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=32847 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp728904516-2272 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer-33 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,35293,1689592537329 java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread.run(RSGroupInfoManagerImpl.java:797) Potentially hanging thread: IPC Server handler 4 on default port 33721 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) 
org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: Listener at localhost/33721-SendThread(127.0.0.1:57231) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: nioEventLoopGroup-14-1 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:803) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:457) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp315263978-2255 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/1811768867.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server idle connection scanner for port 35473 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:57231@0x48bd9ead-EventThread sun.misc.Unsafe.park(Native Method) 
java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: qtp1718258822-2196-acceptor-0@265ba479-ServerConnector@75805bd3{HTTP/1.1, (http/1.1)}{0.0.0.0:35573} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/33721.LruBlockCache.EvictionThread java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.io.hfile.LruBlockCache$EvictionThread.run(LruBlockCache.java:902) Potentially hanging thread: PacketResponder: BP-2000149786-172.31.14.131-1689592536481:blk_1073741833_1009, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1930047144-2230 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 0 on default port 35473 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_182075320_17 at /127.0.0.1:60282 [Receiving block BP-2000149786-172.31.14.131-1689592536481:blk_1073741833_1009] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) 
org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x3af88b82-shared-pool-2 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1930047144-2225 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/1811768867.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost:35473/user/jenkins/test-data/1c20e0e4-e5a0-9ede-ceba-95bc6751880b-prefix:jenkins-hbase4.apache.org,40865,1689592537384 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 0 on default port 33427 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: org.apache.hadoop.hdfs.server.namenode.FSNamesystem$NameNodeResourceMonitor@727af871 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.server.namenode.FSNamesystem$NameNodeResourceMonitor.run(FSNamesystem.java:3842) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: VolumeScannerThread(/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/e4f4a903-c5b5-3e76-c76e-62a71f0612cc/cluster_7f27a7fe-51d2-c427-02ab-5b6cae838d2a/dfs/data/data5) java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:627) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=35293 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:57231@0x6facd029-SendThread(127.0.0.1:57231) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: RS-EventLoopGroup-12-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) 
org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 4 on default port 40361 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=32841 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: org.apache.hadoop.util.JvmPauseMonitor$Monitor@70acdedc java.lang.Thread.sleep(Native Method) org.apache.hadoop.util.JvmPauseMonitor$Monitor.run(JvmPauseMonitor.java:192) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (334980569) connection to localhost/127.0.0.1:35473 from jenkins java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40865 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/e4f4a903-c5b5-3e76-c76e-62a71f0612cc/cluster_7f27a7fe-51d2-c427-02ab-5b6cae838d2a/dfs/data/data1/current/BP-2000149786-172.31.14.131-1689592536481 java.lang.Thread.sleep(Native Method) 
org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:179) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:57231@0x48bd9ead sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/1917705229.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-14-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-2000149786-172.31.14.131-1689592536481:blk_1073741835_1011, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:3;jenkins-hbase4:32937 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1092) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:175) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:123) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:159) java.security.AccessController.doPrivileged(Native Method) javax.security.auth.Subject.doAs(Subject.java:360) org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1873) org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:319) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:156) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer-28 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: PacketResponder: BP-2000149786-172.31.14.131-1689592536481:blk_1073741834_1010, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) 
org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (334980569) connection to localhost/127.0.0.1:35473 from jenkins.hfs.8 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: RS-EventLoopGroup-8-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp315263978-2256-acceptor-0@784eb944-ServerConnector@32b1a200{HTTP/1.1, (http/1.1)}{0.0.0.0:42279} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Session-HouseKeeper-66ba58da-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp315263978-2261 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) 
Potentially hanging thread: VolumeScannerThread(/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/e4f4a903-c5b5-3e76-c76e-62a71f0612cc/cluster_7f27a7fe-51d2-c427-02ab-5b6cae838d2a/dfs/data/data6) java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:627) Potentially hanging thread: Timer-29 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: Session-HouseKeeper-9f1cc1a-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp315263978-2260 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost:35473/user/jenkins/test-data/1c20e0e4-e5a0-9ede-ceba-95bc6751880b-prefix:jenkins-hbase4.apache.org,32841,1689592537425.meta sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_973022194_17 at /127.0.0.1:38382 [Receiving block BP-2000149786-172.31.14.131-1689592536481:blk_1073741829_1005] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) 
java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: pool-542-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 2 on default port 35473 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: RS:3;jenkins-hbase4:32937-longCompactions-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1930047144-2227 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-9-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server idle connection scanner for port 40361 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:57231@0x5b87e223-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: Timer-24 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=35293 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: IPC Server idle connection scanner for port 33721 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: VolumeScannerThread(/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/e4f4a903-c5b5-3e76-c76e-62a71f0612cc/cluster_7f27a7fe-51d2-c427-02ab-5b6cae838d2a/dfs/data/data3) java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:627) Potentially hanging thread: 1994651802@qtp-1256971785-0 java.lang.Object.wait(Native Method) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:626) Potentially hanging thread: IPC Client (334980569) connection to localhost/127.0.0.1:35063 from jenkins java.lang.Object.wait(Native Method) 
org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: hconnection-0x3af88b82-shared-pool-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp75035682-2534-acceptor-0@45728e77-ServerConnector@1ae039d1{HTTP/1.1, (http/1.1)}{0.0.0.0:43209} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=32847 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: 1276560029@qtp-1256971785-1 - Acceptor0 HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:45433 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.mortbay.io.nio.SelectorManager$SelectSet.doSelect(SelectorManager.java:498) org.mortbay.io.nio.SelectorManager.doSelect(SelectorManager.java:192) org.mortbay.jetty.nio.SelectChannelConnector.accept(SelectChannelConnector.java:124) org.mortbay.jetty.AbstractConnector$Acceptor.run(AbstractConnector.java:708) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:57231@0x00cf5170 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/1917705229.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 
RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32847 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=35293 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-11 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=35293 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_368177631_17 at /127.0.0.1:60286 [Receiving block BP-2000149786-172.31.14.131-1689592536481:blk_1073741834_1010] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) 
sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 2 on default port 40361 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: qtp1718258822-2199 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 3 on default port 33721 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1689592537650 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) 
java.util.concurrent.PriorityBlockingQueue.take(PriorityBlockingQueue.java:549) org.apache.hadoop.hbase.master.cleaner.HFileCleaner.consumerLoop(HFileCleaner.java:267) org.apache.hadoop.hbase.master.cleaner.HFileCleaner$2.run(HFileCleaner.java:251) Potentially hanging thread: RS:0;jenkins-hbase4:40865-longCompactions-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1930047144-2231 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server idle connection scanner for port 33427 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: BP-2000149786-172.31.14.131-1689592536481 heartbeating to localhost/127.0.0.1:35473 java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.IncrementalBlockReportManager.waitTillNextIBR(IncrementalBlockReportManager.java:158) org.apache.hadoop.hdfs.server.datanode.BPServiceActor.offerService(BPServiceActor.java:715) org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:846) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=35293 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: org.apache.hadoop.hdfs.server.datanode.DataXceiverServer@29da407d sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:113) org.apache.hadoop.hdfs.net.TcpPeerServer.accept(TcpPeerServer.java:85) 
org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:145) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_368177631_17 at /127.0.0.1:37056 [Receiving block BP-2000149786-172.31.14.131-1689592536481:blk_1073741834_1010] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins.hfs.9@localhost:35473 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.util.JvmPauseMonitor$Monitor@37da5ba7 java.lang.Thread.sleep(Native Method) org.apache.hadoop.util.JvmPauseMonitor$Monitor.run(JvmPauseMonitor.java:192) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-2000149786-172.31.14.131-1689592536481:blk_1073741835_1011, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=32841 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) 
org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=32847 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RS-EventLoopGroup-13-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1718258822-2197 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-2000149786-172.31.14.131-1689592536481:blk_1073741835_1011, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/e4f4a903-c5b5-3e76-c76e-62a71f0612cc/cluster_7f27a7fe-51d2-c427-02ab-5b6cae838d2a/dfs/data/data2/current/BP-2000149786-172.31.14.131-1689592536481 java.lang.Thread.sleep(Native Method) org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:179) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp728904516-2267 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) 
sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/1811768867.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-2000149786-172.31.14.131-1689592536481:blk_1073741833_1009, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins.hfs.8@localhost:35473 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:57231@0x66f06976-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: IPC Server handler 0 on default port 33721 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: 2028003843@qtp-465465534-1 - Acceptor0 HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:43699 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) 
sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.mortbay.io.nio.SelectorManager$SelectSet.doSelect(SelectorManager.java:498) org.mortbay.io.nio.SelectorManager.doSelect(SelectorManager.java:192) org.mortbay.jetty.nio.SelectChannelConnector.accept(SelectChannelConnector.java:124) org.mortbay.jetty.AbstractConnector$Acceptor.run(AbstractConnector.java:708) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582) Potentially hanging thread: qtp75035682-2540 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x3af88b82-metaLookup-shared--pool-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-8-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_368177631_17 at /127.0.0.1:38430 [Receiving block BP-2000149786-172.31.14.131-1689592536481:blk_1073741834_1010] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) 
org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=32847 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=40865 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:57231@0x658fd9e7 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/1917705229.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: jenkins-hbase4:32847Replication Statistics #0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-18-1 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:803) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:457) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=32841 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: pool-546-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp75035682-2538 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging 
thread: IPC Client (334980569) connection to localhost/127.0.0.1:35473 from jenkins.hfs.9 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:57231@0x00cf5170-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:57231@0x10a31e76 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/1917705229.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:57231@0x10a31e76-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: pool-551-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/40211-SendThread(127.0.0.1:60132) java.lang.Thread.sleep(Native Method) org.apache.zookeeper.ClientCnxn$SendThread.startConnect(ClientCnxn.java:1072) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1139) Potentially hanging thread: qtp1930047144-2232 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/33721 java.lang.Thread.dumpThreads(Native Method) 
java.lang.Thread.getAllStackTraces(Thread.java:1615) org.apache.hadoop.hbase.ResourceCheckerJUnitListener$ThreadResourceAnalyzer.getVal(ResourceCheckerJUnitListener.java:49) org.apache.hadoop.hbase.ResourceChecker.fill(ResourceChecker.java:110) org.apache.hadoop.hbase.ResourceChecker.fillEndings(ResourceChecker.java:104) org.apache.hadoop.hbase.ResourceChecker.end(ResourceChecker.java:206) org.apache.hadoop.hbase.ResourceCheckerJUnitListener.end(ResourceCheckerJUnitListener.java:165) org.apache.hadoop.hbase.ResourceCheckerJUnitListener.testFinished(ResourceCheckerJUnitListener.java:185) org.junit.runner.notification.SynchronizedRunListener.testFinished(SynchronizedRunListener.java:87) org.junit.runner.notification.RunNotifier$9.notifyListener(RunNotifier.java:225) org.junit.runner.notification.RunNotifier$SafeNotifier.run(RunNotifier.java:72) org.junit.runner.notification.RunNotifier.fireTestFinished(RunNotifier.java:222) org.junit.internal.runners.model.EachTestNotifier.fireTestFinished(EachTestNotifier.java:38) org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:372) org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) java.util.concurrent.FutureTask.run(FutureTask.java:266) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/33721-SendThread(127.0.0.1:57231) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: LeaseRenewer:jenkins@localhost:35473 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-16-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) 
org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-2000149786-172.31.14.131-1689592536481:blk_1073741832_1008, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (334980569) connection to localhost/127.0.0.1:35473 from jenkins.hfs.7 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=35293 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp1930047144-2228 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/e4f4a903-c5b5-3e76-c76e-62a71f0612cc/cluster_7f27a7fe-51d2-c427-02ab-5b6cae838d2a/dfs/data/data6/current/BP-2000149786-172.31.14.131-1689592536481 java.lang.Thread.sleep(Native Method) org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:179) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/33721-SendThread(127.0.0.1:57231) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32841 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) 
java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=32937 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp315263978-2259 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-8-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost:35473/user/jenkins/test-data/1c20e0e4-e5a0-9ede-ceba-95bc6751880b-prefix:jenkins-hbase4.apache.org,32847,1689592537460 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-12 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_368177631_17 at /127.0.0.1:38436 [Receiving block BP-2000149786-172.31.14.131-1689592536481:blk_1073741835_1011] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 1995487635@qtp-1617779999-1 - Acceptor0 HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:35987 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) 
sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.mortbay.io.nio.SelectorManager$SelectSet.doSelect(SelectorManager.java:498) org.mortbay.io.nio.SelectorManager.doSelect(SelectorManager.java:192) org.mortbay.jetty.nio.SelectChannelConnector.accept(SelectChannelConnector.java:124) org.mortbay.jetty.AbstractConnector$Acceptor.run(AbstractConnector.java:708) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582) Potentially hanging thread: IPC Server handler 1 on default port 40361 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: IPC Server handler 3 on default port 40361 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: Listener at localhost/33721-SendThread(127.0.0.1:57231) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: IPC Client (334980569) connection to localhost/127.0.0.1:35063 from jenkins.hfs.6 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: IPC Server handler 4 on default port 33427 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: ProcessThread(sid:0 cport:57231): sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.server.PrepRequestProcessor.run(PrepRequestProcessor.java:134) Potentially hanging thread: RS-EventLoopGroup-14-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) 
org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: pool-535-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x796cfeff-shared-pool-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: pool-555-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins.hfs.6@localhost:35063 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:57231@0x66f06976-SendThread(127.0.0.1:57231) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) 
sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_973022194_17 at /127.0.0.1:37030 [Receiving block BP-2000149786-172.31.14.131-1689592536481:blk_1073741829_1005] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-12-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:57231@0x10a31e76-SendThread(127.0.0.1:57231) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) 
org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: Timer-30 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: Timer-25 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: 733763639@qtp-250528231-0 java.lang.Object.wait(Native Method) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:626) Potentially hanging thread: refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/e4f4a903-c5b5-3e76-c76e-62a71f0612cc/cluster_7f27a7fe-51d2-c427-02ab-5b6cae838d2a/dfs/data/data4/current/BP-2000149786-172.31.14.131-1689592536481 java.lang.Thread.sleep(Native Method) org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:179) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:1;jenkins-hbase4:32841 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1092) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:175) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:123) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:159) java.security.AccessController.doPrivileged(Native Method) javax.security.auth.Subject.doAs(Subject.java:360) org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1873) org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:319) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:156) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: java.util.concurrent.ThreadPoolExecutor$Worker@72b5000f[State = -1, empty queue] sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=32841 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: IPC Server handler 0 on default 
port 40361 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: Listener at localhost/33721-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: jenkins-hbase4:32841Replication Statistics #0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1689592537650 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) org.apache.hadoop.hbase.master.cleaner.HFileCleaner.consumerLoop(HFileCleaner.java:267) org.apache.hadoop.hbase.master.cleaner.HFileCleaner$1.run(HFileCleaner.java:236) Potentially hanging thread: Timer-34 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: IPC Server handler 1 on default port 33721 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: pool-541-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-14 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 1 on default port 35473 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: PacketResponder: BP-2000149786-172.31.14.131-1689592536481:blk_1073741833_1009, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 2 on default port 33427 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: IPC Server handler 3 on default port 35473 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: Listener at localhost/40211-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) 
Potentially hanging thread: pool-537-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1718258822-2201 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.hdfs.server.blockmanagement.HeartbeatManager$Monitor@7a71fde4 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.server.blockmanagement.HeartbeatManager$Monitor.run(HeartbeatManager.java:451) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=32937 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1943377392_17 at /127.0.0.1:60278 [Receiving block BP-2000149786-172.31.14.131-1689592536481:blk_1073741832_1008] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) 
org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Session-HouseKeeper-59d4cd27-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=32847 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RS-EventLoopGroup-12-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:57231@0x66f06976 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) 
org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/1917705229.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/33721.LruBlockCache.EvictionThread java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.io.hfile.LruBlockCache$EvictionThread.run(LruBlockCache.java:902) Potentially hanging thread: org.apache.hadoop.hdfs.server.blockmanagement.PendingReplicationBlocks$PendingReplicationMonitor@4085c82a java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.server.blockmanagement.PendingReplicationBlocks$PendingReplicationMonitor.run(PendingReplicationBlocks.java:244) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/33721-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: 1797914875@qtp-465465534-0 java.lang.Object.wait(Native Method) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:626) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=32937 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp1718258822-2195 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/1811768867.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) 
java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-11-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40865 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: Listener at localhost/33721-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: NIOServerCxnFactory.AcceptThread:localhost/127.0.0.1:57231 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.zookeeper.server.NIOServerCnxnFactory$AcceptThread.select(NIOServerCnxnFactory.java:229) org.apache.zookeeper.server.NIOServerCnxnFactory$AcceptThread.run(NIOServerCnxnFactory.java:205) Potentially hanging thread: Session-HouseKeeper-52f634f0-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) 
java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x3af88b82-metaLookup-shared--pool-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp315263978-2262 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_368177631_17 at /127.0.0.1:60298 [Receiving block BP-2000149786-172.31.14.131-1689592536481:blk_1073741835_1011] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-10-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) 
org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp75035682-2535 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost:35473/user/jenkins/test-data/1c20e0e4-e5a0-9ede-ceba-95bc6751880b/MasterData-prefix:jenkins-hbase4.apache.org,35293,1689592537329 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-2000149786-172.31.14.131-1689592536481:blk_1073741829_1005, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=832 (was 770) - OpenFileDescriptor LEAK? 
-, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=470 (was 484), ProcessCount=172 (was 172), AvailableMemoryMB=2811 (was 2938) 2023-07-17 11:15:39,017 WARN [Listener at localhost/33721] hbase.ResourceChecker(130): Thread=563 is superior to 500 2023-07-17 11:15:39,035 INFO [Listener at localhost/33721] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testNotMoveTableToNullRSGroupWhenCreatingExistingTable Thread=563, OpenFileDescriptor=832, MaxFileDescriptor=60000, SystemLoadAverage=470, ProcessCount=172, AvailableMemoryMB=2811 2023-07-17 11:15:39,035 WARN [Listener at localhost/33721] hbase.ResourceChecker(130): Thread=563 is superior to 500 2023-07-17 11:15:39,036 INFO [Listener at localhost/33721] rsgroup.TestRSGroupsBase(132): testNotMoveTableToNullRSGroupWhenCreatingExistingTable 2023-07-17 11:15:39,039 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35293] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-17 11:15:39,039 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35293] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-17 11:15:39,040 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35293] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-17 11:15:39,040 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35293] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-17 11:15:39,040 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35293] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-17 11:15:39,041 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35293] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-17 11:15:39,041 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35293] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-17 11:15:39,042 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35293] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-17 11:15:39,045 INFO [RS:3;jenkins-hbase4:32937] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C32937%2C1689592538839, suffix=, logDir=hdfs://localhost:35473/user/jenkins/test-data/1c20e0e4-e5a0-9ede-ceba-95bc6751880b/WALs/jenkins-hbase4.apache.org,32937,1689592538839, archiveDir=hdfs://localhost:35473/user/jenkins/test-data/1c20e0e4-e5a0-9ede-ceba-95bc6751880b/oldWALs, maxLogs=32 2023-07-17 11:15:39,045 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35293] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-17 11:15:39,046 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35293] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-17 11:15:39,047 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35293] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-17 11:15:39,050 
INFO [Listener at localhost/33721] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-17 11:15:39,051 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35293] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-17 11:15:39,053 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35293] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-17 11:15:39,053 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35293] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-17 11:15:39,055 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35293] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-17 11:15:39,057 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35293] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-17 11:15:39,061 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35293] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-17 11:15:39,061 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35293] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-17 11:15:39,064 DEBUG [RS-EventLoopGroup-16-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:34879,DS-35b57e0d-606a-455a-9013-50414e4940ce,DISK] 2023-07-17 11:15:39,064 DEBUG [RS-EventLoopGroup-16-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:42893,DS-2bbda707-8935-4d1f-b0ec-3fa0f32b8627,DISK] 2023-07-17 11:15:39,067 DEBUG [RS-EventLoopGroup-16-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:39583,DS-059260ba-dc4a-47f2-a714-cdfcaeec5081,DISK] 2023-07-17 11:15:39,067 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35293] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:35293] to rsgroup master 2023-07-17 11:15:39,068 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35293] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:35293 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-17 11:15:39,068 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35293] ipc.CallRunner(144): callId: 48 service: MasterService methodName: ExecMasterService size: 118 connection: 172.31.14.131:37560 deadline: 1689593739067, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:35293 is either offline or it does not exist. 2023-07-17 11:15:39,068 WARN [Listener at localhost/33721] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:35293 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:35293 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-17 11:15:39,071 INFO [RS:3;jenkins-hbase4:32937] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/1c20e0e4-e5a0-9ede-ceba-95bc6751880b/WALs/jenkins-hbase4.apache.org,32937,1689592538839/jenkins-hbase4.apache.org%2C32937%2C1689592538839.1689592539046 2023-07-17 11:15:39,071 INFO [Listener at localhost/33721] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-17 11:15:39,071 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35293] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-17 11:15:39,072 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35293] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-17 11:15:39,072 INFO [Listener at localhost/33721] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:32841, jenkins-hbase4.apache.org:32847, jenkins-hbase4.apache.org:32937, jenkins-hbase4.apache.org:40865], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-17 11:15:39,072 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35293] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-17 11:15:39,072 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35293] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-17 11:15:39,074 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35293] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 't1', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'cf1', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-17 11:15:39,074 DEBUG [RS:3;jenkins-hbase4:32937] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:34879,DS-35b57e0d-606a-455a-9013-50414e4940ce,DISK], DatanodeInfoWithStorage[127.0.0.1:42893,DS-2bbda707-8935-4d1f-b0ec-3fa0f32b8627,DISK], 
DatanodeInfoWithStorage[127.0.0.1:39583,DS-059260ba-dc4a-47f2-a714-cdfcaeec5081,DISK]] 2023-07-17 11:15:39,075 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35293] procedure2.ProcedureExecutor(1029): Stored pid=12, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=t1 2023-07-17 11:15:39,076 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_PRE_OPERATION 2023-07-17 11:15:39,076 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35293] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "t1" procId is: 12 2023-07-17 11:15:39,077 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35293] master.MasterRpcServices(1230): Checking to see if procedure is done pid=12 2023-07-17 11:15:39,078 DEBUG [PEWorker-1] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-17 11:15:39,078 DEBUG [PEWorker-1] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-17 11:15:39,079 DEBUG [PEWorker-1] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-17 11:15:39,080 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-17 11:15:39,082 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:35473/user/jenkins/test-data/1c20e0e4-e5a0-9ede-ceba-95bc6751880b/.tmp/data/default/t1/4abdadda4675565eac5ac185d77cb9ff 2023-07-17 11:15:39,082 DEBUG [HFileArchiver-5] backup.HFileArchiver(153): Directory hdfs://localhost:35473/user/jenkins/test-data/1c20e0e4-e5a0-9ede-ceba-95bc6751880b/.tmp/data/default/t1/4abdadda4675565eac5ac185d77cb9ff empty. 
2023-07-17 11:15:39,083 DEBUG [HFileArchiver-5] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:35473/user/jenkins/test-data/1c20e0e4-e5a0-9ede-ceba-95bc6751880b/.tmp/data/default/t1/4abdadda4675565eac5ac185d77cb9ff 2023-07-17 11:15:39,083 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(328): Archived t1 regions 2023-07-17 11:15:39,097 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost:35473/user/jenkins/test-data/1c20e0e4-e5a0-9ede-ceba-95bc6751880b/.tmp/data/default/t1/.tabledesc/.tableinfo.0000000001 2023-07-17 11:15:39,099 INFO [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(7675): creating {ENCODED => 4abdadda4675565eac5ac185d77cb9ff, NAME => 't1,,1689592539074.4abdadda4675565eac5ac185d77cb9ff.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='t1', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'cf1', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:35473/user/jenkins/test-data/1c20e0e4-e5a0-9ede-ceba-95bc6751880b/.tmp 2023-07-17 11:15:39,110 DEBUG [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(866): Instantiated t1,,1689592539074.4abdadda4675565eac5ac185d77cb9ff.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-17 11:15:39,110 DEBUG [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1604): Closing 4abdadda4675565eac5ac185d77cb9ff, disabling compactions & flushes 2023-07-17 11:15:39,110 INFO [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1626): Closing region t1,,1689592539074.4abdadda4675565eac5ac185d77cb9ff. 2023-07-17 11:15:39,110 DEBUG [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on t1,,1689592539074.4abdadda4675565eac5ac185d77cb9ff. 2023-07-17 11:15:39,110 DEBUG [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1714): Acquired close lock on t1,,1689592539074.4abdadda4675565eac5ac185d77cb9ff. after waiting 0 ms 2023-07-17 11:15:39,110 DEBUG [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1724): Updates disabled for region t1,,1689592539074.4abdadda4675565eac5ac185d77cb9ff. 2023-07-17 11:15:39,110 INFO [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1838): Closed t1,,1689592539074.4abdadda4675565eac5ac185d77cb9ff. 2023-07-17 11:15:39,110 DEBUG [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1558): Region close journal for 4abdadda4675565eac5ac185d77cb9ff: 2023-07-17 11:15:39,112 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_ADD_TO_META 2023-07-17 11:15:39,113 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"t1,,1689592539074.4abdadda4675565eac5ac185d77cb9ff.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689592539113"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689592539113"}]},"ts":"1689592539113"} 2023-07-17 11:15:39,114 INFO [PEWorker-1] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 
2023-07-17 11:15:39,115 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-17 11:15:39,115 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"t1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689592539115"}]},"ts":"1689592539115"} 2023-07-17 11:15:39,116 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=t1, state=ENABLING in hbase:meta 2023-07-17 11:15:39,119 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-17 11:15:39,119 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-17 11:15:39,119 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-17 11:15:39,119 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-17 11:15:39,119 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(362): server 3 is on host 0 2023-07-17 11:15:39,119 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-17 11:15:39,119 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=t1, region=4abdadda4675565eac5ac185d77cb9ff, ASSIGN}] 2023-07-17 11:15:39,120 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=t1, region=4abdadda4675565eac5ac185d77cb9ff, ASSIGN 2023-07-17 11:15:39,120 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=t1, region=4abdadda4675565eac5ac185d77cb9ff, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,32937,1689592538839; forceNewPlan=false, retain=false 2023-07-17 11:15:39,178 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35293] master.MasterRpcServices(1230): Checking to see if procedure is done pid=12 2023-07-17 11:15:39,271 INFO [jenkins-hbase4:35293] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
2023-07-17 11:15:39,272 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=13 updating hbase:meta row=4abdadda4675565eac5ac185d77cb9ff, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,32937,1689592538839 2023-07-17 11:15:39,272 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"t1,,1689592539074.4abdadda4675565eac5ac185d77cb9ff.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689592539272"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689592539272"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689592539272"}]},"ts":"1689592539272"} 2023-07-17 11:15:39,274 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=14, ppid=13, state=RUNNABLE; OpenRegionProcedure 4abdadda4675565eac5ac185d77cb9ff, server=jenkins-hbase4.apache.org,32937,1689592538839}] 2023-07-17 11:15:39,379 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35293] master.MasterRpcServices(1230): Checking to see if procedure is done pid=12 2023-07-17 11:15:39,426 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,32937,1689592538839 2023-07-17 11:15:39,427 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-17 11:15:39,428 INFO [RS-EventLoopGroup-16-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:59948, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-17 11:15:39,432 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open t1,,1689592539074.4abdadda4675565eac5ac185d77cb9ff. 2023-07-17 11:15:39,432 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 4abdadda4675565eac5ac185d77cb9ff, NAME => 't1,,1689592539074.4abdadda4675565eac5ac185d77cb9ff.', STARTKEY => '', ENDKEY => ''} 2023-07-17 11:15:39,432 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table t1 4abdadda4675565eac5ac185d77cb9ff 2023-07-17 11:15:39,432 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated t1,,1689592539074.4abdadda4675565eac5ac185d77cb9ff.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-17 11:15:39,432 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 4abdadda4675565eac5ac185d77cb9ff 2023-07-17 11:15:39,432 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 4abdadda4675565eac5ac185d77cb9ff 2023-07-17 11:15:39,433 INFO [StoreOpener-4abdadda4675565eac5ac185d77cb9ff-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family cf1 of region 4abdadda4675565eac5ac185d77cb9ff 2023-07-17 11:15:39,435 DEBUG [StoreOpener-4abdadda4675565eac5ac185d77cb9ff-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:35473/user/jenkins/test-data/1c20e0e4-e5a0-9ede-ceba-95bc6751880b/data/default/t1/4abdadda4675565eac5ac185d77cb9ff/cf1 2023-07-17 11:15:39,435 DEBUG [StoreOpener-4abdadda4675565eac5ac185d77cb9ff-1] 
util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:35473/user/jenkins/test-data/1c20e0e4-e5a0-9ede-ceba-95bc6751880b/data/default/t1/4abdadda4675565eac5ac185d77cb9ff/cf1 2023-07-17 11:15:39,435 INFO [StoreOpener-4abdadda4675565eac5ac185d77cb9ff-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 4abdadda4675565eac5ac185d77cb9ff columnFamilyName cf1 2023-07-17 11:15:39,436 INFO [StoreOpener-4abdadda4675565eac5ac185d77cb9ff-1] regionserver.HStore(310): Store=4abdadda4675565eac5ac185d77cb9ff/cf1, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-17 11:15:39,436 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:35473/user/jenkins/test-data/1c20e0e4-e5a0-9ede-ceba-95bc6751880b/data/default/t1/4abdadda4675565eac5ac185d77cb9ff 2023-07-17 11:15:39,437 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:35473/user/jenkins/test-data/1c20e0e4-e5a0-9ede-ceba-95bc6751880b/data/default/t1/4abdadda4675565eac5ac185d77cb9ff 2023-07-17 11:15:39,439 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 4abdadda4675565eac5ac185d77cb9ff 2023-07-17 11:15:39,441 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:35473/user/jenkins/test-data/1c20e0e4-e5a0-9ede-ceba-95bc6751880b/data/default/t1/4abdadda4675565eac5ac185d77cb9ff/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-17 11:15:39,441 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 4abdadda4675565eac5ac185d77cb9ff; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11484036160, jitterRate=0.06953421235084534}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-17 11:15:39,441 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 4abdadda4675565eac5ac185d77cb9ff: 2023-07-17 11:15:39,442 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for t1,,1689592539074.4abdadda4675565eac5ac185d77cb9ff., pid=14, masterSystemTime=1689592539426 2023-07-17 11:15:39,446 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for t1,,1689592539074.4abdadda4675565eac5ac185d77cb9ff. 2023-07-17 11:15:39,447 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened t1,,1689592539074.4abdadda4675565eac5ac185d77cb9ff. 
2023-07-17 11:15:39,448 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=13 updating hbase:meta row=4abdadda4675565eac5ac185d77cb9ff, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,32937,1689592538839 2023-07-17 11:15:39,448 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"t1,,1689592539074.4abdadda4675565eac5ac185d77cb9ff.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689592539448"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689592539448"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689592539448"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689592539448"}]},"ts":"1689592539448"} 2023-07-17 11:15:39,451 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=14, resume processing ppid=13 2023-07-17 11:15:39,451 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=14, ppid=13, state=SUCCESS; OpenRegionProcedure 4abdadda4675565eac5ac185d77cb9ff, server=jenkins-hbase4.apache.org,32937,1689592538839 in 175 msec 2023-07-17 11:15:39,453 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=13, resume processing ppid=12 2023-07-17 11:15:39,453 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=13, ppid=12, state=SUCCESS; TransitRegionStateProcedure table=t1, region=4abdadda4675565eac5ac185d77cb9ff, ASSIGN in 332 msec 2023-07-17 11:15:39,454 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-17 11:15:39,454 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"t1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689592539454"}]},"ts":"1689592539454"} 2023-07-17 11:15:39,455 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=t1, state=ENABLED in hbase:meta 2023-07-17 11:15:39,458 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_POST_OPERATION 2023-07-17 11:15:39,460 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=12, state=SUCCESS; CreateTableProcedure table=t1 in 384 msec 2023-07-17 11:15:39,680 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35293] master.MasterRpcServices(1230): Checking to see if procedure is done pid=12 2023-07-17 11:15:39,680 INFO [Listener at localhost/33721] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:t1, procId: 12 completed 2023-07-17 11:15:39,680 DEBUG [Listener at localhost/33721] hbase.HBaseTestingUtility(3430): Waiting until all regions of table t1 get assigned. Timeout = 60000ms 2023-07-17 11:15:39,680 INFO [Listener at localhost/33721] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-17 11:15:39,682 INFO [Listener at localhost/33721] hbase.HBaseTestingUtility(3484): All regions for table t1 assigned to meta. Checking AM states. 2023-07-17 11:15:39,682 INFO [Listener at localhost/33721] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-17 11:15:39,682 INFO [Listener at localhost/33721] hbase.HBaseTestingUtility(3504): All regions for table t1 assigned. 
2023-07-17 11:15:39,684 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35293] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 't1', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'cf1', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-17 11:15:39,685 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35293] procedure2.ProcedureExecutor(1029): Stored pid=15, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=t1 2023-07-17 11:15:39,686 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=15, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_PRE_OPERATION 2023-07-17 11:15:39,687 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35293] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.TableExistsException: t1 at org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.prepareCreate(CreateTableProcedure.java:243) at org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.executeFromState(CreateTableProcedure.java:85) at org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.executeFromState(CreateTableProcedure.java:53) at org.apache.hadoop.hbase.procedure2.StateMachineProcedure.execute(StateMachineProcedure.java:188) at org.apache.hadoop.hbase.procedure2.Procedure.doExecute(Procedure.java:922) at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execProcedure(ProcedureExecutor.java:1646) at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.executeProcedure(ProcedureExecutor.java:1392) at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.access$1100(ProcedureExecutor.java:73) at org.apache.hadoop.hbase.procedure2.ProcedureExecutor$WorkerThread.run(ProcedureExecutor.java:1964) 2023-07-17 11:15:39,688 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35293] ipc.CallRunner(144): callId: 65 service: MasterService methodName: CreateTable size: 353 connection: 172.31.14.131:37560 deadline: 1689592599683, exception=org.apache.hadoop.hbase.TableExistsException: t1 2023-07-17 11:15:39,689 INFO [Listener at localhost/33721] hbase.Waiter(180): Waiting up to [5,000] milli-secs(wait.for.ratio=[1]) 2023-07-17 11:15:39,690 INFO [PEWorker-2] procedure2.ProcedureExecutor(1528): Rolled back pid=15, state=ROLLEDBACK, exception=org.apache.hadoop.hbase.TableExistsException via master-create-table:org.apache.hadoop.hbase.TableExistsException: t1; CreateTableProcedure table=t1 exec-time=6 msec 2023-07-17 11:15:39,790 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35293] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-17 11:15:39,790 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35293] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-17 11:15:39,791 INFO [Listener at localhost/33721] client.HBaseAdmin$15(890): Started disable of t1 2023-07-17 11:15:39,791 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35293] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable t1 2023-07-17 11:15:39,792 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35293] 
procedure2.ProcedureExecutor(1029): Stored pid=16, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=t1 2023-07-17 11:15:39,795 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35293] master.MasterRpcServices(1230): Checking to see if procedure is done pid=16 2023-07-17 11:15:39,795 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"t1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689592539795"}]},"ts":"1689592539795"} 2023-07-17 11:15:39,796 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=t1, state=DISABLING in hbase:meta 2023-07-17 11:15:39,798 INFO [PEWorker-3] procedure.DisableTableProcedure(293): Set t1 to state=DISABLING 2023-07-17 11:15:39,799 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=17, ppid=16, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=t1, region=4abdadda4675565eac5ac185d77cb9ff, UNASSIGN}] 2023-07-17 11:15:39,800 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=17, ppid=16, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=t1, region=4abdadda4675565eac5ac185d77cb9ff, UNASSIGN 2023-07-17 11:15:39,800 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=17 updating hbase:meta row=4abdadda4675565eac5ac185d77cb9ff, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,32937,1689592538839 2023-07-17 11:15:39,800 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"t1,,1689592539074.4abdadda4675565eac5ac185d77cb9ff.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689592539800"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689592539800"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689592539800"}]},"ts":"1689592539800"} 2023-07-17 11:15:39,802 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=18, ppid=17, state=RUNNABLE; CloseRegionProcedure 4abdadda4675565eac5ac185d77cb9ff, server=jenkins-hbase4.apache.org,32937,1689592538839}] 2023-07-17 11:15:39,820 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-datanode.properties,hadoop-metrics2.properties 2023-07-17 11:15:39,897 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35293] master.MasterRpcServices(1230): Checking to see if procedure is done pid=16 2023-07-17 11:15:39,954 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 4abdadda4675565eac5ac185d77cb9ff 2023-07-17 11:15:39,957 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 4abdadda4675565eac5ac185d77cb9ff, disabling compactions & flushes 2023-07-17 11:15:39,957 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region t1,,1689592539074.4abdadda4675565eac5ac185d77cb9ff. 2023-07-17 11:15:39,957 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on t1,,1689592539074.4abdadda4675565eac5ac185d77cb9ff. 2023-07-17 11:15:39,957 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on t1,,1689592539074.4abdadda4675565eac5ac185d77cb9ff. 
after waiting 0 ms 2023-07-17 11:15:39,957 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region t1,,1689592539074.4abdadda4675565eac5ac185d77cb9ff. 2023-07-17 11:15:39,961 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:35473/user/jenkins/test-data/1c20e0e4-e5a0-9ede-ceba-95bc6751880b/data/default/t1/4abdadda4675565eac5ac185d77cb9ff/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-17 11:15:39,962 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed t1,,1689592539074.4abdadda4675565eac5ac185d77cb9ff. 2023-07-17 11:15:39,963 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 4abdadda4675565eac5ac185d77cb9ff: 2023-07-17 11:15:39,965 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=17 updating hbase:meta row=4abdadda4675565eac5ac185d77cb9ff, regionState=CLOSED 2023-07-17 11:15:39,965 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"t1,,1689592539074.4abdadda4675565eac5ac185d77cb9ff.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689592539964"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689592539964"}]},"ts":"1689592539964"} 2023-07-17 11:15:39,968 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=18, resume processing ppid=17 2023-07-17 11:15:39,968 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=18, ppid=17, state=SUCCESS; CloseRegionProcedure 4abdadda4675565eac5ac185d77cb9ff, server=jenkins-hbase4.apache.org,32937,1689592538839 in 164 msec 2023-07-17 11:15:39,969 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=17, resume processing ppid=16 2023-07-17 11:15:39,969 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=17, ppid=16, state=SUCCESS; TransitRegionStateProcedure table=t1, region=4abdadda4675565eac5ac185d77cb9ff, UNASSIGN in 169 msec 2023-07-17 11:15:39,970 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"t1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689592539970"}]},"ts":"1689592539970"} 2023-07-17 11:15:39,974 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=t1, state=DISABLED in hbase:meta 2023-07-17 11:15:39,975 INFO [PEWorker-3] procedure.DisableTableProcedure(305): Set t1 to state=DISABLED 2023-07-17 11:15:39,975 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 4abdadda4675565eac5ac185d77cb9ff 2023-07-17 11:15:39,977 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=16, state=SUCCESS; DisableTableProcedure table=t1 in 184 msec 2023-07-17 11:15:40,098 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35293] master.MasterRpcServices(1230): Checking to see if procedure is done pid=16 2023-07-17 11:15:40,099 INFO [Listener at localhost/33721] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:t1, procId: 16 completed 2023-07-17 11:15:40,099 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35293] master.HMaster$5(2228): Client=jenkins//172.31.14.131 delete t1 2023-07-17 11:15:40,100 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35293] procedure2.ProcedureExecutor(1029): Stored pid=19, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=t1 2023-07-17 11:15:40,102 DEBUG 
[PEWorker-5] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=19, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=t1 2023-07-17 11:15:40,102 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35293] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 't1' from rsgroup 'default' 2023-07-17 11:15:40,103 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=19, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=t1 2023-07-17 11:15:40,105 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35293] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-17 11:15:40,105 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35293] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-17 11:15:40,105 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35293] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-17 11:15:40,107 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:35473/user/jenkins/test-data/1c20e0e4-e5a0-9ede-ceba-95bc6751880b/.tmp/data/default/t1/4abdadda4675565eac5ac185d77cb9ff 2023-07-17 11:15:40,108 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35293] master.MasterRpcServices(1230): Checking to see if procedure is done pid=19 2023-07-17 11:15:40,109 DEBUG [HFileArchiver-1] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:35473/user/jenkins/test-data/1c20e0e4-e5a0-9ede-ceba-95bc6751880b/.tmp/data/default/t1/4abdadda4675565eac5ac185d77cb9ff/cf1, FileablePath, hdfs://localhost:35473/user/jenkins/test-data/1c20e0e4-e5a0-9ede-ceba-95bc6751880b/.tmp/data/default/t1/4abdadda4675565eac5ac185d77cb9ff/recovered.edits] 2023-07-17 11:15:40,114 DEBUG [HFileArchiver-1] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:35473/user/jenkins/test-data/1c20e0e4-e5a0-9ede-ceba-95bc6751880b/.tmp/data/default/t1/4abdadda4675565eac5ac185d77cb9ff/recovered.edits/4.seqid to hdfs://localhost:35473/user/jenkins/test-data/1c20e0e4-e5a0-9ede-ceba-95bc6751880b/archive/data/default/t1/4abdadda4675565eac5ac185d77cb9ff/recovered.edits/4.seqid 2023-07-17 11:15:40,114 DEBUG [HFileArchiver-1] backup.HFileArchiver(596): Deleted hdfs://localhost:35473/user/jenkins/test-data/1c20e0e4-e5a0-9ede-ceba-95bc6751880b/.tmp/data/default/t1/4abdadda4675565eac5ac185d77cb9ff 2023-07-17 11:15:40,114 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(328): Archived t1 regions 2023-07-17 11:15:40,116 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=19, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=t1 2023-07-17 11:15:40,118 WARN [PEWorker-5] procedure.DeleteTableProcedure(384): Deleting some vestigial 1 rows of t1 from hbase:meta 2023-07-17 11:15:40,119 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(421): Removing 't1' descriptor. 2023-07-17 11:15:40,120 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=19, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=t1 2023-07-17 11:15:40,120 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(411): Removing 't1' from region states. 
2023-07-17 11:15:40,120 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"t1,,1689592539074.4abdadda4675565eac5ac185d77cb9ff.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689592540120"}]},"ts":"9223372036854775807"} 2023-07-17 11:15:40,122 INFO [PEWorker-5] hbase.MetaTableAccessor(1788): Deleted 1 regions from META 2023-07-17 11:15:40,122 DEBUG [PEWorker-5] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => 4abdadda4675565eac5ac185d77cb9ff, NAME => 't1,,1689592539074.4abdadda4675565eac5ac185d77cb9ff.', STARTKEY => '', ENDKEY => ''}] 2023-07-17 11:15:40,122 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(415): Marking 't1' as deleted. 2023-07-17 11:15:40,122 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"t1","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689592540122"}]},"ts":"9223372036854775807"} 2023-07-17 11:15:40,123 INFO [PEWorker-5] hbase.MetaTableAccessor(1658): Deleted table t1 state from META 2023-07-17 11:15:40,127 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(130): Finished pid=19, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=t1 2023-07-17 11:15:40,128 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=19, state=SUCCESS; DeleteTableProcedure table=t1 in 28 msec 2023-07-17 11:15:40,210 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35293] master.MasterRpcServices(1230): Checking to see if procedure is done pid=19 2023-07-17 11:15:40,210 INFO [Listener at localhost/33721] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: default:t1, procId: 19 completed 2023-07-17 11:15:40,214 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35293] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-17 11:15:40,214 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35293] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-17 11:15:40,215 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35293] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-17 11:15:40,215 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35293] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-17 11:15:40,215 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35293] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-17 11:15:40,216 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35293] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-17 11:15:40,216 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35293] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-17 11:15:40,217 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35293] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-17 11:15:40,220 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35293] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-17 11:15:40,221 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35293] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-17 11:15:40,225 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35293] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-17 11:15:40,228 INFO [Listener at localhost/33721] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-17 11:15:40,229 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35293] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-17 11:15:40,230 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35293] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-17 11:15:40,231 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35293] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-17 11:15:40,232 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35293] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-17 11:15:40,237 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35293] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-17 11:15:40,245 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35293] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-17 11:15:40,245 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35293] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-17 11:15:40,246 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35293] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:35293] to rsgroup master 2023-07-17 11:15:40,247 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35293] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:35293 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-17 11:15:40,247 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35293] ipc.CallRunner(144): callId: 105 service: MasterService methodName: ExecMasterService size: 118 connection: 172.31.14.131:37560 deadline: 1689593740246, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:35293 is either offline or it does not exist. 2023-07-17 11:15:40,247 WARN [Listener at localhost/33721] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:35293 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:35293 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-17 11:15:40,251 INFO [Listener at localhost/33721] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-17 11:15:40,252 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35293] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-17 11:15:40,252 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35293] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-17 11:15:40,253 INFO [Listener at localhost/33721] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:32841, jenkins-hbase4.apache.org:32847, jenkins-hbase4.apache.org:32937, jenkins-hbase4.apache.org:40865], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-17 11:15:40,253 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35293] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-17 11:15:40,253 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35293] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-17 11:15:40,293 INFO [Listener at localhost/33721] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testNotMoveTableToNullRSGroupWhenCreatingExistingTable Thread=573 (was 563) - Thread LEAK? -, OpenFileDescriptor=842 (was 832) - OpenFileDescriptor LEAK? 
-, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=455 (was 470), ProcessCount=172 (was 172), AvailableMemoryMB=2743 (was 2811) 2023-07-17 11:15:40,293 WARN [Listener at localhost/33721] hbase.ResourceChecker(130): Thread=573 is superior to 500 2023-07-17 11:15:40,317 INFO [Listener at localhost/33721] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testNonExistentTableMove Thread=572, OpenFileDescriptor=842, MaxFileDescriptor=60000, SystemLoadAverage=455, ProcessCount=172, AvailableMemoryMB=2730 2023-07-17 11:15:40,317 WARN [Listener at localhost/33721] hbase.ResourceChecker(130): Thread=572 is superior to 500 2023-07-17 11:15:40,317 INFO [Listener at localhost/33721] rsgroup.TestRSGroupsBase(132): testNonExistentTableMove 2023-07-17 11:15:40,321 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35293] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-17 11:15:40,321 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35293] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-17 11:15:40,322 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35293] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-17 11:15:40,322 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35293] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-17 11:15:40,322 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35293] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-17 11:15:40,323 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35293] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-17 11:15:40,323 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35293] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-17 11:15:40,323 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35293] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-17 11:15:40,326 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35293] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-17 11:15:40,327 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35293] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-17 11:15:40,328 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35293] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-17 11:15:40,330 INFO [Listener at localhost/33721] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-17 11:15:40,331 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35293] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-17 11:15:40,333 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35293] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-17 11:15:40,333 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35293] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-17 11:15:40,334 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35293] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-17 11:15:40,336 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35293] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-17 11:15:40,338 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35293] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-17 11:15:40,338 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35293] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-17 11:15:40,340 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35293] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:35293] to rsgroup master 2023-07-17 11:15:40,340 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35293] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:35293 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-17 11:15:40,340 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35293] ipc.CallRunner(144): callId: 133 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:37560 deadline: 1689593740340, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:35293 is either offline or it does not exist. 2023-07-17 11:15:40,341 WARN [Listener at localhost/33721] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:35293 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:35293 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-17 11:15:40,342 INFO [Listener at localhost/33721] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-17 11:15:40,343 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35293] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-17 11:15:40,343 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35293] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-17 11:15:40,343 INFO [Listener at localhost/33721] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:32841, jenkins-hbase4.apache.org:32847, jenkins-hbase4.apache.org:32937, jenkins-hbase4.apache.org:40865], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-17 11:15:40,344 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35293] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-17 11:15:40,344 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35293] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-17 11:15:40,345 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35293] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=GrouptestNonExistentTableMove 2023-07-17 11:15:40,345 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35293] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-17 11:15:40,346 INFO [Listener at localhost/33721] rsgroup.TestRSGroupsAdmin1(389): Moving table GrouptestNonExistentTableMove to default 2023-07-17 11:15:40,352 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35293] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=GrouptestNonExistentTableMove 2023-07-17 11:15:40,352 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35293] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-17 11:15:40,356 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35293] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-17 11:15:40,356 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35293] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-17 11:15:40,357 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35293] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-17 11:15:40,357 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35293] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-17 11:15:40,357 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35293] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-17 11:15:40,358 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35293] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-17 11:15:40,358 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35293] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-17 11:15:40,358 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35293] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-17 11:15:40,361 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35293] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-17 11:15:40,362 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35293] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-17 11:15:40,365 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35293] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-17 11:15:40,367 INFO [Listener at localhost/33721] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-17 11:15:40,368 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35293] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-17 11:15:40,370 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35293] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-17 11:15:40,371 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35293] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-17 11:15:40,372 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35293] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-17 11:15:40,373 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35293] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-17 11:15:40,375 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35293] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-17 11:15:40,376 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35293] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-17 11:15:40,377 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35293] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:35293] to rsgroup master 2023-07-17 11:15:40,378 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35293] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:35293 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-17 11:15:40,378 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35293] ipc.CallRunner(144): callId: 168 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:37560 deadline: 1689593740377, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:35293 is either offline or it does not exist. 2023-07-17 11:15:40,378 WARN [Listener at localhost/33721] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:35293 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:35293 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-17 11:15:40,380 INFO [Listener at localhost/33721] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-17 11:15:40,380 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35293] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-17 11:15:40,381 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35293] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-17 11:15:40,381 INFO [Listener at localhost/33721] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:32841, jenkins-hbase4.apache.org:32847, jenkins-hbase4.apache.org:32937, jenkins-hbase4.apache.org:40865], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-17 11:15:40,382 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35293] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-17 11:15:40,382 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35293] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-17 11:15:40,403 INFO [Listener at localhost/33721] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testNonExistentTableMove Thread=572 (was 572), OpenFileDescriptor=840 (was 842), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=455 (was 455), ProcessCount=172 (was 172), AvailableMemoryMB=2730 (was 2730) 2023-07-17 11:15:40,403 WARN [Listener at localhost/33721] hbase.ResourceChecker(130): Thread=572 is superior to 500 2023-07-17 11:15:40,426 INFO [Listener at localhost/33721] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testGroupInfoMultiAccessing Thread=572, OpenFileDescriptor=840, MaxFileDescriptor=60000, SystemLoadAverage=455, ProcessCount=172, AvailableMemoryMB=2730 2023-07-17 11:15:40,426 WARN [Listener at localhost/33721] hbase.ResourceChecker(130): Thread=572 is superior to 500 2023-07-17 11:15:40,426 INFO [Listener at localhost/33721] rsgroup.TestRSGroupsBase(132): testGroupInfoMultiAccessing 2023-07-17 11:15:40,430 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35293] 
rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-17 11:15:40,430 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35293] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-17 11:15:40,431 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35293] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-17 11:15:40,431 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35293] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-17 11:15:40,431 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35293] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-17 11:15:40,432 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35293] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-17 11:15:40,432 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35293] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-17 11:15:40,433 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35293] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-17 11:15:40,436 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35293] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-17 11:15:40,436 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35293] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-17 11:15:40,438 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35293] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-17 11:15:40,441 INFO [Listener at localhost/33721] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-17 11:15:40,441 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35293] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-17 11:15:40,443 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35293] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-17 11:15:40,443 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35293] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-17 11:15:40,445 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35293] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-17 11:15:40,447 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35293] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-17 11:15:40,449 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35293] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-17 11:15:40,449 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35293] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: 
/172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-17 11:15:40,451 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35293] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:35293] to rsgroup master 2023-07-17 11:15:40,451 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35293] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:35293 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-17 11:15:40,451 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35293] ipc.CallRunner(144): callId: 196 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:37560 deadline: 1689593740451, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:35293 is either offline or it does not exist. 2023-07-17 11:15:40,451 WARN [Listener at localhost/33721] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:35293 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:35293 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-17 11:15:40,453 INFO [Listener at localhost/33721] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-17 11:15:40,454 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35293] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-17 11:15:40,454 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35293] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-17 11:15:40,454 INFO [Listener at localhost/33721] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:32841, jenkins-hbase4.apache.org:32847, jenkins-hbase4.apache.org:32937, jenkins-hbase4.apache.org:40865], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-17 11:15:40,455 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35293] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-17 11:15:40,455 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35293] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-17 11:15:40,457 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35293] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-17 11:15:40,458 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35293] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-17 11:15:40,458 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35293] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-17 11:15:40,458 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35293] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-17 11:15:40,458 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35293] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-17 11:15:40,459 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35293] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-17 11:15:40,459 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35293] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-17 11:15:40,460 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35293] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-17 11:15:40,462 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35293] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-17 11:15:40,463 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35293] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-17 11:15:40,469 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35293] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-17 11:15:40,471 INFO [Listener at localhost/33721] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-17 11:15:40,472 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35293] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-17 11:15:40,473 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35293] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-17 11:15:40,474 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35293] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-17 11:15:40,475 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35293] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-17 11:15:40,476 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35293] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-17 11:15:40,478 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35293] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-17 11:15:40,478 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35293] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-17 11:15:40,480 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35293] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:35293] to rsgroup master 2023-07-17 11:15:40,480 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35293] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:35293 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-17 11:15:40,480 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35293] ipc.CallRunner(144): callId: 224 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:37560 deadline: 1689593740480, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:35293 is either offline or it does not exist. 2023-07-17 11:15:40,480 WARN [Listener at localhost/33721] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:35293 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:35293 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-17 11:15:40,482 INFO [Listener at localhost/33721] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-17 11:15:40,483 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35293] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-17 11:15:40,483 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35293] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-17 11:15:40,483 INFO [Listener at localhost/33721] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:32841, jenkins-hbase4.apache.org:32847, jenkins-hbase4.apache.org:32937, jenkins-hbase4.apache.org:40865], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-17 11:15:40,483 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35293] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-17 11:15:40,484 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35293] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-17 11:15:40,504 INFO [Listener at localhost/33721] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testGroupInfoMultiAccessing Thread=573 (was 572) - Thread LEAK? 
-, OpenFileDescriptor=840 (was 840), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=455 (was 455), ProcessCount=172 (was 172), AvailableMemoryMB=2730 (was 2730) 2023-07-17 11:15:40,504 WARN [Listener at localhost/33721] hbase.ResourceChecker(130): Thread=573 is superior to 500 2023-07-17 11:15:40,524 INFO [Listener at localhost/33721] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testNamespaceConstraint Thread=573, OpenFileDescriptor=840, MaxFileDescriptor=60000, SystemLoadAverage=455, ProcessCount=172, AvailableMemoryMB=2729 2023-07-17 11:15:40,524 WARN [Listener at localhost/33721] hbase.ResourceChecker(130): Thread=573 is superior to 500 2023-07-17 11:15:40,524 INFO [Listener at localhost/33721] rsgroup.TestRSGroupsBase(132): testNamespaceConstraint 2023-07-17 11:15:40,528 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35293] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-17 11:15:40,528 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35293] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-17 11:15:40,528 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35293] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-17 11:15:40,528 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35293] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-17 11:15:40,528 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35293] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-17 11:15:40,529 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35293] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-17 11:15:40,529 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35293] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-17 11:15:40,530 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35293] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-17 11:15:40,532 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35293] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-17 11:15:40,533 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35293] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-17 11:15:40,535 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35293] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-17 11:15:40,537 INFO [Listener at localhost/33721] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-17 11:15:40,537 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35293] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-17 11:15:40,539 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35293] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-17 11:15:40,539 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35293] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-17 11:15:40,540 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35293] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-17 11:15:40,542 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35293] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-17 11:15:40,544 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35293] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-17 11:15:40,544 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35293] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-17 11:15:40,546 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35293] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:35293] to rsgroup master 2023-07-17 11:15:40,546 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35293] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:35293 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-17 11:15:40,546 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35293] ipc.CallRunner(144): callId: 252 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:37560 deadline: 1689593740546, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:35293 is either offline or it does not exist. 2023-07-17 11:15:40,547 WARN [Listener at localhost/33721] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:35293 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:35293 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-17 11:15:40,548 INFO [Listener at localhost/33721] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-17 11:15:40,549 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35293] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-17 11:15:40,549 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35293] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-17 11:15:40,549 INFO [Listener at localhost/33721] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:32841, jenkins-hbase4.apache.org:32847, jenkins-hbase4.apache.org:32937, jenkins-hbase4.apache.org:40865], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-17 11:15:40,550 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35293] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-17 11:15:40,550 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35293] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-17 11:15:40,550 INFO [Listener at localhost/33721] rsgroup.TestRSGroupsAdmin1(154): testNamespaceConstraint 2023-07-17 11:15:40,550 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35293] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup Group_foo 2023-07-17 11:15:40,552 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35293] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_foo 2023-07-17 11:15:40,554 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35293] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-17 11:15:40,554 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35293] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-17 11:15:40,554 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35293] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-17 11:15:40,557 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35293] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-17 11:15:40,559 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35293] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-17 11:15:40,559 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35293] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-17 11:15:40,560 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35293] master.HMaster$15(3014): Client=jenkins//172.31.14.131 creating {NAME => 'Group_foo', hbase.rsgroup.name => 'Group_foo'} 2023-07-17 11:15:40,561 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35293] procedure2.ProcedureExecutor(1029): Stored pid=20, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=Group_foo 2023-07-17 11:15:40,564 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35293] master.MasterRpcServices(1230): Checking to see if procedure is done pid=20 2023-07-17 11:15:40,567 DEBUG [Listener at localhost/33721-EventThread] zookeeper.ZKWatcher(600): master:35293-0x10172fea3e40000, quorum=127.0.0.1:57231, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-17 11:15:40,569 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=20, state=SUCCESS; CreateNamespaceProcedure, namespace=Group_foo in 8 msec 2023-07-17 11:15:40,665 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35293] master.MasterRpcServices(1230): Checking to see if procedure is done pid=20 2023-07-17 11:15:40,665 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35293] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup Group_foo 2023-07-17 11:15:40,667 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35293] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup Group_foo is referenced by namespace: Group_foo at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.removeRSGroup(RSGroupAdminServer.java:504) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.removeRSGroup(RSGroupAdminEndpoint.java:278) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16208) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-17 11:15:40,667 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35293] ipc.CallRunner(144): callId: 268 service: MasterService methodName: ExecMasterService size: 91 connection: 172.31.14.131:37560 deadline: 1689593740665, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup Group_foo is referenced by namespace: Group_foo 2023-07-17 11:15:40,672 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35293] master.HMaster$16(3053): Client=jenkins//172.31.14.131 modify {NAME => 'Group_foo', hbase.rsgroup.name => 'Group_foo'} 2023-07-17 11:15:40,678 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35293] procedure2.ProcedureExecutor(1029): Stored pid=21, state=RUNNABLE:MODIFY_NAMESPACE_PREPARE; ModifyNamespaceProcedure, namespace=Group_foo 2023-07-17 11:15:40,684 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35293] master.MasterRpcServices(1230): Checking to see if procedure is done pid=21 2023-07-17 11:15:40,687 DEBUG [Listener at localhost/33721-EventThread] zookeeper.ZKWatcher(600): master:35293-0x10172fea3e40000, quorum=127.0.0.1:57231, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/Group_foo 2023-07-17 11:15:40,688 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=21, state=SUCCESS; ModifyNamespaceProcedure, namespace=Group_foo in 15 msec 2023-07-17 11:15:40,785 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35293] 
master.MasterRpcServices(1230): Checking to see if procedure is done pid=21 2023-07-17 11:15:40,786 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35293] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup Group_anotherGroup 2023-07-17 11:15:40,789 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35293] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_anotherGroup 2023-07-17 11:15:40,793 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35293] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-17 11:15:40,793 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35293] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_foo 2023-07-17 11:15:40,794 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35293] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-17 11:15:40,794 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35293] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-17 11:15:40,798 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35293] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-17 11:15:40,800 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35293] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-17 11:15:40,800 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35293] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-17 11:15:40,803 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35293] master.HMaster$17(3086): Client=jenkins//172.31.14.131 delete Group_foo 2023-07-17 11:15:40,805 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35293] procedure2.ProcedureExecutor(1029): Stored pid=22, state=RUNNABLE:DELETE_NAMESPACE_PREPARE; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-17 11:15:40,807 INFO [PEWorker-2] procedure.DeleteNamespaceProcedure(73): pid=22, state=RUNNABLE:DELETE_NAMESPACE_PREPARE, locked=true; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-17 11:15:40,810 INFO [PEWorker-2] procedure.DeleteNamespaceProcedure(73): pid=22, state=RUNNABLE:DELETE_NAMESPACE_DELETE_FROM_NS_TABLE, locked=true; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-17 11:15:40,811 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35293] master.MasterRpcServices(1230): Checking to see if procedure is done pid=22 2023-07-17 11:15:40,812 INFO [PEWorker-2] procedure.DeleteNamespaceProcedure(73): pid=22, state=RUNNABLE:DELETE_NAMESPACE_REMOVE_FROM_ZK, locked=true; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-17 11:15:40,813 DEBUG [Listener at localhost/33721-EventThread] zookeeper.ZKWatcher(600): master:35293-0x10172fea3e40000, quorum=127.0.0.1:57231, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/namespace/Group_foo 2023-07-17 11:15:40,813 DEBUG [Listener at localhost/33721-EventThread] zookeeper.ZKWatcher(600): master:35293-0x10172fea3e40000, quorum=127.0.0.1:57231, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-17 11:15:40,814 INFO [PEWorker-2] procedure.DeleteNamespaceProcedure(73): pid=22, 
state=RUNNABLE:DELETE_NAMESPACE_DELETE_DIRECTORIES, locked=true; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-17 11:15:40,823 INFO [PEWorker-2] procedure.DeleteNamespaceProcedure(73): pid=22, state=RUNNABLE:DELETE_NAMESPACE_REMOVE_NAMESPACE_QUOTA, locked=true; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-17 11:15:40,824 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=22, state=SUCCESS; DeleteNamespaceProcedure, namespace=Group_foo in 20 msec 2023-07-17 11:15:40,912 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35293] master.MasterRpcServices(1230): Checking to see if procedure is done pid=22 2023-07-17 11:15:40,912 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35293] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup Group_foo 2023-07-17 11:15:40,915 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35293] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_anotherGroup 2023-07-17 11:15:40,916 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35293] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-17 11:15:40,916 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35293] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-17 11:15:40,916 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35293] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 7 2023-07-17 11:15:40,918 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35293] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-17 11:15:40,920 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35293] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-17 11:15:40,920 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35293] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-17 11:15:40,922 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35293] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Region server group foo does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint.preCreateNamespace(RSGroupAdminEndpoint.java:591) at org.apache.hadoop.hbase.master.MasterCoprocessorHost$1.call(MasterCoprocessorHost.java:222) at org.apache.hadoop.hbase.master.MasterCoprocessorHost$1.call(MasterCoprocessorHost.java:219) at org.apache.hadoop.hbase.coprocessor.CoprocessorHost$ObserverOperationWithoutResult.callObserver(CoprocessorHost.java:558) at org.apache.hadoop.hbase.coprocessor.CoprocessorHost.execOperation(CoprocessorHost.java:631) at org.apache.hadoop.hbase.master.MasterCoprocessorHost.preCreateNamespace(MasterCoprocessorHost.java:219) at org.apache.hadoop.hbase.master.HMaster$15.run(HMaster.java:3010) at org.apache.hadoop.hbase.master.procedure.MasterProcedureUtil.submitProcedure(MasterProcedureUtil.java:132) at org.apache.hadoop.hbase.master.HMaster.createNamespace(HMaster.java:3007) at org.apache.hadoop.hbase.master.MasterRpcServices.createNamespace(MasterRpcServices.java:684) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-17 11:15:40,922 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35293] ipc.CallRunner(144): callId: 290 service: MasterService methodName: CreateNamespace size: 70 connection: 172.31.14.131:37560 deadline: 1689592600921, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Region server group foo does not exist. 2023-07-17 11:15:40,925 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35293] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-17 11:15:40,925 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35293] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-17 11:15:40,925 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35293] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-17 11:15:40,926 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35293] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-17 11:15:40,926 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35293] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-17 11:15:40,926 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35293] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-17 11:15:40,926 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35293] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-17 11:15:40,927 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35293] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup Group_anotherGroup 2023-07-17 11:15:40,929 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35293] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-17 11:15:40,930 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35293] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-17 11:15:40,930 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35293] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-17 11:15:40,932 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35293] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-17 11:15:40,933 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35293] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-17 11:15:40,933 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35293] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
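The surrounding entries are the per-method cleanup in TestRSGroupsBase.tearDownAfterMethod (named in the stack traces). A minimal sketch of that sequence follows, assuming the RSGroupAdminClient methods visible above (moveTables, moveServers, removeRSGroup, addRSGroup); the master address is passed in rather than discovered.

    import java.io.IOException;
    import java.util.Collections;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.constraint.ConstraintException;
    import org.apache.hadoop.hbase.net.Address;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

    public final class GroupCleanupSketch {
      static void cleanup(Connection conn, Address masterAddress) throws IOException {
        RSGroupAdminClient groups = new RSGroupAdminClient(conn);

        // Empty sets are ignored server-side: "moveTables() passed an empty set. Ignoring."
        groups.moveTables(Collections.emptySet(), "default");
        groups.moveServers(Collections.emptySet(), "default");

        // Drop the groups the test created, then recreate the bookkeeping "master" group.
        groups.removeRSGroup("Group_anotherGroup");
        groups.removeRSGroup("master");
        groups.addRSGroup("master");

        // The master process is not a live region server, so this move fails with
        // "Server ... is either offline or it does not exist."; the test only logs it as a WARN.
        try {
          groups.moveServers(Collections.singleton(masterAddress), "master");
        } catch (ConstraintException loggedAsFyi) {
          // expected on this cluster layout, see "Got this on setup, FYI" below
        }
      }
    }
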
2023-07-17 11:15:40,933 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35293] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-17 11:15:40,933 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35293] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-17 11:15:40,933 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35293] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-17 11:15:40,934 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35293] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-17 11:15:40,936 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35293] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-17 11:15:40,937 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35293] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-17 11:15:40,938 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35293] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-17 11:15:40,940 INFO [Listener at localhost/33721] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-17 11:15:40,940 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35293] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-17 11:15:40,942 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35293] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-17 11:15:40,942 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35293] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-17 11:15:40,944 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35293] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-17 11:15:40,945 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35293] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-17 11:15:40,947 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35293] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-17 11:15:40,947 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35293] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-17 11:15:40,948 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35293] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:35293] to rsgroup master 2023-07-17 11:15:40,948 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35293] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:35293 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-17 11:15:40,949 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35293] ipc.CallRunner(144): callId: 320 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:37560 deadline: 1689593740948, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:35293 is either offline or it does not exist. 2023-07-17 11:15:40,949 WARN [Listener at localhost/33721] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:35293 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:35293 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-17 11:15:40,951 INFO [Listener at localhost/33721] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-17 11:15:40,951 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35293] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-17 11:15:40,951 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35293] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-17 11:15:40,951 INFO [Listener at localhost/33721] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:32841, jenkins-hbase4.apache.org:32847, jenkins-hbase4.apache.org:32937, jenkins-hbase4.apache.org:40865], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-17 11:15:40,952 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35293] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-17 11:15:40,952 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35293] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-17 11:15:40,970 INFO [Listener at localhost/33721] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testNamespaceConstraint Thread=573 (was 573), OpenFileDescriptor=840 (was 840), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=455 (was 455), ProcessCount=172 (was 172), AvailableMemoryMB=2729 (was 2729) 2023-07-17 11:15:40,970 WARN [Listener at localhost/33721] hbase.ResourceChecker(130): Thread=573 is superior to 500 2023-07-17 11:15:40,970 INFO [Listener at localhost/33721] hbase.HBaseTestingUtility(1286): Shutting down minicluster 2023-07-17 11:15:40,970 INFO [Listener at localhost/33721] client.ConnectionImplementation(1979): Closing master protocol: MasterService 2023-07-17 11:15:40,970 DEBUG [Listener at localhost/33721] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x10a31e76 to 127.0.0.1:57231 2023-07-17 11:15:40,970 DEBUG [Listener at localhost/33721] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-17 11:15:40,970 DEBUG [Listener at localhost/33721] util.JVMClusterUtil(237): Shutting down HBase Cluster 2023-07-17 
11:15:40,970 DEBUG [Listener at localhost/33721] util.JVMClusterUtil(257): Found active master hash=8986248, stopped=false 2023-07-17 11:15:40,971 DEBUG [Listener at localhost/33721] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint 2023-07-17 11:15:40,971 DEBUG [Listener at localhost/33721] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver 2023-07-17 11:15:40,971 INFO [Listener at localhost/33721] master.ServerManager(901): Cluster shutdown requested of master=jenkins-hbase4.apache.org,35293,1689592537329 2023-07-17 11:15:40,974 DEBUG [Listener at localhost/33721-EventThread] zookeeper.ZKWatcher(600): regionserver:40865-0x10172fea3e40001, quorum=127.0.0.1:57231, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-17 11:15:40,974 DEBUG [Listener at localhost/33721-EventThread] zookeeper.ZKWatcher(600): regionserver:32847-0x10172fea3e40003, quorum=127.0.0.1:57231, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-17 11:15:40,974 DEBUG [Listener at localhost/33721-EventThread] zookeeper.ZKWatcher(600): regionserver:32937-0x10172fea3e4000b, quorum=127.0.0.1:57231, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-17 11:15:40,974 INFO [Listener at localhost/33721] procedure2.ProcedureExecutor(629): Stopping 2023-07-17 11:15:40,974 DEBUG [Listener at localhost/33721-EventThread] zookeeper.ZKWatcher(600): master:35293-0x10172fea3e40000, quorum=127.0.0.1:57231, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-17 11:15:40,974 DEBUG [Listener at localhost/33721-EventThread] zookeeper.ZKWatcher(600): master:35293-0x10172fea3e40000, quorum=127.0.0.1:57231, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-17 11:15:40,974 DEBUG [Listener at localhost/33721-EventThread] zookeeper.ZKWatcher(600): regionserver:32841-0x10172fea3e40002, quorum=127.0.0.1:57231, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-17 11:15:40,974 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:32847-0x10172fea3e40003, quorum=127.0.0.1:57231, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-17 11:15:40,974 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:40865-0x10172fea3e40001, quorum=127.0.0.1:57231, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-17 11:15:40,974 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:32937-0x10172fea3e4000b, quorum=127.0.0.1:57231, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-17 11:15:40,974 DEBUG [Listener at localhost/33721] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x658fd9e7 to 127.0.0.1:57231 2023-07-17 11:15:40,975 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:35293-0x10172fea3e40000, quorum=127.0.0.1:57231, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-17 11:15:40,975 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:32841-0x10172fea3e40002, quorum=127.0.0.1:57231, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 
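The "Shutting down minicluster" sequence that begins here is driven by the class-level teardown; a minimal sketch of its assumed structure, using the HBaseTestingUtility shutdown call that produces the STOPPING/STOPPED region-server messages that follow:

    import org.apache.hadoop.hbase.HBaseTestingUtility;
    import org.junit.AfterClass;

    public class ShutdownSketch {
      private static final HBaseTestingUtility TEST_UTIL = new HBaseTestingUtility();

      @AfterClass
      public static void tearDown() throws Exception {
        // Stops masters, region servers, ZK and DFS started for the mini cluster.
        TEST_UTIL.shutdownMiniCluster();
      }
    }
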
2023-07-17 11:15:40,975 DEBUG [Listener at localhost/33721] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-17 11:15:40,975 INFO [Listener at localhost/33721] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,40865,1689592537384' ***** 2023-07-17 11:15:40,975 INFO [Listener at localhost/33721] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-17 11:15:40,975 INFO [Listener at localhost/33721] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,32841,1689592537425' ***** 2023-07-17 11:15:40,975 INFO [Listener at localhost/33721] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-17 11:15:40,975 INFO [RS:0;jenkins-hbase4:40865] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-17 11:15:40,975 INFO [RS:1;jenkins-hbase4:32841] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-17 11:15:40,975 INFO [Listener at localhost/33721] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,32847,1689592537460' ***** 2023-07-17 11:15:40,980 INFO [Listener at localhost/33721] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-17 11:15:40,980 INFO [Listener at localhost/33721] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,32937,1689592538839' ***** 2023-07-17 11:15:40,980 INFO [Listener at localhost/33721] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-17 11:15:40,980 INFO [RS:2;jenkins-hbase4:32847] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-17 11:15:40,980 INFO [RS:1;jenkins-hbase4:32841] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@4ed58f27{regionserver,/,null,STOPPED}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver} 2023-07-17 11:15:40,980 INFO [RS:0;jenkins-hbase4:40865] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@6433591e{regionserver,/,null,STOPPED}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver} 2023-07-17 11:15:40,980 INFO [RS:3;jenkins-hbase4:32937] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-17 11:15:40,983 INFO [RS:1;jenkins-hbase4:32841] server.AbstractConnector(383): Stopped ServerConnector@2cbc5b97{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-17 11:15:40,983 INFO [RS:0;jenkins-hbase4:40865] server.AbstractConnector(383): Stopped ServerConnector@75805bd3{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-17 11:15:40,983 INFO [RS:2;jenkins-hbase4:32847] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@30b69b2e{regionserver,/,null,STOPPED}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver} 2023-07-17 11:15:40,984 INFO [RS:0;jenkins-hbase4:40865] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-17 11:15:40,984 INFO [RS:1;jenkins-hbase4:32841] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-17 11:15:40,985 INFO [RS:3;jenkins-hbase4:32937] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@3b37ce6b{regionserver,/,null,STOPPED}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver} 2023-07-17 11:15:40,985 INFO [RS:0;jenkins-hbase4:40865] handler.ContextHandler(1159): Stopped 
o.a.h.t.o.e.j.s.ServletContextHandler@6c9b85fb{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,STOPPED} 2023-07-17 11:15:40,985 INFO [RS:2;jenkins-hbase4:32847] server.AbstractConnector(383): Stopped ServerConnector@32b1a200{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-17 11:15:40,987 INFO [RS:2;jenkins-hbase4:32847] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-17 11:15:40,987 INFO [RS:3;jenkins-hbase4:32937] server.AbstractConnector(383): Stopped ServerConnector@1ae039d1{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-17 11:15:40,988 INFO [RS:3;jenkins-hbase4:32937] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-17 11:15:40,987 INFO [RS:0;jenkins-hbase4:40865] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@585d37a7{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/e4f4a903-c5b5-3e76-c76e-62a71f0612cc/hadoop.log.dir/,STOPPED} 2023-07-17 11:15:40,988 INFO [RS:2;jenkins-hbase4:32847] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@2b102eb4{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,STOPPED} 2023-07-17 11:15:40,989 INFO [RS:3;jenkins-hbase4:32937] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@587bdcb4{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,STOPPED} 2023-07-17 11:15:40,987 INFO [RS:1;jenkins-hbase4:32841] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@631c36f1{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,STOPPED} 2023-07-17 11:15:40,990 INFO [RS:2;jenkins-hbase4:32847] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@2d0d8e68{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/e4f4a903-c5b5-3e76-c76e-62a71f0612cc/hadoop.log.dir/,STOPPED} 2023-07-17 11:15:40,991 INFO [RS:1;jenkins-hbase4:32841] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@319792b3{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/e4f4a903-c5b5-3e76-c76e-62a71f0612cc/hadoop.log.dir/,STOPPED} 2023-07-17 11:15:40,991 INFO [RS:0;jenkins-hbase4:40865] regionserver.HeapMemoryManager(220): Stopping 2023-07-17 11:15:40,991 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-17 11:15:40,992 INFO [RS:1;jenkins-hbase4:32841] regionserver.HeapMemoryManager(220): Stopping 2023-07-17 11:15:40,992 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-17 11:15:40,991 INFO [RS:3;jenkins-hbase4:32937] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@13a4782d{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/e4f4a903-c5b5-3e76-c76e-62a71f0612cc/hadoop.log.dir/,STOPPED} 2023-07-17 11:15:40,992 INFO [RS:1;jenkins-hbase4:32841] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 
2023-07-17 11:15:40,991 INFO [RS:0;jenkins-hbase4:40865] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-17 11:15:40,995 INFO [RS:0;jenkins-hbase4:40865] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-17 11:15:40,995 INFO [RS:0;jenkins-hbase4:40865] regionserver.HRegionServer(3305): Received CLOSE for 0d62c7aed3c64e14669f4471870b3501 2023-07-17 11:15:40,995 INFO [RS:2;jenkins-hbase4:32847] regionserver.HeapMemoryManager(220): Stopping 2023-07-17 11:15:40,995 INFO [RS:1;jenkins-hbase4:32841] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-17 11:15:41,001 INFO [RS:1;jenkins-hbase4:32841] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,32841,1689592537425 2023-07-17 11:15:41,001 DEBUG [RS:1;jenkins-hbase4:32841] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x66f06976 to 127.0.0.1:57231 2023-07-17 11:15:41,001 DEBUG [RS:1;jenkins-hbase4:32841] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-17 11:15:41,001 INFO [RS:1;jenkins-hbase4:32841] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-17 11:15:41,001 INFO [RS:1;jenkins-hbase4:32841] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-17 11:15:41,001 INFO [RS:1;jenkins-hbase4:32841] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-17 11:15:41,001 INFO [RS:1;jenkins-hbase4:32841] regionserver.HRegionServer(3305): Received CLOSE for 1588230740 2023-07-17 11:15:41,010 INFO [RS:0;jenkins-hbase4:40865] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,40865,1689592537384 2023-07-17 11:15:41,010 DEBUG [RS:0;jenkins-hbase4:40865] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x5b87e223 to 127.0.0.1:57231 2023-07-17 11:15:41,010 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 0d62c7aed3c64e14669f4471870b3501, disabling compactions & flushes 2023-07-17 11:15:41,010 DEBUG [RS:0;jenkins-hbase4:40865] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-17 11:15:41,010 INFO [RS:1;jenkins-hbase4:32841] regionserver.HRegionServer(1474): Waiting on 1 regions to close 2023-07-17 11:15:41,010 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1689592538083.0d62c7aed3c64e14669f4471870b3501. 2023-07-17 11:15:41,010 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-17 11:15:41,010 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1689592538083.0d62c7aed3c64e14669f4471870b3501. 2023-07-17 11:15:41,010 INFO [RS:2;jenkins-hbase4:32847] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-17 11:15:41,011 INFO [RS:2;jenkins-hbase4:32847] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 
2023-07-17 11:15:41,011 INFO [RS:2;jenkins-hbase4:32847] regionserver.HRegionServer(3305): Received CLOSE for 2b4777bce162636fdf4ee754f19471f7 2023-07-17 11:15:41,011 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-17 11:15:41,011 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-17 11:15:41,011 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-17 11:15:41,011 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-17 11:15:41,010 DEBUG [RS:1;jenkins-hbase4:32841] regionserver.HRegionServer(1478): Online Regions={1588230740=hbase:meta,,1.1588230740} 2023-07-17 11:15:41,010 INFO [RS:0;jenkins-hbase4:40865] regionserver.HRegionServer(1474): Waiting on 1 regions to close 2023-07-17 11:15:41,011 DEBUG [RS:1;jenkins-hbase4:32841] regionserver.HRegionServer(1504): Waiting on 1588230740 2023-07-17 11:15:41,011 DEBUG [RS:0;jenkins-hbase4:40865] regionserver.HRegionServer(1478): Online Regions={0d62c7aed3c64e14669f4471870b3501=hbase:namespace,,1689592538083.0d62c7aed3c64e14669f4471870b3501.} 2023-07-17 11:15:41,011 INFO [RS:2;jenkins-hbase4:32847] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,32847,1689592537460 2023-07-17 11:15:41,011 DEBUG [RS:0;jenkins-hbase4:40865] regionserver.HRegionServer(1504): Waiting on 0d62c7aed3c64e14669f4471870b3501 2023-07-17 11:15:41,011 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-17 11:15:41,011 INFO [RS:3;jenkins-hbase4:32937] regionserver.HeapMemoryManager(220): Stopping 2023-07-17 11:15:41,011 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=4.51 KB heapSize=8.81 KB 2023-07-17 11:15:41,012 INFO [RS:3;jenkins-hbase4:32937] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-17 11:15:41,011 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1689592538083.0d62c7aed3c64e14669f4471870b3501. after waiting 0 ms 2023-07-17 11:15:41,012 INFO [RS:3;jenkins-hbase4:32937] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-17 11:15:41,012 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-17 11:15:41,011 DEBUG [RS:2;jenkins-hbase4:32847] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x6facd029 to 127.0.0.1:57231 2023-07-17 11:15:41,011 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 2b4777bce162636fdf4ee754f19471f7, disabling compactions & flushes 2023-07-17 11:15:41,012 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1689592538246.2b4777bce162636fdf4ee754f19471f7. 2023-07-17 11:15:41,012 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1689592538246.2b4777bce162636fdf4ee754f19471f7. 
2023-07-17 11:15:41,012 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1689592538246.2b4777bce162636fdf4ee754f19471f7. after waiting 0 ms 2023-07-17 11:15:41,012 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1689592538246.2b4777bce162636fdf4ee754f19471f7. 2023-07-17 11:15:41,012 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 2b4777bce162636fdf4ee754f19471f7 1/1 column families, dataSize=6.43 KB heapSize=10.63 KB 2023-07-17 11:15:41,012 DEBUG [RS:2;jenkins-hbase4:32847] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-17 11:15:41,012 INFO [RS:2;jenkins-hbase4:32847] regionserver.HRegionServer(1474): Waiting on 1 regions to close 2023-07-17 11:15:41,012 DEBUG [RS:2;jenkins-hbase4:32847] regionserver.HRegionServer(1478): Online Regions={2b4777bce162636fdf4ee754f19471f7=hbase:rsgroup,,1689592538246.2b4777bce162636fdf4ee754f19471f7.} 2023-07-17 11:15:41,013 DEBUG [RS:2;jenkins-hbase4:32847] regionserver.HRegionServer(1504): Waiting on 2b4777bce162636fdf4ee754f19471f7 2023-07-17 11:15:41,012 INFO [RS:3;jenkins-hbase4:32937] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,32937,1689592538839 2023-07-17 11:15:41,012 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1689592538083.0d62c7aed3c64e14669f4471870b3501. 2023-07-17 11:15:41,013 DEBUG [RS:3;jenkins-hbase4:32937] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x48bd9ead to 127.0.0.1:57231 2023-07-17 11:15:41,013 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 0d62c7aed3c64e14669f4471870b3501 1/1 column families, dataSize=267 B heapSize=904 B 2023-07-17 11:15:41,013 DEBUG [RS:3;jenkins-hbase4:32937] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-17 11:15:41,013 INFO [RS:3;jenkins-hbase4:32937] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,32937,1689592538839; all regions closed. 2023-07-17 11:15:41,025 DEBUG [RS:3;jenkins-hbase4:32937] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/1c20e0e4-e5a0-9ede-ceba-95bc6751880b/oldWALs 2023-07-17 11:15:41,025 INFO [RS:3;jenkins-hbase4:32937] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C32937%2C1689592538839:(num 1689592539046) 2023-07-17 11:15:41,026 DEBUG [RS:3;jenkins-hbase4:32937] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-17 11:15:41,026 INFO [RS:3;jenkins-hbase4:32937] regionserver.LeaseManager(133): Closed leases 2023-07-17 11:15:41,027 INFO [RS:3;jenkins-hbase4:32937] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-07-17 11:15:41,027 INFO [RS:3;jenkins-hbase4:32937] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-17 11:15:41,027 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-17 11:15:41,027 INFO [RS:3;jenkins-hbase4:32937] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-17 11:15:41,027 INFO [RS:3;jenkins-hbase4:32937] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 
2023-07-17 11:15:41,027 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-17 11:15:41,030 INFO [RS:3;jenkins-hbase4:32937] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:32937 2023-07-17 11:15:41,043 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=267 B at sequenceid=9 (bloomFilter=true), to=hdfs://localhost:35473/user/jenkins/test-data/1c20e0e4-e5a0-9ede-ceba-95bc6751880b/data/hbase/namespace/0d62c7aed3c64e14669f4471870b3501/.tmp/info/6e1d01c1c75042a28e94a770ec7c43ed 2023-07-17 11:15:41,050 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=6.43 KB at sequenceid=29 (bloomFilter=true), to=hdfs://localhost:35473/user/jenkins/test-data/1c20e0e4-e5a0-9ede-ceba-95bc6751880b/data/hbase/rsgroup/2b4777bce162636fdf4ee754f19471f7/.tmp/m/fc959f3a136e4e8991ab97a46fd439c0 2023-07-17 11:15:41,052 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=4.01 KB at sequenceid=26 (bloomFilter=false), to=hdfs://localhost:35473/user/jenkins/test-data/1c20e0e4-e5a0-9ede-ceba-95bc6751880b/data/hbase/meta/1588230740/.tmp/info/2307dc0784504c03aff26822efa7e2d5 2023-07-17 11:15:41,055 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 6e1d01c1c75042a28e94a770ec7c43ed 2023-07-17 11:15:41,059 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 2307dc0784504c03aff26822efa7e2d5 2023-07-17 11:15:41,059 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for fc959f3a136e4e8991ab97a46fd439c0 2023-07-17 11:15:41,060 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:35473/user/jenkins/test-data/1c20e0e4-e5a0-9ede-ceba-95bc6751880b/data/hbase/rsgroup/2b4777bce162636fdf4ee754f19471f7/.tmp/m/fc959f3a136e4e8991ab97a46fd439c0 as hdfs://localhost:35473/user/jenkins/test-data/1c20e0e4-e5a0-9ede-ceba-95bc6751880b/data/hbase/rsgroup/2b4777bce162636fdf4ee754f19471f7/m/fc959f3a136e4e8991ab97a46fd439c0 2023-07-17 11:15:41,061 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:35473/user/jenkins/test-data/1c20e0e4-e5a0-9ede-ceba-95bc6751880b/data/hbase/namespace/0d62c7aed3c64e14669f4471870b3501/.tmp/info/6e1d01c1c75042a28e94a770ec7c43ed as hdfs://localhost:35473/user/jenkins/test-data/1c20e0e4-e5a0-9ede-ceba-95bc6751880b/data/hbase/namespace/0d62c7aed3c64e14669f4471870b3501/info/6e1d01c1c75042a28e94a770ec7c43ed 2023-07-17 11:15:41,068 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-17 11:15:41,070 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 6e1d01c1c75042a28e94a770ec7c43ed 2023-07-17 11:15:41,070 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for fc959f3a136e4e8991ab97a46fd439c0 2023-07-17 11:15:41,070 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): 
Added hdfs://localhost:35473/user/jenkins/test-data/1c20e0e4-e5a0-9ede-ceba-95bc6751880b/data/hbase/namespace/0d62c7aed3c64e14669f4471870b3501/info/6e1d01c1c75042a28e94a770ec7c43ed, entries=3, sequenceid=9, filesize=5.0 K 2023-07-17 11:15:41,070 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:35473/user/jenkins/test-data/1c20e0e4-e5a0-9ede-ceba-95bc6751880b/data/hbase/rsgroup/2b4777bce162636fdf4ee754f19471f7/m/fc959f3a136e4e8991ab97a46fd439c0, entries=12, sequenceid=29, filesize=5.4 K 2023-07-17 11:15:41,072 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~6.43 KB/6586, heapSize ~10.61 KB/10864, currentSize=0 B/0 for 2b4777bce162636fdf4ee754f19471f7 in 60ms, sequenceid=29, compaction requested=false 2023-07-17 11:15:41,072 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~267 B/267, heapSize ~888 B/888, currentSize=0 B/0 for 0d62c7aed3c64e14669f4471870b3501 in 59ms, sequenceid=9, compaction requested=false 2023-07-17 11:15:41,073 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-17 11:15:41,073 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-17 11:15:41,115 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=82 B at sequenceid=26 (bloomFilter=false), to=hdfs://localhost:35473/user/jenkins/test-data/1c20e0e4-e5a0-9ede-ceba-95bc6751880b/data/hbase/meta/1588230740/.tmp/rep_barrier/d158e2c2a8dc44d6bf394caef2e156d5 2023-07-17 11:15:41,115 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:35473/user/jenkins/test-data/1c20e0e4-e5a0-9ede-ceba-95bc6751880b/data/hbase/namespace/0d62c7aed3c64e14669f4471870b3501/recovered.edits/12.seqid, newMaxSeqId=12, maxSeqId=1 2023-07-17 11:15:41,116 DEBUG [Listener at localhost/33721-EventThread] zookeeper.ZKWatcher(600): regionserver:32841-0x10172fea3e40002, quorum=127.0.0.1:57231, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,32937,1689592538839 2023-07-17 11:15:41,116 DEBUG [Listener at localhost/33721-EventThread] zookeeper.ZKWatcher(600): regionserver:40865-0x10172fea3e40001, quorum=127.0.0.1:57231, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,32937,1689592538839 2023-07-17 11:15:41,116 DEBUG [Listener at localhost/33721-EventThread] zookeeper.ZKWatcher(600): regionserver:32937-0x10172fea3e4000b, quorum=127.0.0.1:57231, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,32937,1689592538839 2023-07-17 11:15:41,116 DEBUG [Listener at localhost/33721-EventThread] zookeeper.ZKWatcher(600): regionserver:40865-0x10172fea3e40001, quorum=127.0.0.1:57231, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-17 11:15:41,116 DEBUG [Listener at localhost/33721-EventThread] zookeeper.ZKWatcher(600): regionserver:32937-0x10172fea3e4000b, quorum=127.0.0.1:57231, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-17 11:15:41,116 DEBUG [Listener at localhost/33721-EventThread] zookeeper.ZKWatcher(600): 
regionserver:32841-0x10172fea3e40002, quorum=127.0.0.1:57231, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-17 11:15:41,116 DEBUG [Listener at localhost/33721-EventThread] zookeeper.ZKWatcher(600): regionserver:32847-0x10172fea3e40003, quorum=127.0.0.1:57231, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,32937,1689592538839 2023-07-17 11:15:41,116 DEBUG [Listener at localhost/33721-EventThread] zookeeper.ZKWatcher(600): regionserver:32847-0x10172fea3e40003, quorum=127.0.0.1:57231, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-17 11:15:41,116 DEBUG [Listener at localhost/33721-EventThread] zookeeper.ZKWatcher(600): master:35293-0x10172fea3e40000, quorum=127.0.0.1:57231, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-17 11:15:41,116 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:35473/user/jenkins/test-data/1c20e0e4-e5a0-9ede-ceba-95bc6751880b/data/hbase/rsgroup/2b4777bce162636fdf4ee754f19471f7/recovered.edits/32.seqid, newMaxSeqId=32, maxSeqId=1 2023-07-17 11:15:41,118 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:namespace,,1689592538083.0d62c7aed3c64e14669f4471870b3501. 2023-07-17 11:15:41,118 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 0d62c7aed3c64e14669f4471870b3501: 2023-07-17 11:15:41,118 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:namespace,,1689592538083.0d62c7aed3c64e14669f4471870b3501. 2023-07-17 11:15:41,118 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-17 11:15:41,120 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1689592538246.2b4777bce162636fdf4ee754f19471f7. 2023-07-17 11:15:41,120 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 2b4777bce162636fdf4ee754f19471f7: 2023-07-17 11:15:41,120 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:rsgroup,,1689592538246.2b4777bce162636fdf4ee754f19471f7. 
2023-07-17 11:15:41,125 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for d158e2c2a8dc44d6bf394caef2e156d5 2023-07-17 11:15:41,142 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=428 B at sequenceid=26 (bloomFilter=false), to=hdfs://localhost:35473/user/jenkins/test-data/1c20e0e4-e5a0-9ede-ceba-95bc6751880b/data/hbase/meta/1588230740/.tmp/table/0a4a5c9854654e0f8901873bbc70f773 2023-07-17 11:15:41,147 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 0a4a5c9854654e0f8901873bbc70f773 2023-07-17 11:15:41,148 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:35473/user/jenkins/test-data/1c20e0e4-e5a0-9ede-ceba-95bc6751880b/data/hbase/meta/1588230740/.tmp/info/2307dc0784504c03aff26822efa7e2d5 as hdfs://localhost:35473/user/jenkins/test-data/1c20e0e4-e5a0-9ede-ceba-95bc6751880b/data/hbase/meta/1588230740/info/2307dc0784504c03aff26822efa7e2d5 2023-07-17 11:15:41,154 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 2307dc0784504c03aff26822efa7e2d5 2023-07-17 11:15:41,154 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:35473/user/jenkins/test-data/1c20e0e4-e5a0-9ede-ceba-95bc6751880b/data/hbase/meta/1588230740/info/2307dc0784504c03aff26822efa7e2d5, entries=22, sequenceid=26, filesize=7.3 K 2023-07-17 11:15:41,155 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:35473/user/jenkins/test-data/1c20e0e4-e5a0-9ede-ceba-95bc6751880b/data/hbase/meta/1588230740/.tmp/rep_barrier/d158e2c2a8dc44d6bf394caef2e156d5 as hdfs://localhost:35473/user/jenkins/test-data/1c20e0e4-e5a0-9ede-ceba-95bc6751880b/data/hbase/meta/1588230740/rep_barrier/d158e2c2a8dc44d6bf394caef2e156d5 2023-07-17 11:15:41,160 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for d158e2c2a8dc44d6bf394caef2e156d5 2023-07-17 11:15:41,160 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:35473/user/jenkins/test-data/1c20e0e4-e5a0-9ede-ceba-95bc6751880b/data/hbase/meta/1588230740/rep_barrier/d158e2c2a8dc44d6bf394caef2e156d5, entries=1, sequenceid=26, filesize=4.9 K 2023-07-17 11:15:41,161 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:35473/user/jenkins/test-data/1c20e0e4-e5a0-9ede-ceba-95bc6751880b/data/hbase/meta/1588230740/.tmp/table/0a4a5c9854654e0f8901873bbc70f773 as hdfs://localhost:35473/user/jenkins/test-data/1c20e0e4-e5a0-9ede-ceba-95bc6751880b/data/hbase/meta/1588230740/table/0a4a5c9854654e0f8901873bbc70f773 2023-07-17 11:15:41,166 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 0a4a5c9854654e0f8901873bbc70f773 2023-07-17 11:15:41,166 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:35473/user/jenkins/test-data/1c20e0e4-e5a0-9ede-ceba-95bc6751880b/data/hbase/meta/1588230740/table/0a4a5c9854654e0f8901873bbc70f773, 
entries=6, sequenceid=26, filesize=5.1 K 2023-07-17 11:15:41,167 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~4.51 KB/4614, heapSize ~8.77 KB/8976, currentSize=0 B/0 for 1588230740 in 156ms, sequenceid=26, compaction requested=false 2023-07-17 11:15:41,176 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:35473/user/jenkins/test-data/1c20e0e4-e5a0-9ede-ceba-95bc6751880b/data/hbase/meta/1588230740/recovered.edits/29.seqid, newMaxSeqId=29, maxSeqId=1 2023-07-17 11:15:41,176 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-17 11:15:41,177 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-17 11:15:41,177 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-17 11:15:41,177 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:meta,,1.1588230740 2023-07-17 11:15:41,211 INFO [RS:1;jenkins-hbase4:32841] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,32841,1689592537425; all regions closed. 2023-07-17 11:15:41,211 INFO [RS:0;jenkins-hbase4:40865] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,40865,1689592537384; all regions closed. 2023-07-17 11:15:41,213 INFO [RS:2;jenkins-hbase4:32847] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,32847,1689592537460; all regions closed. 2023-07-17 11:15:41,216 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,32937,1689592538839] 2023-07-17 11:15:41,216 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,32937,1689592538839; numProcessing=1 2023-07-17 11:15:41,219 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,32937,1689592538839 already deleted, retry=false 2023-07-17 11:15:41,219 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,32937,1689592538839 expired; onlineServers=3 2023-07-17 11:15:41,220 DEBUG [RS:0;jenkins-hbase4:40865] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/1c20e0e4-e5a0-9ede-ceba-95bc6751880b/oldWALs 2023-07-17 11:15:41,220 INFO [RS:0;jenkins-hbase4:40865] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C40865%2C1689592537384:(num 1689592537895) 2023-07-17 11:15:41,220 DEBUG [RS:0;jenkins-hbase4:40865] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-17 11:15:41,221 DEBUG [RS:1;jenkins-hbase4:32841] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/1c20e0e4-e5a0-9ede-ceba-95bc6751880b/oldWALs 2023-07-17 11:15:41,221 INFO [RS:0;jenkins-hbase4:40865] regionserver.LeaseManager(133): Closed leases 2023-07-17 11:15:41,221 INFO [RS:1;jenkins-hbase4:32841] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C32841%2C1689592537425.meta:.meta(num 1689592538009) 2023-07-17 11:15:41,221 INFO [RS:0;jenkins-hbase4:40865] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, 
ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-07-17 11:15:41,221 INFO [RS:0;jenkins-hbase4:40865] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-17 11:15:41,221 INFO [RS:0;jenkins-hbase4:40865] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-17 11:15:41,221 INFO [RS:0;jenkins-hbase4:40865] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-17 11:15:41,221 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-17 11:15:41,221 DEBUG [RS:2;jenkins-hbase4:32847] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/1c20e0e4-e5a0-9ede-ceba-95bc6751880b/oldWALs 2023-07-17 11:15:41,221 INFO [RS:2;jenkins-hbase4:32847] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C32847%2C1689592537460:(num 1689592537895) 2023-07-17 11:15:41,221 DEBUG [RS:2;jenkins-hbase4:32847] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-17 11:15:41,222 INFO [RS:2;jenkins-hbase4:32847] regionserver.LeaseManager(133): Closed leases 2023-07-17 11:15:41,222 INFO [RS:0;jenkins-hbase4:40865] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:40865 2023-07-17 11:15:41,223 INFO [RS:2;jenkins-hbase4:32847] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-07-17 11:15:41,223 INFO [RS:2;jenkins-hbase4:32847] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-17 11:15:41,223 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-17 11:15:41,223 INFO [RS:2;jenkins-hbase4:32847] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-17 11:15:41,223 INFO [RS:2;jenkins-hbase4:32847] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-17 11:15:41,224 INFO [RS:2;jenkins-hbase4:32847] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:32847 2023-07-17 11:15:41,232 DEBUG [RS:1;jenkins-hbase4:32841] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/1c20e0e4-e5a0-9ede-ceba-95bc6751880b/oldWALs 2023-07-17 11:15:41,232 INFO [RS:1;jenkins-hbase4:32841] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C32841%2C1689592537425:(num 1689592537899) 2023-07-17 11:15:41,232 DEBUG [RS:1;jenkins-hbase4:32841] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-17 11:15:41,232 INFO [RS:1;jenkins-hbase4:32841] regionserver.LeaseManager(133): Closed leases 2023-07-17 11:15:41,232 INFO [RS:1;jenkins-hbase4:32841] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-07-17 11:15:41,232 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-17 11:15:41,233 INFO [RS:1;jenkins-hbase4:32841] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:32841 2023-07-17 11:15:41,318 INFO [RS:3;jenkins-hbase4:32937] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,32937,1689592538839; zookeeper connection closed. 
2023-07-17 11:15:41,318 DEBUG [Listener at localhost/33721-EventThread] zookeeper.ZKWatcher(600): regionserver:32937-0x10172fea3e4000b, quorum=127.0.0.1:57231, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-17 11:15:41,319 DEBUG [Listener at localhost/33721-EventThread] zookeeper.ZKWatcher(600): regionserver:32937-0x10172fea3e4000b, quorum=127.0.0.1:57231, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-17 11:15:41,319 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@6d1e8911] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@6d1e8911 2023-07-17 11:15:41,320 DEBUG [Listener at localhost/33721-EventThread] zookeeper.ZKWatcher(600): regionserver:32841-0x10172fea3e40002, quorum=127.0.0.1:57231, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,32841,1689592537425 2023-07-17 11:15:41,320 DEBUG [Listener at localhost/33721-EventThread] zookeeper.ZKWatcher(600): master:35293-0x10172fea3e40000, quorum=127.0.0.1:57231, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-17 11:15:41,320 DEBUG [Listener at localhost/33721-EventThread] zookeeper.ZKWatcher(600): regionserver:32841-0x10172fea3e40002, quorum=127.0.0.1:57231, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,40865,1689592537384 2023-07-17 11:15:41,320 DEBUG [Listener at localhost/33721-EventThread] zookeeper.ZKWatcher(600): regionserver:32841-0x10172fea3e40002, quorum=127.0.0.1:57231, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,32847,1689592537460 2023-07-17 11:15:41,320 DEBUG [Listener at localhost/33721-EventThread] zookeeper.ZKWatcher(600): regionserver:32847-0x10172fea3e40003, quorum=127.0.0.1:57231, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,32841,1689592537425 2023-07-17 11:15:41,320 DEBUG [Listener at localhost/33721-EventThread] zookeeper.ZKWatcher(600): regionserver:32847-0x10172fea3e40003, quorum=127.0.0.1:57231, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,40865,1689592537384 2023-07-17 11:15:41,320 DEBUG [Listener at localhost/33721-EventThread] zookeeper.ZKWatcher(600): regionserver:32847-0x10172fea3e40003, quorum=127.0.0.1:57231, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,32847,1689592537460 2023-07-17 11:15:41,320 DEBUG [Listener at localhost/33721-EventThread] zookeeper.ZKWatcher(600): regionserver:40865-0x10172fea3e40001, quorum=127.0.0.1:57231, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,32841,1689592537425 2023-07-17 11:15:41,320 DEBUG [Listener at localhost/33721-EventThread] zookeeper.ZKWatcher(600): regionserver:40865-0x10172fea3e40001, quorum=127.0.0.1:57231, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,40865,1689592537384 2023-07-17 11:15:41,320 DEBUG [Listener at localhost/33721-EventThread] zookeeper.ZKWatcher(600): regionserver:40865-0x10172fea3e40001, quorum=127.0.0.1:57231, baseZNode=/hbase 
Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,32847,1689592537460 2023-07-17 11:15:41,321 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,32841,1689592537425] 2023-07-17 11:15:41,321 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,32841,1689592537425; numProcessing=2 2023-07-17 11:15:41,323 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,32841,1689592537425 already deleted, retry=false 2023-07-17 11:15:41,323 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,32841,1689592537425 expired; onlineServers=2 2023-07-17 11:15:41,324 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,32847,1689592537460] 2023-07-17 11:15:41,324 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,32847,1689592537460; numProcessing=3 2023-07-17 11:15:41,325 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,32847,1689592537460 already deleted, retry=false 2023-07-17 11:15:41,325 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,32847,1689592537460 expired; onlineServers=1 2023-07-17 11:15:41,325 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,40865,1689592537384] 2023-07-17 11:15:41,325 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,40865,1689592537384; numProcessing=4 2023-07-17 11:15:41,326 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,40865,1689592537384 already deleted, retry=false 2023-07-17 11:15:41,326 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,40865,1689592537384 expired; onlineServers=0 2023-07-17 11:15:41,326 INFO [RegionServerTracker-0] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,35293,1689592537329' ***** 2023-07-17 11:15:41,326 INFO [RegionServerTracker-0] regionserver.HRegionServer(2311): STOPPED: Cluster shutdown set; onlineServer=0 2023-07-17 11:15:41,326 DEBUG [M:0;jenkins-hbase4:35293] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@4b69a41, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-17 11:15:41,326 INFO [M:0;jenkins-hbase4:35293] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-17 11:15:41,329 INFO [M:0;jenkins-hbase4:35293] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@26f18ca9{master,/,null,STOPPED}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/master} 2023-07-17 11:15:41,329 DEBUG [Listener at localhost/33721-EventThread] zookeeper.ZKWatcher(600): master:35293-0x10172fea3e40000, quorum=127.0.0.1:57231, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/master 
2023-07-17 11:15:41,330 DEBUG [Listener at localhost/33721-EventThread] zookeeper.ZKWatcher(600): master:35293-0x10172fea3e40000, quorum=127.0.0.1:57231, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-17 11:15:41,330 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:35293-0x10172fea3e40000, quorum=127.0.0.1:57231, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-17 11:15:41,330 INFO [M:0;jenkins-hbase4:35293] server.AbstractConnector(383): Stopped ServerConnector@1c50d6e9{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-17 11:15:41,330 INFO [M:0;jenkins-hbase4:35293] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-17 11:15:41,331 INFO [M:0;jenkins-hbase4:35293] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@7d808172{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,STOPPED} 2023-07-17 11:15:41,331 INFO [M:0;jenkins-hbase4:35293] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@5e814646{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/e4f4a903-c5b5-3e76-c76e-62a71f0612cc/hadoop.log.dir/,STOPPED} 2023-07-17 11:15:41,332 INFO [M:0;jenkins-hbase4:35293] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,35293,1689592537329 2023-07-17 11:15:41,332 INFO [M:0;jenkins-hbase4:35293] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,35293,1689592537329; all regions closed. 2023-07-17 11:15:41,332 DEBUG [M:0;jenkins-hbase4:35293] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-17 11:15:41,332 INFO [M:0;jenkins-hbase4:35293] master.HMaster(1491): Stopping master jetty server 2023-07-17 11:15:41,333 INFO [M:0;jenkins-hbase4:35293] server.AbstractConnector(383): Stopped ServerConnector@25b5d85a{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-17 11:15:41,333 DEBUG [M:0;jenkins-hbase4:35293] cleaner.LogCleaner(198): Cancelling LogCleaner 2023-07-17 11:15:41,333 DEBUG [M:0;jenkins-hbase4:35293] cleaner.HFileCleaner(317): Stopping file delete threads 2023-07-17 11:15:41,333 WARN [OldWALsCleaner-0] cleaner.LogCleaner(186): Interrupted while cleaning old WALs, will try to clean it next round. Exiting. 2023-07-17 11:15:41,333 INFO [M:0;jenkins-hbase4:35293] master.MasterMobCompactionThread(168): Waiting for Mob Compaction Thread to finish... 2023-07-17 11:15:41,333 INFO [M:0;jenkins-hbase4:35293] master.MasterMobCompactionThread(168): Waiting for Region Server Mob Compaction Thread to finish... 
2023-07-17 11:15:41,333 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1689592537650] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1689592537650,5,FailOnTimeoutGroup] 2023-07-17 11:15:41,333 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1689592537650] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1689592537650,5,FailOnTimeoutGroup] 2023-07-17 11:15:41,333 INFO [M:0;jenkins-hbase4:35293] hbase.ChoreService(369): Chore service for: master/jenkins-hbase4:0 had [] on shutdown 2023-07-17 11:15:41,333 DEBUG [M:0;jenkins-hbase4:35293] master.HMaster(1512): Stopping service threads 2023-07-17 11:15:41,333 INFO [M:0;jenkins-hbase4:35293] procedure2.RemoteProcedureDispatcher(119): Stopping procedure remote dispatcher 2023-07-17 11:15:41,334 ERROR [M:0;jenkins-hbase4:35293] procedure2.ProcedureExecutor(653): ThreadGroup java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] contains running threads; null: See STDOUT java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] 2023-07-17 11:15:41,334 INFO [M:0;jenkins-hbase4:35293] region.RegionProcedureStore(113): Stopping the Region Procedure Store, isAbort=false 2023-07-17 11:15:41,334 DEBUG [normalizer-worker-0] normalizer.RegionNormalizerWorker(174): interrupt detected. terminating. 2023-07-17 11:15:41,334 DEBUG [M:0;jenkins-hbase4:35293] zookeeper.ZKUtil(398): master:35293-0x10172fea3e40000, quorum=127.0.0.1:57231, baseZNode=/hbase Unable to get data of znode /hbase/master because node does not exist (not an error) 2023-07-17 11:15:41,334 WARN [M:0;jenkins-hbase4:35293] master.ActiveMasterManager(326): Failed get of master address: java.io.IOException: Can't get master address from ZooKeeper; znode data == null 2023-07-17 11:15:41,334 INFO [M:0;jenkins-hbase4:35293] assignment.AssignmentManager(315): Stopping assignment manager 2023-07-17 11:15:41,334 INFO [M:0;jenkins-hbase4:35293] region.MasterRegion(167): Closing local region {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, isAbort=false 2023-07-17 11:15:41,334 DEBUG [M:0;jenkins-hbase4:35293] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-07-17 11:15:41,335 INFO [M:0;jenkins-hbase4:35293] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-17 11:15:41,335 DEBUG [M:0;jenkins-hbase4:35293] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-17 11:15:41,335 DEBUG [M:0;jenkins-hbase4:35293] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-07-17 11:15:41,335 DEBUG [M:0;jenkins-hbase4:35293] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
2023-07-17 11:15:41,335 INFO [M:0;jenkins-hbase4:35293] regionserver.HRegion(2745): Flushing 1595e783b53d99cd5eef43b6debb2682 1/1 column families, dataSize=76.20 KB heapSize=90.66 KB 2023-07-17 11:15:41,349 INFO [M:0;jenkins-hbase4:35293] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=76.20 KB at sequenceid=175 (bloomFilter=true), to=hdfs://localhost:35473/user/jenkins/test-data/1c20e0e4-e5a0-9ede-ceba-95bc6751880b/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/abacb2ae02b542ff9e12d955e70ac53b 2023-07-17 11:15:41,355 DEBUG [M:0;jenkins-hbase4:35293] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:35473/user/jenkins/test-data/1c20e0e4-e5a0-9ede-ceba-95bc6751880b/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/abacb2ae02b542ff9e12d955e70ac53b as hdfs://localhost:35473/user/jenkins/test-data/1c20e0e4-e5a0-9ede-ceba-95bc6751880b/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/abacb2ae02b542ff9e12d955e70ac53b 2023-07-17 11:15:41,360 INFO [M:0;jenkins-hbase4:35293] regionserver.HStore(1080): Added hdfs://localhost:35473/user/jenkins/test-data/1c20e0e4-e5a0-9ede-ceba-95bc6751880b/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/abacb2ae02b542ff9e12d955e70ac53b, entries=22, sequenceid=175, filesize=11.1 K 2023-07-17 11:15:41,361 INFO [M:0;jenkins-hbase4:35293] regionserver.HRegion(2948): Finished flush of dataSize ~76.20 KB/78030, heapSize ~90.64 KB/92816, currentSize=0 B/0 for 1595e783b53d99cd5eef43b6debb2682 in 26ms, sequenceid=175, compaction requested=false 2023-07-17 11:15:41,365 INFO [M:0;jenkins-hbase4:35293] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-17 11:15:41,365 DEBUG [M:0;jenkins-hbase4:35293] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-17 11:15:41,369 INFO [M:0;jenkins-hbase4:35293] flush.MasterFlushTableProcedureManager(83): stop: server shutting down. 2023-07-17 11:15:41,369 INFO [master:store-WAL-Roller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-17 11:15:41,370 INFO [M:0;jenkins-hbase4:35293] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:35293 2023-07-17 11:15:41,371 DEBUG [M:0;jenkins-hbase4:35293] zookeeper.RecoverableZooKeeper(172): Node /hbase/rs/jenkins-hbase4.apache.org,35293,1689592537329 already deleted, retry=false 2023-07-17 11:15:41,474 DEBUG [Listener at localhost/33721-EventThread] zookeeper.ZKWatcher(600): master:35293-0x10172fea3e40000, quorum=127.0.0.1:57231, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-17 11:15:41,474 INFO [M:0;jenkins-hbase4:35293] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,35293,1689592537329; zookeeper connection closed. 2023-07-17 11:15:41,474 DEBUG [Listener at localhost/33721-EventThread] zookeeper.ZKWatcher(600): master:35293-0x10172fea3e40000, quorum=127.0.0.1:57231, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-17 11:15:41,574 DEBUG [Listener at localhost/33721-EventThread] zookeeper.ZKWatcher(600): regionserver:40865-0x10172fea3e40001, quorum=127.0.0.1:57231, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-17 11:15:41,574 INFO [RS:0;jenkins-hbase4:40865] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,40865,1689592537384; zookeeper connection closed. 
2023-07-17 11:15:41,574 DEBUG [Listener at localhost/33721-EventThread] zookeeper.ZKWatcher(600): regionserver:40865-0x10172fea3e40001, quorum=127.0.0.1:57231, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-17 11:15:41,575 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@25878a0f] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@25878a0f 2023-07-17 11:15:41,675 DEBUG [Listener at localhost/33721-EventThread] zookeeper.ZKWatcher(600): regionserver:32841-0x10172fea3e40002, quorum=127.0.0.1:57231, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-17 11:15:41,675 INFO [RS:1;jenkins-hbase4:32841] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,32841,1689592537425; zookeeper connection closed. 2023-07-17 11:15:41,675 DEBUG [Listener at localhost/33721-EventThread] zookeeper.ZKWatcher(600): regionserver:32841-0x10172fea3e40002, quorum=127.0.0.1:57231, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-17 11:15:41,675 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@4287fbcb] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@4287fbcb 2023-07-17 11:15:41,775 DEBUG [Listener at localhost/33721-EventThread] zookeeper.ZKWatcher(600): regionserver:32847-0x10172fea3e40003, quorum=127.0.0.1:57231, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-17 11:15:41,775 INFO [RS:2;jenkins-hbase4:32847] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,32847,1689592537460; zookeeper connection closed. 2023-07-17 11:15:41,775 DEBUG [Listener at localhost/33721-EventThread] zookeeper.ZKWatcher(600): regionserver:32847-0x10172fea3e40003, quorum=127.0.0.1:57231, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-17 11:15:41,775 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@78e20fcc] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@78e20fcc 2023-07-17 11:15:41,775 INFO [Listener at localhost/33721] util.JVMClusterUtil(335): Shutdown of 1 master(s) and 4 regionserver(s) complete 2023-07-17 11:15:41,776 WARN [Listener at localhost/33721] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-17 11:15:41,779 INFO [Listener at localhost/33721] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-17 11:15:41,883 WARN [BP-2000149786-172.31.14.131-1689592536481 heartbeating to localhost/127.0.0.1:35473] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-17 11:15:41,884 WARN [BP-2000149786-172.31.14.131-1689592536481 heartbeating to localhost/127.0.0.1:35473] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-2000149786-172.31.14.131-1689592536481 (Datanode Uuid 5c71ddd5-7b68-4d56-b9ea-2cad1c8ea6e6) service to localhost/127.0.0.1:35473 2023-07-17 11:15:41,884 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/e4f4a903-c5b5-3e76-c76e-62a71f0612cc/cluster_7f27a7fe-51d2-c427-02ab-5b6cae838d2a/dfs/data/data5/current/BP-2000149786-172.31.14.131-1689592536481] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 
2023-07-17 11:15:41,884 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/e4f4a903-c5b5-3e76-c76e-62a71f0612cc/cluster_7f27a7fe-51d2-c427-02ab-5b6cae838d2a/dfs/data/data6/current/BP-2000149786-172.31.14.131-1689592536481] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-17 11:15:41,886 WARN [Listener at localhost/33721] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-17 11:15:41,889 INFO [Listener at localhost/33721] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-17 11:15:41,992 WARN [BP-2000149786-172.31.14.131-1689592536481 heartbeating to localhost/127.0.0.1:35473] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-17 11:15:41,992 WARN [BP-2000149786-172.31.14.131-1689592536481 heartbeating to localhost/127.0.0.1:35473] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-2000149786-172.31.14.131-1689592536481 (Datanode Uuid f5374cce-db2a-4335-95cf-460dc7ce1306) service to localhost/127.0.0.1:35473 2023-07-17 11:15:41,993 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/e4f4a903-c5b5-3e76-c76e-62a71f0612cc/cluster_7f27a7fe-51d2-c427-02ab-5b6cae838d2a/dfs/data/data3/current/BP-2000149786-172.31.14.131-1689592536481] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-17 11:15:41,993 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/e4f4a903-c5b5-3e76-c76e-62a71f0612cc/cluster_7f27a7fe-51d2-c427-02ab-5b6cae838d2a/dfs/data/data4/current/BP-2000149786-172.31.14.131-1689592536481] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-17 11:15:41,994 WARN [Listener at localhost/33721] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-17 11:15:41,998 INFO [Listener at localhost/33721] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-17 11:15:42,101 WARN [BP-2000149786-172.31.14.131-1689592536481 heartbeating to localhost/127.0.0.1:35473] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-17 11:15:42,101 WARN [BP-2000149786-172.31.14.131-1689592536481 heartbeating to localhost/127.0.0.1:35473] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-2000149786-172.31.14.131-1689592536481 (Datanode Uuid 659054a7-3fdf-4d87-9a5c-31929022026e) service to localhost/127.0.0.1:35473 2023-07-17 11:15:42,102 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/e4f4a903-c5b5-3e76-c76e-62a71f0612cc/cluster_7f27a7fe-51d2-c427-02ab-5b6cae838d2a/dfs/data/data1/current/BP-2000149786-172.31.14.131-1689592536481] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-17 11:15:42,103 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/e4f4a903-c5b5-3e76-c76e-62a71f0612cc/cluster_7f27a7fe-51d2-c427-02ab-5b6cae838d2a/dfs/data/data2/current/BP-2000149786-172.31.14.131-1689592536481] fs.CachingGetSpaceUsed$RefreshThread(183): 
Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-17 11:15:42,112 INFO [Listener at localhost/33721] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-17 11:15:42,228 INFO [Listener at localhost/33721] zookeeper.MiniZooKeeperCluster(344): Shutdown MiniZK cluster with all ZK servers 2023-07-17 11:15:42,259 INFO [Listener at localhost/33721] hbase.HBaseTestingUtility(1293): Minicluster is down
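
For orientation, the lines above are the ordinary teardown path of an HBase minicluster test: each region server flushes and closes its regions, rolls its WAL into oldWALs, stops its RPC server, and has its ZooKeeper ephemeral node removed; the master then flushes its local master:store region and stops; finally the datanodes and the MiniZK cluster are shut down. Below is a minimal, hypothetical JUnit sketch of the lifecycle that would produce this kind of log. The class and test names are invented for illustration; only the HBaseTestingUtility calls (startMiniCluster, getAdmin, shutdownMiniCluster) are real HBase test APIs.

import org.apache.hadoop.hbase.HBaseTestingUtility;
import org.junit.AfterClass;
import org.junit.Assert;
import org.junit.BeforeClass;
import org.junit.Test;

// Hypothetical sketch; not the actual TestRSGroupsAdmin1 test class.
public class MiniClusterLifecycleSketch {

  private static final HBaseTestingUtility TEST_UTIL = new HBaseTestingUtility();

  @BeforeClass
  public static void setUp() throws Exception {
    // Starts DFS, ZooKeeper, the master, and the requested region servers.
    TEST_UTIL.startMiniCluster(3);
  }

  @AfterClass
  public static void tearDown() throws Exception {
    // Triggers the shutdown sequence recorded above: regions flush and close,
    // WALs are moved to oldWALs, RPC servers stop, ZK ephemeral nodes are
    // deleted, then the master, datanodes, and MiniZK cluster stop in turn.
    TEST_UTIL.shutdownMiniCluster();
  }

  @Test
  public void clusterIsUp() throws Exception {
    // Simple liveness check against the running minicluster.
    Assert.assertTrue(
        TEST_UTIL.getAdmin().getClusterMetrics().getLiveServerMetrics().size() >= 3);
  }
}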