2023-11-27 04:59:30,774 INFO [main] hbase.HBaseClassTestRule(94): Test class org.apache.hadoop.hbase.quotas.TestClusterScopeQuotaThrottle timeout: 13 mins 2023-11-27 04:59:30,995 DEBUG [main] hbase.HBaseTestingUtility(342): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/f5a53189-4093-05d0-d726-fe3db62bb765 2023-11-27 04:59:31,007 INFO [Time-limited test] hbase.HBaseTestingUtility(1068): Starting up minicluster with option: StartMiniClusterOption{numMasters=1, masterClass=null, numRegionServers=2, rsPorts=, rsClass=null, numDataNodes=2, dataNodeHosts=null, numZkServers=1, createRootDir=false, createWALDir=false} 2023-11-27 04:59:31,008 INFO [Time-limited test] hbase.HBaseZKTestingUtility(82): Created new mini-cluster data directory: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/f5a53189-4093-05d0-d726-fe3db62bb765/cluster_add3a103-f85c-f802-b55b-e84b80447762, deleteOnExit=true 2023-11-27 04:59:31,008 INFO [Time-limited test] hbase.HBaseTestingUtility(1082): STARTING DFS 2023-11-27 04:59:31,008 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting test.cache.data to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/f5a53189-4093-05d0-d726-fe3db62bb765/test.cache.data in system properties and HBase conf 2023-11-27 04:59:31,009 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting hadoop.tmp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/f5a53189-4093-05d0-d726-fe3db62bb765/hadoop.tmp.dir in system properties and HBase conf 2023-11-27 04:59:31,009 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting hadoop.log.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/f5a53189-4093-05d0-d726-fe3db62bb765/hadoop.log.dir in system properties and HBase conf 2023-11-27 04:59:31,009 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.local.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/f5a53189-4093-05d0-d726-fe3db62bb765/mapreduce.cluster.local.dir in system properties and HBase conf 2023-11-27 04:59:31,010 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.temp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/f5a53189-4093-05d0-d726-fe3db62bb765/mapreduce.cluster.temp.dir in system properties and HBase conf 2023-11-27 04:59:31,010 INFO [Time-limited test] hbase.HBaseTestingUtility(759): read short circuit is OFF 2023-11-27 04:59:31,119 WARN [Time-limited test] util.NativeCodeLoader(62): Unable to load native-hadoop library for your platform... using builtin-java classes where applicable 2023-11-27 04:59:31,510 DEBUG [Time-limited test] fs.HFileSystem(308): The file system is not a DistributedFileSystem. 
Skipping on block location reordering 2023-11-27 04:59:31,515 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.node-labels.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/f5a53189-4093-05d0-d726-fe3db62bb765/yarn.node-labels.fs-store.root-dir in system properties and HBase conf 2023-11-27 04:59:31,516 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.node-attribute.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/f5a53189-4093-05d0-d726-fe3db62bb765/yarn.node-attribute.fs-store.root-dir in system properties and HBase conf 2023-11-27 04:59:31,516 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.log-dirs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/f5a53189-4093-05d0-d726-fe3db62bb765/yarn.nodemanager.log-dirs in system properties and HBase conf 2023-11-27 04:59:31,517 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/f5a53189-4093-05d0-d726-fe3db62bb765/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-11-27 04:59:31,517 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.active-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/f5a53189-4093-05d0-d726-fe3db62bb765/yarn.timeline-service.entity-group-fs-store.active-dir in system properties and HBase conf 2023-11-27 04:59:31,518 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.done-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/f5a53189-4093-05d0-d726-fe3db62bb765/yarn.timeline-service.entity-group-fs-store.done-dir in system properties and HBase conf 2023-11-27 04:59:31,518 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/f5a53189-4093-05d0-d726-fe3db62bb765/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-11-27 04:59:31,519 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/f5a53189-4093-05d0-d726-fe3db62bb765/dfs.journalnode.edits.dir in system properties and HBase conf 2023-11-27 04:59:31,519 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting dfs.datanode.shared.file.descriptor.paths to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/f5a53189-4093-05d0-d726-fe3db62bb765/dfs.datanode.shared.file.descriptor.paths in system properties and HBase conf 2023-11-27 04:59:31,520 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting nfs.dump.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/f5a53189-4093-05d0-d726-fe3db62bb765/nfs.dump.dir in system properties and HBase conf 2023-11-27 04:59:31,520 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting java.io.tmpdir to 
/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/f5a53189-4093-05d0-d726-fe3db62bb765/java.io.tmpdir in system properties and HBase conf 2023-11-27 04:59:31,521 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/f5a53189-4093-05d0-d726-fe3db62bb765/dfs.journalnode.edits.dir in system properties and HBase conf 2023-11-27 04:59:31,521 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting dfs.provided.aliasmap.inmemory.leveldb.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/f5a53189-4093-05d0-d726-fe3db62bb765/dfs.provided.aliasmap.inmemory.leveldb.dir in system properties and HBase conf 2023-11-27 04:59:31,521 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting fs.s3a.committer.staging.tmp.path to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/f5a53189-4093-05d0-d726-fe3db62bb765/fs.s3a.committer.staging.tmp.path in system properties and HBase conf Formatting using clusterid: testClusterID 2023-11-27 04:59:32,027 WARN [Time-limited test] conf.Configuration(1701): No unit for dfs.heartbeat.interval(3) assuming SECONDS 2023-11-27 04:59:32,032 WARN [Time-limited test] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-11-27 04:59:32,315 WARN [Time-limited test] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-namenode.properties,hadoop-metrics2.properties 2023-11-27 04:59:32,474 INFO [Time-limited test] log.Slf4jLog(67): Logging to org.slf4j.impl.Reload4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog 2023-11-27 04:59:32,491 WARN [Time-limited test] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-11-27 04:59:32,526 INFO [Time-limited test] log.Slf4jLog(67): jetty-6.1.26 2023-11-27 04:59:32,559 INFO [Time-limited test] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/hdfs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/f5a53189-4093-05d0-d726-fe3db62bb765/java.io.tmpdir/Jetty_localhost_36915_hdfs____.2syph8/webapp 2023-11-27 04:59:32,684 INFO [Time-limited test] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:36915 2023-11-27 04:59:32,694 WARN [Time-limited test] conf.Configuration(1701): No unit for dfs.heartbeat.interval(3) assuming SECONDS 2023-11-27 04:59:32,694 WARN [Time-limited test] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-11-27 04:59:33,137 WARN [Listener at localhost/41015] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-11-27 04:59:33,213 WARN [Listener at localhost/41015] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-11-27 04:59:33,232 WARN [Listener at localhost/41015] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-11-27 04:59:33,239 INFO [Listener at localhost/41015] log.Slf4jLog(67): jetty-6.1.26 2023-11-27 04:59:33,243 INFO [Listener at localhost/41015] log.Slf4jLog(67): Extract 
jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/f5a53189-4093-05d0-d726-fe3db62bb765/java.io.tmpdir/Jetty_localhost_36261_datanode____5z407l/webapp 2023-11-27 04:59:33,341 INFO [Listener at localhost/41015] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:36261 2023-11-27 04:59:33,642 WARN [Listener at localhost/45283] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-11-27 04:59:33,655 WARN [Listener at localhost/45283] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-11-27 04:59:33,664 WARN [Listener at localhost/45283] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-11-27 04:59:33,666 INFO [Listener at localhost/45283] log.Slf4jLog(67): jetty-6.1.26 2023-11-27 04:59:33,673 INFO [Listener at localhost/45283] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/f5a53189-4093-05d0-d726-fe3db62bb765/java.io.tmpdir/Jetty_localhost_38813_datanode____spxcn0/webapp 2023-11-27 04:59:33,777 INFO [Listener at localhost/45283] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:38813 2023-11-27 04:59:33,787 WARN [Listener at localhost/34689] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-11-27 04:59:34,103 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x5891443e6e536214: Processing first storage report for DS-8e8c4241-d25c-4419-8408-7d31707c4cd1 from datanode bc6defe3-ddb7-41a8-9f9f-c1c82a0e91b4 2023-11-27 04:59:34,105 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x5891443e6e536214: from storage DS-8e8c4241-d25c-4419-8408-7d31707c4cd1 node DatanodeRegistration(127.0.0.1:35723, datanodeUuid=bc6defe3-ddb7-41a8-9f9f-c1c82a0e91b4, infoPort=37599, infoSecurePort=0, ipcPort=34689, storageInfo=lv=-57;cid=testClusterID;nsid=1503096511;c=1701061172104), blocks: 0, hasStaleStorage: true, processing time: 2 msecs, invalidatedBlocks: 0 2023-11-27 04:59:34,105 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x5524a6e92ff847f7: Processing first storage report for DS-ee024098-c42c-48f3-a34a-47e38fee1b14 from datanode 2c11d29c-8eed-4c69-99ab-c1ad7d4faff2 2023-11-27 04:59:34,106 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x5524a6e92ff847f7: from storage DS-ee024098-c42c-48f3-a34a-47e38fee1b14 node DatanodeRegistration(127.0.0.1:40543, datanodeUuid=2c11d29c-8eed-4c69-99ab-c1ad7d4faff2, infoPort=44915, infoSecurePort=0, ipcPort=45283, storageInfo=lv=-57;cid=testClusterID;nsid=1503096511;c=1701061172104), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-11-27 04:59:34,106 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x5891443e6e536214: Processing first storage report for DS-9a01c428-e624-423a-905a-164e1f0d283f from datanode 
bc6defe3-ddb7-41a8-9f9f-c1c82a0e91b4 2023-11-27 04:59:34,106 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x5891443e6e536214: from storage DS-9a01c428-e624-423a-905a-164e1f0d283f node DatanodeRegistration(127.0.0.1:35723, datanodeUuid=bc6defe3-ddb7-41a8-9f9f-c1c82a0e91b4, infoPort=37599, infoSecurePort=0, ipcPort=34689, storageInfo=lv=-57;cid=testClusterID;nsid=1503096511;c=1701061172104), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-11-27 04:59:34,106 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x5524a6e92ff847f7: Processing first storage report for DS-e550cb0f-a7fe-44aa-89d5-9ef7ea7c68a4 from datanode 2c11d29c-8eed-4c69-99ab-c1ad7d4faff2 2023-11-27 04:59:34,106 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x5524a6e92ff847f7: from storage DS-e550cb0f-a7fe-44aa-89d5-9ef7ea7c68a4 node DatanodeRegistration(127.0.0.1:40543, datanodeUuid=2c11d29c-8eed-4c69-99ab-c1ad7d4faff2, infoPort=44915, infoSecurePort=0, ipcPort=45283, storageInfo=lv=-57;cid=testClusterID;nsid=1503096511;c=1701061172104), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-11-27 04:59:34,169 DEBUG [Listener at localhost/34689] hbase.HBaseTestingUtility(649): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/f5a53189-4093-05d0-d726-fe3db62bb765 2023-11-27 04:59:34,276 INFO [Listener at localhost/34689] zookeeper.MiniZooKeeperCluster(258): Started connectionTimeout=30000, dir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/f5a53189-4093-05d0-d726-fe3db62bb765/cluster_add3a103-f85c-f802-b55b-e84b80447762/zookeeper_0, clientPort=50029, secureClientPort=-1, dataDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/f5a53189-4093-05d0-d726-fe3db62bb765/cluster_add3a103-f85c-f802-b55b-e84b80447762/zookeeper_0/version-2, dataDirSize=457 dataLogDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/f5a53189-4093-05d0-d726-fe3db62bb765/cluster_add3a103-f85c-f802-b55b-e84b80447762/zookeeper_0/version-2, dataLogSize=457 tickTime=2000, maxClientCnxns=300, minSessionTimeout=4000, maxSessionTimeout=40000, clientPortListenBacklog=-1, serverId=0 2023-11-27 04:59:34,290 INFO [Listener at localhost/34689] zookeeper.MiniZooKeeperCluster(283): Started MiniZooKeeperCluster and ran 'stat' on client port=50029 2023-11-27 04:59:34,301 INFO [Listener at localhost/34689] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-11-27 04:59:34,304 INFO [Listener at localhost/34689] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-11-27 04:59:34,978 INFO [Listener at localhost/34689] util.FSUtils(471): Created version file at hdfs://localhost:41015/user/jenkins/test-data/88c861d8-82fa-b545-1769-68eb426e44d0 with version=8 2023-11-27 04:59:34,978 INFO [Listener at localhost/34689] hbase.HBaseTestingUtility(1406): Setting hbase.fs.tmp.dir to hdfs://localhost:41015/user/jenkins/test-data/88c861d8-82fa-b545-1769-68eb426e44d0/hbase-staging 2023-11-27 04:59:35,290 INFO [Listener at localhost/34689] 
metrics.MetricRegistriesLoader(60): Loaded MetricRegistries class org.apache.hadoop.hbase.metrics.impl.MetricRegistriesImpl 2023-11-27 04:59:35,743 INFO [Listener at localhost/34689] client.ConnectionUtils(126): master/jenkins-hbase4:0 server-side Connection retries=45 2023-11-27 04:59:35,774 INFO [Listener at localhost/34689] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-11-27 04:59:35,775 INFO [Listener at localhost/34689] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-11-27 04:59:35,775 INFO [Listener at localhost/34689] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-11-27 04:59:35,775 INFO [Listener at localhost/34689] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-11-27 04:59:35,775 INFO [Listener at localhost/34689] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-11-27 04:59:35,919 INFO [Listener at localhost/34689] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.MasterService, hbase.pb.RegionServerStatusService, hbase.pb.LockService, hbase.pb.HbckService, hbase.pb.ClientMetaService, hbase.pb.ClientService, hbase.pb.AdminService 2023-11-27 04:59:35,990 DEBUG [Listener at localhost/34689] util.ClassSize(228): Using Unsafe to estimate memory layout 2023-11-27 04:59:36,084 INFO [Listener at localhost/34689] ipc.NettyRpcServer(120): Bind to /172.31.14.131:33323 2023-11-27 04:59:36,093 INFO [Listener at localhost/34689] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-11-27 04:59:36,095 INFO [Listener at localhost/34689] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-11-27 04:59:36,119 INFO [Listener at localhost/34689] zookeeper.RecoverableZooKeeper(93): Process identifier=master:33323 connecting to ZooKeeper ensemble=127.0.0.1:50029 2023-11-27 04:59:36,159 DEBUG [Listener at localhost/34689-EventThread] zookeeper.ZKWatcher(600): master:333230x0, quorum=127.0.0.1:50029, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-11-27 04:59:36,161 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): master:33323-0x1002d35880e0000 connected 2023-11-27 04:59:36,193 DEBUG [Listener at localhost/34689] zookeeper.ZKUtil(165): master:33323-0x1002d35880e0000, quorum=127.0.0.1:50029, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-11-27 04:59:36,194 DEBUG [Listener at localhost/34689] zookeeper.ZKUtil(165): master:33323-0x1002d35880e0000, quorum=127.0.0.1:50029, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-11-27 04:59:36,197 DEBUG [Listener at localhost/34689] zookeeper.ZKUtil(165): master:33323-0x1002d35880e0000, quorum=127.0.0.1:50029, baseZNode=/hbase Set watcher on znode that does not yet exist, 
/hbase/acl 2023-11-27 04:59:36,205 DEBUG [Listener at localhost/34689] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=33323 2023-11-27 04:59:36,205 DEBUG [Listener at localhost/34689] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=33323 2023-11-27 04:59:36,205 DEBUG [Listener at localhost/34689] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=33323 2023-11-27 04:59:36,206 DEBUG [Listener at localhost/34689] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=33323 2023-11-27 04:59:36,206 DEBUG [Listener at localhost/34689] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=33323 2023-11-27 04:59:36,212 INFO [Listener at localhost/34689] master.HMaster(444): hbase.rootdir=hdfs://localhost:41015/user/jenkins/test-data/88c861d8-82fa-b545-1769-68eb426e44d0, hbase.cluster.distributed=false 2023-11-27 04:59:36,280 INFO [Listener at localhost/34689] client.ConnectionUtils(126): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-11-27 04:59:36,280 INFO [Listener at localhost/34689] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-11-27 04:59:36,280 INFO [Listener at localhost/34689] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-11-27 04:59:36,280 INFO [Listener at localhost/34689] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-11-27 04:59:36,281 INFO [Listener at localhost/34689] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-11-27 04:59:36,281 INFO [Listener at localhost/34689] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-11-27 04:59:36,285 INFO [Listener at localhost/34689] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-11-27 04:59:36,288 INFO [Listener at localhost/34689] ipc.NettyRpcServer(120): Bind to /172.31.14.131:41853 2023-11-27 04:59:36,290 INFO [Listener at localhost/34689] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-11-27 04:59:36,296 DEBUG [Listener at localhost/34689] mob.MobFileCache(121): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-11-27 04:59:36,297 INFO [Listener at localhost/34689] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-11-27 04:59:36,299 INFO [Listener at localhost/34689] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-11-27 04:59:36,301 INFO [Listener at localhost/34689] zookeeper.RecoverableZooKeeper(93): Process 
identifier=regionserver:41853 connecting to ZooKeeper ensemble=127.0.0.1:50029 2023-11-27 04:59:36,305 DEBUG [Listener at localhost/34689-EventThread] zookeeper.ZKWatcher(600): regionserver:418530x0, quorum=127.0.0.1:50029, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-11-27 04:59:36,306 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:41853-0x1002d35880e0001 connected 2023-11-27 04:59:36,306 DEBUG [Listener at localhost/34689] zookeeper.ZKUtil(165): regionserver:41853-0x1002d35880e0001, quorum=127.0.0.1:50029, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-11-27 04:59:36,307 DEBUG [Listener at localhost/34689] zookeeper.ZKUtil(165): regionserver:41853-0x1002d35880e0001, quorum=127.0.0.1:50029, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-11-27 04:59:36,308 DEBUG [Listener at localhost/34689] zookeeper.ZKUtil(165): regionserver:41853-0x1002d35880e0001, quorum=127.0.0.1:50029, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-11-27 04:59:36,309 DEBUG [Listener at localhost/34689] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=41853 2023-11-27 04:59:36,309 DEBUG [Listener at localhost/34689] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=41853 2023-11-27 04:59:36,309 DEBUG [Listener at localhost/34689] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=41853 2023-11-27 04:59:36,310 DEBUG [Listener at localhost/34689] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=41853 2023-11-27 04:59:36,310 DEBUG [Listener at localhost/34689] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=41853 2023-11-27 04:59:36,323 INFO [Listener at localhost/34689] client.ConnectionUtils(126): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-11-27 04:59:36,323 INFO [Listener at localhost/34689] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-11-27 04:59:36,323 INFO [Listener at localhost/34689] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-11-27 04:59:36,324 INFO [Listener at localhost/34689] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-11-27 04:59:36,324 INFO [Listener at localhost/34689] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-11-27 04:59:36,324 INFO [Listener at localhost/34689] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-11-27 04:59:36,324 INFO [Listener at localhost/34689] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-11-27 04:59:36,326 INFO [Listener at localhost/34689] ipc.NettyRpcServer(120): Bind to /172.31.14.131:41841 2023-11-27 
04:59:36,326 INFO [Listener at localhost/34689] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-11-27 04:59:36,327 DEBUG [Listener at localhost/34689] mob.MobFileCache(121): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-11-27 04:59:36,328 INFO [Listener at localhost/34689] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-11-27 04:59:36,330 INFO [Listener at localhost/34689] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-11-27 04:59:36,332 INFO [Listener at localhost/34689] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:41841 connecting to ZooKeeper ensemble=127.0.0.1:50029 2023-11-27 04:59:36,336 DEBUG [Listener at localhost/34689-EventThread] zookeeper.ZKWatcher(600): regionserver:418410x0, quorum=127.0.0.1:50029, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-11-27 04:59:36,337 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:41841-0x1002d35880e0002 connected 2023-11-27 04:59:36,337 DEBUG [Listener at localhost/34689] zookeeper.ZKUtil(165): regionserver:41841-0x1002d35880e0002, quorum=127.0.0.1:50029, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-11-27 04:59:36,338 DEBUG [Listener at localhost/34689] zookeeper.ZKUtil(165): regionserver:41841-0x1002d35880e0002, quorum=127.0.0.1:50029, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-11-27 04:59:36,338 DEBUG [Listener at localhost/34689] zookeeper.ZKUtil(165): regionserver:41841-0x1002d35880e0002, quorum=127.0.0.1:50029, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-11-27 04:59:36,339 DEBUG [Listener at localhost/34689] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=41841 2023-11-27 04:59:36,339 DEBUG [Listener at localhost/34689] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=41841 2023-11-27 04:59:36,339 DEBUG [Listener at localhost/34689] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=41841 2023-11-27 04:59:36,340 DEBUG [Listener at localhost/34689] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=41841 2023-11-27 04:59:36,340 DEBUG [Listener at localhost/34689] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=41841 2023-11-27 04:59:36,342 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2168): Adding backup master ZNode /hbase/backup-masters/jenkins-hbase4.apache.org,33323,1701061175121 2023-11-27 04:59:36,352 DEBUG [Listener at localhost/34689-EventThread] zookeeper.ZKWatcher(600): master:33323-0x1002d35880e0000, quorum=127.0.0.1:50029, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-11-27 04:59:36,354 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(163): master:33323-0x1002d35880e0000, quorum=127.0.0.1:50029, baseZNode=/hbase Set watcher on existing 
znode=/hbase/backup-masters/jenkins-hbase4.apache.org,33323,1701061175121 2023-11-27 04:59:36,372 DEBUG [Listener at localhost/34689-EventThread] zookeeper.ZKWatcher(600): regionserver:41841-0x1002d35880e0002, quorum=127.0.0.1:50029, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-11-27 04:59:36,372 DEBUG [Listener at localhost/34689-EventThread] zookeeper.ZKWatcher(600): master:33323-0x1002d35880e0000, quorum=127.0.0.1:50029, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-11-27 04:59:36,372 DEBUG [Listener at localhost/34689-EventThread] zookeeper.ZKWatcher(600): regionserver:41853-0x1002d35880e0001, quorum=127.0.0.1:50029, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-11-27 04:59:36,373 DEBUG [Listener at localhost/34689-EventThread] zookeeper.ZKWatcher(600): master:33323-0x1002d35880e0000, quorum=127.0.0.1:50029, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-11-27 04:59:36,373 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(163): master:33323-0x1002d35880e0000, quorum=127.0.0.1:50029, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-11-27 04:59:36,375 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ActiveMasterManager(227): Deleting ZNode for /hbase/backup-masters/jenkins-hbase4.apache.org,33323,1701061175121 from backup master directory 2023-11-27 04:59:36,375 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(163): master:33323-0x1002d35880e0000, quorum=127.0.0.1:50029, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-11-27 04:59:36,378 DEBUG [Listener at localhost/34689-EventThread] zookeeper.ZKWatcher(600): master:33323-0x1002d35880e0000, quorum=127.0.0.1:50029, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/backup-masters/jenkins-hbase4.apache.org,33323,1701061175121 2023-11-27 04:59:36,379 DEBUG [Listener at localhost/34689-EventThread] zookeeper.ZKWatcher(600): master:33323-0x1002d35880e0000, quorum=127.0.0.1:50029, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-11-27 04:59:36,379 WARN [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
2023-11-27 04:59:36,379 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ActiveMasterManager(237): Registered as active master=jenkins-hbase4.apache.org,33323,1701061175121 2023-11-27 04:59:36,382 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.ChunkCreator(497): Allocating data MemStoreChunkPool with chunk size 2 MB, max count 352, initial count 0 2023-11-27 04:59:36,383 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.ChunkCreator(497): Allocating index MemStoreChunkPool with chunk size 204.80 KB, max count 391, initial count 0 2023-11-27 04:59:36,465 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] util.FSUtils(620): Created cluster ID file at hdfs://localhost:41015/user/jenkins/test-data/88c861d8-82fa-b545-1769-68eb426e44d0/hbase.id with ID: e6350d7e-c892-48f0-9fae-0b264f2e1921 2023-11-27 04:59:36,509 INFO [master/jenkins-hbase4:0:becomeActiveMaster] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-11-27 04:59:36,523 DEBUG [Listener at localhost/34689-EventThread] zookeeper.ZKWatcher(600): master:33323-0x1002d35880e0000, quorum=127.0.0.1:50029, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-11-27 04:59:36,566 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ReadOnlyZKClient(139): Connect 0x3937d8f7 to 127.0.0.1:50029 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-11-27 04:59:36,599 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] ipc.AbstractRpcClient(189): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@647b52a0, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-11-27 04:59:36,623 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(309): Create or load local region for table 'master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-11-27 04:59:36,626 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(132): Injected flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000 2023-11-27 04:59:36,643 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] asyncfs.FanOutOneBlockAsyncDFSOutputHelper(264): ClientProtocol::create wrong number of arguments, should be hadoop 3.2 or below 2023-11-27 04:59:36,643 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] asyncfs.FanOutOneBlockAsyncDFSOutputHelper(270): ClientProtocol::create wrong number of arguments, should be hadoop 2.x 2023-11-27 04:59:36,645 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] asyncfs.FanOutOneBlockAsyncDFSOutputHelper(279): can not find SHOULD_REPLICATE flag, should be hadoop 2.x java.lang.IllegalArgumentException: No enum constant org.apache.hadoop.fs.CreateFlag.SHOULD_REPLICATE at java.lang.Enum.valueOf(Enum.java:238) at org.apache.hadoop.fs.CreateFlag.valueOf(CreateFlag.java:63) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.loadShouldReplicateFlag(FanOutOneBlockAsyncDFSOutputHelper.java:277) at 
org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.<clinit>(FanOutOneBlockAsyncDFSOutputHelper.java:304) at java.lang.Class.forName0(Native Method) at java.lang.Class.forName(Class.java:264) at org.apache.hadoop.hbase.wal.AsyncFSWALProvider.load(AsyncFSWALProvider.java:139) at org.apache.hadoop.hbase.wal.WALFactory.getProviderClass(WALFactory.java:135) at org.apache.hadoop.hbase.wal.WALFactory.getProvider(WALFactory.java:175) at org.apache.hadoop.hbase.wal.WALFactory.<init>(WALFactory.java:202) at org.apache.hadoop.hbase.wal.WALFactory.<init>(WALFactory.java:182) at org.apache.hadoop.hbase.master.region.MasterRegion.create(MasterRegion.java:339) at org.apache.hadoop.hbase.master.region.MasterRegionFactory.create(MasterRegionFactory.java:104) at org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:855) at org.apache.hadoop.hbase.master.HMaster.startActiveMasterManager(HMaster.java:2193) at org.apache.hadoop.hbase.master.HMaster.lambda$run$0(HMaster.java:528) at java.lang.Thread.run(Thread.java:750) 2023-11-27 04:59:36,649 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(243): No decryptEncryptedDataEncryptionKey method in DFSClient, should be hadoop version with HDFS-12396 java.lang.NoSuchMethodException: org.apache.hadoop.hdfs.DFSClient.decryptEncryptedDataEncryptionKey(org.apache.hadoop.fs.FileEncryptionInfo) at java.lang.Class.getDeclaredMethod(Class.java:2130) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper.createTransparentCryptoHelperWithoutHDFS12396(FanOutOneBlockAsyncDFSOutputSaslHelper.java:182) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper.createTransparentCryptoHelper(FanOutOneBlockAsyncDFSOutputSaslHelper.java:241) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper.<clinit>(FanOutOneBlockAsyncDFSOutputSaslHelper.java:252) at java.lang.Class.forName0(Native Method) at java.lang.Class.forName(Class.java:264) at org.apache.hadoop.hbase.wal.AsyncFSWALProvider.load(AsyncFSWALProvider.java:140) at org.apache.hadoop.hbase.wal.WALFactory.getProviderClass(WALFactory.java:135) at org.apache.hadoop.hbase.wal.WALFactory.getProvider(WALFactory.java:175) at org.apache.hadoop.hbase.wal.WALFactory.<init>(WALFactory.java:202) at org.apache.hadoop.hbase.wal.WALFactory.<init>(WALFactory.java:182) at org.apache.hadoop.hbase.master.region.MasterRegion.create(MasterRegion.java:339) at org.apache.hadoop.hbase.master.region.MasterRegionFactory.create(MasterRegionFactory.java:104) at org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:855) at org.apache.hadoop.hbase.master.HMaster.startActiveMasterManager(HMaster.java:2193) at org.apache.hadoop.hbase.master.HMaster.lambda$run$0(HMaster.java:528) at java.lang.Thread.run(Thread.java:750) 2023-11-27 04:59:36,650 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-11-27 04:59:36,683 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7693): Creating {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE =>
'65536', REPLICATION_SCOPE => '0'}, under table dir hdfs://localhost:41015/user/jenkins/test-data/88c861d8-82fa-b545-1769-68eb426e44d0/MasterData/data/master/store-tmp 2023-11-27 04:59:36,715 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-11-27 04:59:36,715 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-11-27 04:59:36,715 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-11-27 04:59:36,716 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-11-27 04:59:36,716 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-11-27 04:59:36,716 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-11-27 04:59:36,716 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-11-27 04:59:36,716 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-11-27 04:59:36,717 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(191): WALDir=hdfs://localhost:41015/user/jenkins/test-data/88c861d8-82fa-b545-1769-68eb426e44d0/MasterData/WALs/jenkins-hbase4.apache.org,33323,1701061175121 2023-11-27 04:59:36,738 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C33323%2C1701061175121, suffix=, logDir=hdfs://localhost:41015/user/jenkins/test-data/88c861d8-82fa-b545-1769-68eb426e44d0/MasterData/WALs/jenkins-hbase4.apache.org,33323,1701061175121, archiveDir=hdfs://localhost:41015/user/jenkins/test-data/88c861d8-82fa-b545-1769-68eb426e44d0/MasterData/oldWALs, maxLogs=10 2023-11-27 04:59:36,801 DEBUG [RS-EventLoopGroup-4-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:40543,DS-ee024098-c42c-48f3-a34a-47e38fee1b14,DISK] 2023-11-27 04:59:36,801 DEBUG [RS-EventLoopGroup-4-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:35723,DS-8e8c4241-d25c-4419-8408-7d31707c4cd1,DISK] 2023-11-27 04:59:36,811 DEBUG [RS-EventLoopGroup-4-2] asyncfs.ProtobufDecoder(123): Hadoop 3.2 and below use unshaded protobuf. 
java.lang.ClassNotFoundException: org.apache.hadoop.thirdparty.protobuf.MessageLite at java.net.URLClassLoader.findClass(URLClassLoader.java:387) at java.lang.ClassLoader.loadClass(ClassLoader.java:418) at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:352) at java.lang.ClassLoader.loadClass(ClassLoader.java:351) at java.lang.Class.forName0(Native Method) at java.lang.Class.forName(Class.java:264) at org.apache.hadoop.hbase.io.asyncfs.ProtobufDecoder.<init>(ProtobufDecoder.java:118) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.processWriteBlockResponse(FanOutOneBlockAsyncDFSOutputHelper.java:340) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.access$100(FanOutOneBlockAsyncDFSOutputHelper.java:112) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper$4.operationComplete(FanOutOneBlockAsyncDFSOutputHelper.java:424) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:590) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:557) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:492) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.addListener(DefaultPromise.java:185) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.initialize(FanOutOneBlockAsyncDFSOutputHelper.java:418) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.access$300(FanOutOneBlockAsyncDFSOutputHelper.java:112) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper$5.operationComplete(FanOutOneBlockAsyncDFSOutputHelper.java:476) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper$5.operationComplete(FanOutOneBlockAsyncDFSOutputHelper.java:471) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:590) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListeners0(DefaultPromise.java:583) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:559) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:492) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.setValue0(DefaultPromise.java:636) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.setSuccess0(DefaultPromise.java:625) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.trySuccess(DefaultPromise.java:105) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPromise.trySuccess(DefaultChannelPromise.java:84) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.fulfillConnectPromise(AbstractEpollChannel.java:653) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.finishConnect(AbstractEpollChannel.java:691) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.epollOutReady(AbstractEpollChannel.java:567) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:499) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:407) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at
org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) at java.lang.Thread.run(Thread.java:750) 2023-11-27 04:59:36,878 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/88c861d8-82fa-b545-1769-68eb426e44d0/MasterData/WALs/jenkins-hbase4.apache.org,33323,1701061175121/jenkins-hbase4.apache.org%2C33323%2C1701061175121.1701061176749 2023-11-27 04:59:36,878 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:35723,DS-8e8c4241-d25c-4419-8408-7d31707c4cd1,DISK], DatanodeInfoWithStorage[127.0.0.1:40543,DS-ee024098-c42c-48f3-a34a-47e38fee1b14,DISK]] 2023-11-27 04:59:36,879 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7854): Opening region: {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''} 2023-11-27 04:59:36,879 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-11-27 04:59:36,882 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7894): checking encryption for 1595e783b53d99cd5eef43b6debb2682 2023-11-27 04:59:36,884 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7897): checking classloading for 1595e783b53d99cd5eef43b6debb2682 2023-11-27 04:59:36,941 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family proc of region 1595e783b53d99cd5eef43b6debb2682 2023-11-27 04:59:36,948 DEBUG [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41015/user/jenkins/test-data/88c861d8-82fa-b545-1769-68eb426e44d0/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc 2023-11-27 04:59:36,971 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:10, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1595e783b53d99cd5eef43b6debb2682 columnFamilyName proc 2023-11-27 04:59:36,982 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(310): Store=1595e783b53d99cd5eef43b6debb2682/proc, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-11-27 04:59:36,988 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] 
regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41015/user/jenkins/test-data/88c861d8-82fa-b545-1769-68eb426e44d0/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-11-27 04:59:36,990 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41015/user/jenkins/test-data/88c861d8-82fa-b545-1769-68eb426e44d0/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-11-27 04:59:37,008 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1055): writing seq id for 1595e783b53d99cd5eef43b6debb2682 2023-11-27 04:59:37,013 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41015/user/jenkins/test-data/88c861d8-82fa-b545-1769-68eb426e44d0/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-11-27 04:59:37,015 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1072): Opened 1595e783b53d99cd5eef43b6debb2682; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=74132643, jitterRate=0.1046624630689621}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-11-27 04:59:37,015 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(965): Region open journal for 1595e783b53d99cd5eef43b6debb2682: 2023-11-27 04:59:37,016 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(122): Constructor flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000, compactMin=4 2023-11-27 04:59:37,039 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(104): Starting the Region Procedure Store, number threads=5 2023-11-27 04:59:37,039 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(561): Starting 5 core workers (bigger of cpus/4 or 16) with max (burst) worker count=50 2023-11-27 04:59:37,041 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(255): Starting Region Procedure Store lease recovery... 2023-11-27 04:59:37,043 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(581): Recovered RegionProcedureStore lease in 1 msec 2023-11-27 04:59:37,075 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(595): Loaded RegionProcedureStore in 31 msec 2023-11-27 04:59:37,075 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.RemoteProcedureDispatcher(96): Instantiated, coreThreads=3 (allowCoreThreadTimeOut=true), queueMaxSize=32, operationDelay=150 2023-11-27 04:59:37,099 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(253): hbase:meta replica znodes: [] 2023-11-27 04:59:37,105 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.RegionServerTracker(124): Starting RegionServerTracker; 0 have existing ServerCrashProcedures, 0 possibly 'live' servers, and 0 'splitting'. 
2023-11-27 04:59:37,130 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-11-27 04:59:37,133 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 2023-11-27 04:59:37,134 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(165): master:33323-0x1002d35880e0000, quorum=127.0.0.1:50029, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/balancer 2023-11-27 04:59:37,139 INFO [master/jenkins-hbase4:0:becomeActiveMaster] normalizer.RegionNormalizerWorker(118): Normalizer rate limit set to unlimited 2023-11-27 04:59:37,143 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(165): master:33323-0x1002d35880e0000, quorum=127.0.0.1:50029, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/normalizer 2023-11-27 04:59:37,147 DEBUG [Listener at localhost/34689-EventThread] zookeeper.ZKWatcher(600): master:33323-0x1002d35880e0000, quorum=127.0.0.1:50029, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-11-27 04:59:37,148 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(165): master:33323-0x1002d35880e0000, quorum=127.0.0.1:50029, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/split 2023-11-27 04:59:37,148 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(165): master:33323-0x1002d35880e0000, quorum=127.0.0.1:50029, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/merge 2023-11-27 04:59:37,160 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(165): master:33323-0x1002d35880e0000, quorum=127.0.0.1:50029, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/snapshot-cleanup 2023-11-27 04:59:37,164 DEBUG [Listener at localhost/34689-EventThread] zookeeper.ZKWatcher(600): regionserver:41853-0x1002d35880e0001, quorum=127.0.0.1:50029, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-11-27 04:59:37,164 DEBUG [Listener at localhost/34689-EventThread] zookeeper.ZKWatcher(600): master:33323-0x1002d35880e0000, quorum=127.0.0.1:50029, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-11-27 04:59:37,164 DEBUG [Listener at localhost/34689-EventThread] zookeeper.ZKWatcher(600): regionserver:41841-0x1002d35880e0002, quorum=127.0.0.1:50029, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-11-27 04:59:37,164 DEBUG [Listener at localhost/34689-EventThread] zookeeper.ZKWatcher(600): master:33323-0x1002d35880e0000, quorum=127.0.0.1:50029, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-11-27 04:59:37,165 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(744): Active/primary master=jenkins-hbase4.apache.org,33323,1701061175121, 
sessionid=0x1002d35880e0000, setting cluster-up flag (Was=false) 2023-11-27 04:59:37,182 DEBUG [Listener at localhost/34689-EventThread] zookeeper.ZKWatcher(600): master:33323-0x1002d35880e0000, quorum=127.0.0.1:50029, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-11-27 04:59:37,189 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/flush-table-proc/acquired, /hbase/flush-table-proc/reached, /hbase/flush-table-proc/abort 2023-11-27 04:59:37,190 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,33323,1701061175121 2023-11-27 04:59:37,196 DEBUG [Listener at localhost/34689-EventThread] zookeeper.ZKWatcher(600): master:33323-0x1002d35880e0000, quorum=127.0.0.1:50029, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-11-27 04:59:37,203 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/online-snapshot/acquired, /hbase/online-snapshot/reached, /hbase/online-snapshot/abort 2023-11-27 04:59:37,204 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,33323,1701061175121 2023-11-27 04:59:37,207 WARN [master/jenkins-hbase4:0:becomeActiveMaster] snapshot.SnapshotManager(304): Couldn't delete working snapshot directory: hdfs://localhost:41015/user/jenkins/test-data/88c861d8-82fa-b545-1769-68eb426e44d0/.hbase-snapshot/.tmp 2023-11-27 04:59:37,245 INFO [RS:0;jenkins-hbase4:41853] regionserver.HRegionServer(954): ClusterId : e6350d7e-c892-48f0-9fae-0b264f2e1921 2023-11-27 04:59:37,246 INFO [RS:1;jenkins-hbase4:41841] regionserver.HRegionServer(954): ClusterId : e6350d7e-c892-48f0-9fae-0b264f2e1921 2023-11-27 04:59:37,249 DEBUG [RS:0;jenkins-hbase4:41853] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-11-27 04:59:37,249 DEBUG [RS:1;jenkins-hbase4:41841] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-11-27 04:59:37,255 DEBUG [RS:1;jenkins-hbase4:41841] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-11-27 04:59:37,256 INFO [master/jenkins-hbase4:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.quotas.MasterQuotasObserver loaded, priority=536870911. 
2023-11-27 04:59:37,256 DEBUG [RS:1;jenkins-hbase4:41841] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-11-27 04:59:37,255 DEBUG [RS:0;jenkins-hbase4:41853] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-11-27 04:59:37,256 DEBUG [RS:0;jenkins-hbase4:41853] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-11-27 04:59:37,260 DEBUG [RS:1;jenkins-hbase4:41841] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-11-27 04:59:37,260 DEBUG [RS:0;jenkins-hbase4:41853] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-11-27 04:59:37,263 DEBUG [RS:1;jenkins-hbase4:41841] zookeeper.ReadOnlyZKClient(139): Connect 0x7b053c7c to 127.0.0.1:50029 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-11-27 04:59:37,263 DEBUG [RS:0;jenkins-hbase4:41853] zookeeper.ReadOnlyZKClient(139): Connect 0x3c445bdd to 127.0.0.1:50029 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-11-27 04:59:37,270 DEBUG [RS:1;jenkins-hbase4:41841] ipc.AbstractRpcClient(189): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@75c2dd6c, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-11-27 04:59:37,270 DEBUG [RS:0;jenkins-hbase4:41853] ipc.AbstractRpcClient(189): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@3283b225, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-11-27 04:59:37,271 DEBUG [RS:1;jenkins-hbase4:41841] ipc.AbstractRpcClient(189): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@71fd570c, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-11-27 04:59:37,271 DEBUG [RS:0;jenkins-hbase4:41853] ipc.AbstractRpcClient(189): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@36bc30ad, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-11-27 04:59:37,297 DEBUG [RS:1;jenkins-hbase4:41841] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:1;jenkins-hbase4:41841 2023-11-27 04:59:37,299 DEBUG [RS:0;jenkins-hbase4:41853] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:0;jenkins-hbase4:41853 2023-11-27 04:59:37,302 INFO [RS:1;jenkins-hbase4:41841] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-11-27 04:59:37,302 INFO [RS:0;jenkins-hbase4:41853] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-11-27 04:59:37,303 INFO [RS:1;jenkins-hbase4:41841] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-11-27 04:59:37,303 INFO [RS:0;jenkins-hbase4:41853] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-11-27 04:59:37,303 DEBUG [RS:1;jenkins-hbase4:41841] regionserver.HRegionServer(1025): About to register with Master. 
2023-11-27 04:59:37,303 DEBUG [RS:0;jenkins-hbase4:41853] regionserver.HRegionServer(1025): About to register with Master. 2023-11-27 04:59:37,306 INFO [RS:1;jenkins-hbase4:41841] regionserver.HRegionServer(2814): reportForDuty to master=jenkins-hbase4.apache.org,33323,1701061175121 with isa=jenkins-hbase4.apache.org/172.31.14.131:41841, startcode=1701061176322 2023-11-27 04:59:37,306 INFO [RS:0;jenkins-hbase4:41853] regionserver.HRegionServer(2814): reportForDuty to master=jenkins-hbase4.apache.org,33323,1701061175121 with isa=jenkins-hbase4.apache.org/172.31.14.131:41853, startcode=1701061176279 2023-11-27 04:59:37,326 DEBUG [RS:1;jenkins-hbase4:41841] ipc.RpcConnection(122): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-11-27 04:59:37,326 DEBUG [RS:0;jenkins-hbase4:41853] ipc.RpcConnection(122): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-11-27 04:59:37,334 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1028): Stored pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT; InitMetaProcedure table=hbase:meta 2023-11-27 04:59:37,345 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_OPEN_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-11-27 04:59:37,345 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_CLOSE_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-11-27 04:59:37,345 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-11-27 04:59:37,345 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_META_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-11-27 04:59:37,346 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=M_LOG_REPLAY_OPS-master/jenkins-hbase4:0, corePoolSize=10, maxPoolSize=10 2023-11-27 04:59:37,346 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-11-27 04:59:37,346 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_MERGE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-11-27 04:59:37,346 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_TABLE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-11-27 04:59:37,347 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner; timeout=30000, timestamp=1701061207347 2023-11-27 04:59:37,349 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): log_cleaner Cleaner pool size is 1 2023-11-27 04:59:37,363 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner 2023-11-27 04:59:37,365 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT, locked=true; 
InitMetaProcedure table=hbase:meta 2023-11-27 04:59:37,366 INFO [PEWorker-1] procedure.InitMetaProcedure(71): BOOTSTRAP: creating hbase:meta region 2023-11-27 04:59:37,372 INFO [PEWorker-1] util.FSTableDescriptors(128): Creating new hbase:meta table descriptor 'hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-11-27 04:59:37,372 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner 2023-11-27 04:59:37,373 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreWALCleaner 2023-11-27 04:59:37,374 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveProcedureWALCleaner 2023-11-27 04:59:37,374 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.LogCleaner(148): Creating 1 old WALs cleaner threads 2023-11-27 04:59:37,375 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=LogsCleaner, period=600000, unit=MILLISECONDS is enabled. 
2023-11-27 04:59:37,377 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): hfile_cleaner Cleaner pool size is 2 2023-11-27 04:59:37,379 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreHFileCleaner 2023-11-27 04:59:37,379 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.HFileLinkCleaner 2023-11-27 04:59:37,382 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.snapshot.SnapshotHFileCleaner 2023-11-27 04:59:37,382 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner 2023-11-27 04:59:37,383 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(242): Starting for large file=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1701061177383,5,FailOnTimeoutGroup] 2023-11-27 04:59:37,384 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(257): Starting for small files=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1701061177383,5,FailOnTimeoutGroup] 2023-11-27 04:59:37,384 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HFileCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-11-27 04:59:37,384 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1461): Reopening regions with very high storeFileRefCount is disabled. Provide threshold value > 0 for hbase.regions.recovery.store.file.ref.count to enable it. 2023-11-27 04:59:37,385 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=ReplicationBarrierCleaner, period=43200000, unit=MILLISECONDS is enabled. 2023-11-27 04:59:37,385 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=SnapshotCleaner, period=1800000, unit=MILLISECONDS is enabled. 
2023-11-27 04:59:37,386 INFO [RS-EventLoopGroup-1-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:48919, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.1 (auth:SIMPLE), service=RegionServerStatusService 2023-11-27 04:59:37,386 INFO [RS-EventLoopGroup-1-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:42035, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.0 (auth:SIMPLE), service=RegionServerStatusService 2023-11-27 04:59:37,398 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=33323] master.ServerManager(388): Registering regionserver=jenkins-hbase4.apache.org,41841,1701061176322 2023-11-27 04:59:37,415 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost:41015/user/jenkins/test-data/88c861d8-82fa-b545-1769-68eb426e44d0/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-11-27 04:59:37,415 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=33323] master.ServerManager(388): Registering regionserver=jenkins-hbase4.apache.org,41853,1701061176279 2023-11-27 04:59:37,416 INFO [PEWorker-1] util.FSTableDescriptors(135): Updated hbase:meta table descriptor to hdfs://localhost:41015/user/jenkins/test-data/88c861d8-82fa-b545-1769-68eb426e44d0/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-11-27 04:59:37,417 INFO [PEWorker-1] regionserver.HRegion(7675): creating {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:41015/user/jenkins/test-data/88c861d8-82fa-b545-1769-68eb426e44d0 2023-11-27 04:59:37,421 DEBUG [RS:0;jenkins-hbase4:41853] regionserver.HRegionServer(1598): Config from master: hbase.rootdir=hdfs://localhost:41015/user/jenkins/test-data/88c861d8-82fa-b545-1769-68eb426e44d0 2023-11-27 04:59:37,422 DEBUG [RS:1;jenkins-hbase4:41841] regionserver.HRegionServer(1598): Config from master: hbase.rootdir=hdfs://localhost:41015/user/jenkins/test-data/88c861d8-82fa-b545-1769-68eb426e44d0 2023-11-27 04:59:37,423 DEBUG [RS:0;jenkins-hbase4:41853] regionserver.HRegionServer(1598): Config from master: fs.defaultFS=hdfs://localhost:41015 2023-11-27 04:59:37,423 DEBUG [RS:1;jenkins-hbase4:41841] regionserver.HRegionServer(1598): Config from master: fs.defaultFS=hdfs://localhost:41015 2023-11-27 04:59:37,423 DEBUG [RS:0;jenkins-hbase4:41853] regionserver.HRegionServer(1598): Config from master: hbase.master.info.port=-1 2023-11-27 04:59:37,423 DEBUG [RS:1;jenkins-hbase4:41841] regionserver.HRegionServer(1598): Config from master: hbase.master.info.port=-1 2023-11-27 
04:59:37,433 DEBUG [Listener at localhost/34689-EventThread] zookeeper.ZKWatcher(600): master:33323-0x1002d35880e0000, quorum=127.0.0.1:50029, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-11-27 04:59:37,435 DEBUG [RS:0;jenkins-hbase4:41853] zookeeper.ZKUtil(163): regionserver:41853-0x1002d35880e0001, quorum=127.0.0.1:50029, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41853,1701061176279 2023-11-27 04:59:37,435 DEBUG [RS:1;jenkins-hbase4:41841] zookeeper.ZKUtil(163): regionserver:41841-0x1002d35880e0002, quorum=127.0.0.1:50029, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41841,1701061176322 2023-11-27 04:59:37,435 WARN [RS:0;jenkins-hbase4:41853] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-11-27 04:59:37,435 WARN [RS:1;jenkins-hbase4:41841] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-11-27 04:59:37,437 INFO [RS:1;jenkins-hbase4:41841] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-11-27 04:59:37,438 DEBUG [RS:1;jenkins-hbase4:41841] regionserver.HRegionServer(1951): logDir=hdfs://localhost:41015/user/jenkins/test-data/88c861d8-82fa-b545-1769-68eb426e44d0/WALs/jenkins-hbase4.apache.org,41841,1701061176322 2023-11-27 04:59:37,437 INFO [RS:0;jenkins-hbase4:41853] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-11-27 04:59:37,438 DEBUG [RS:0;jenkins-hbase4:41853] regionserver.HRegionServer(1951): logDir=hdfs://localhost:41015/user/jenkins/test-data/88c861d8-82fa-b545-1769-68eb426e44d0/WALs/jenkins-hbase4.apache.org,41853,1701061176279 2023-11-27 04:59:37,439 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,41841,1701061176322] 2023-11-27 04:59:37,440 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,41853,1701061176279] 2023-11-27 04:59:37,443 DEBUG [PEWorker-1] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-11-27 04:59:37,446 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-11-27 04:59:37,448 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41015/user/jenkins/test-data/88c861d8-82fa-b545-1769-68eb426e44d0/data/hbase/meta/1588230740/info 2023-11-27 04:59:37,449 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:10, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered 
window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-11-27 04:59:37,451 DEBUG [RS:1;jenkins-hbase4:41841] zookeeper.ZKUtil(163): regionserver:41841-0x1002d35880e0002, quorum=127.0.0.1:50029, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41841,1701061176322 2023-11-27 04:59:37,451 DEBUG [RS:0;jenkins-hbase4:41853] zookeeper.ZKUtil(163): regionserver:41853-0x1002d35880e0001, quorum=127.0.0.1:50029, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41841,1701061176322 2023-11-27 04:59:37,451 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-11-27 04:59:37,452 DEBUG [RS:1;jenkins-hbase4:41841] zookeeper.ZKUtil(163): regionserver:41841-0x1002d35880e0002, quorum=127.0.0.1:50029, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41853,1701061176279 2023-11-27 04:59:37,452 DEBUG [RS:0;jenkins-hbase4:41853] zookeeper.ZKUtil(163): regionserver:41853-0x1002d35880e0001, quorum=127.0.0.1:50029, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41853,1701061176279 2023-11-27 04:59:37,452 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-11-27 04:59:37,458 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41015/user/jenkins/test-data/88c861d8-82fa-b545-1769-68eb426e44d0/data/hbase/meta/1588230740/rep_barrier 2023-11-27 04:59:37,458 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:10, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-11-27 04:59:37,459 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-11-27 04:59:37,460 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-11-27 04:59:37,462 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for 
path=hdfs://localhost:41015/user/jenkins/test-data/88c861d8-82fa-b545-1769-68eb426e44d0/data/hbase/meta/1588230740/table 2023-11-27 04:59:37,462 DEBUG [RS:0;jenkins-hbase4:41853] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-11-27 04:59:37,462 DEBUG [RS:1;jenkins-hbase4:41841] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-11-27 04:59:37,463 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:10, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-11-27 04:59:37,464 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-11-27 04:59:37,465 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41015/user/jenkins/test-data/88c861d8-82fa-b545-1769-68eb426e44d0/data/hbase/meta/1588230740 2023-11-27 04:59:37,466 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41015/user/jenkins/test-data/88c861d8-82fa-b545-1769-68eb426e44d0/data/hbase/meta/1588230740 2023-11-27 04:59:37,470 DEBUG [PEWorker-1] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 
2023-11-27 04:59:37,472 DEBUG [PEWorker-1] regionserver.HRegion(1055): writing seq id for 1588230740 2023-11-27 04:59:37,472 INFO [RS:0;jenkins-hbase4:41853] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-11-27 04:59:37,472 INFO [RS:1;jenkins-hbase4:41841] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-11-27 04:59:37,476 DEBUG [PEWorker-1] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41015/user/jenkins/test-data/88c861d8-82fa-b545-1769-68eb426e44d0/data/hbase/meta/1588230740/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-11-27 04:59:37,477 INFO [PEWorker-1] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=59316423, jitterRate=-0.11611641943454742}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-11-27 04:59:37,477 DEBUG [PEWorker-1] regionserver.HRegion(965): Region open journal for 1588230740: 2023-11-27 04:59:37,477 DEBUG [PEWorker-1] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-11-27 04:59:37,477 INFO [PEWorker-1] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-11-27 04:59:37,477 DEBUG [PEWorker-1] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-11-27 04:59:37,477 DEBUG [PEWorker-1] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-11-27 04:59:37,477 DEBUG [PEWorker-1] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-11-27 04:59:37,478 INFO [PEWorker-1] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-11-27 04:59:37,478 DEBUG [PEWorker-1] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-11-27 04:59:37,483 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_ASSIGN_META, locked=true; InitMetaProcedure table=hbase:meta 2023-11-27 04:59:37,483 INFO [PEWorker-1] procedure.InitMetaProcedure(103): Going to assign meta 2023-11-27 04:59:37,492 INFO [PEWorker-1] procedure2.ProcedureExecutor(1680): Initialized subprocedures=[{pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN}] 2023-11-27 04:59:37,493 INFO [RS:1;jenkins-hbase4:41841] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-11-27 04:59:37,493 INFO [RS:0;jenkins-hbase4:41853] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-11-27 04:59:37,496 INFO [RS:1;jenkins-hbase4:41841] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-11-27 04:59:37,496 INFO [RS:0;jenkins-hbase4:41853] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-11-27 04:59:37,497 INFO [RS:1;jenkins-hbase4:41841] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 
2023-11-27 04:59:37,497 INFO [RS:0;jenkins-hbase4:41853] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-11-27 04:59:37,503 INFO [RS:0;jenkins-hbase4:41853] regionserver.HRegionServer$CompactionChecker(1840): CompactionChecker runs every PT1S 2023-11-27 04:59:37,503 INFO [RS:1;jenkins-hbase4:41841] regionserver.HRegionServer$CompactionChecker(1840): CompactionChecker runs every PT1S 2023-11-27 04:59:37,505 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN 2023-11-27 04:59:37,507 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN; state=OFFLINE, location=null; forceNewPlan=false, retain=false 2023-11-27 04:59:37,510 INFO [RS:1;jenkins-hbase4:41841] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-11-27 04:59:37,510 INFO [RS:0;jenkins-hbase4:41853] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-11-27 04:59:37,511 DEBUG [RS:1;jenkins-hbase4:41841] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-11-27 04:59:37,511 DEBUG [RS:0;jenkins-hbase4:41853] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-11-27 04:59:37,511 DEBUG [RS:1;jenkins-hbase4:41841] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-11-27 04:59:37,511 DEBUG [RS:0;jenkins-hbase4:41853] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-11-27 04:59:37,511 DEBUG [RS:1;jenkins-hbase4:41841] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-11-27 04:59:37,511 DEBUG [RS:0;jenkins-hbase4:41853] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-11-27 04:59:37,511 DEBUG [RS:1;jenkins-hbase4:41841] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-11-27 04:59:37,511 DEBUG [RS:0;jenkins-hbase4:41853] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-11-27 04:59:37,511 DEBUG [RS:1;jenkins-hbase4:41841] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-11-27 04:59:37,511 DEBUG [RS:0;jenkins-hbase4:41853] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-11-27 04:59:37,512 DEBUG [RS:1;jenkins-hbase4:41841] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 
2023-11-27 04:59:37,512 DEBUG [RS:0;jenkins-hbase4:41853] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-11-27 04:59:37,512 DEBUG [RS:1;jenkins-hbase4:41841] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-11-27 04:59:37,512 DEBUG [RS:0;jenkins-hbase4:41853] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-11-27 04:59:37,512 DEBUG [RS:1;jenkins-hbase4:41841] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-11-27 04:59:37,512 DEBUG [RS:0;jenkins-hbase4:41853] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-11-27 04:59:37,512 DEBUG [RS:1;jenkins-hbase4:41841] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-11-27 04:59:37,512 DEBUG [RS:0;jenkins-hbase4:41853] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-11-27 04:59:37,512 DEBUG [RS:1;jenkins-hbase4:41841] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-11-27 04:59:37,512 DEBUG [RS:0;jenkins-hbase4:41853] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-11-27 04:59:37,514 INFO [RS:1;jenkins-hbase4:41841] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-11-27 04:59:37,514 INFO [RS:1;jenkins-hbase4:41841] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-11-27 04:59:37,514 INFO [RS:1;jenkins-hbase4:41841] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-11-27 04:59:37,514 INFO [RS:1;jenkins-hbase4:41841] hbase.ChoreService(166): Chore ScheduledChore name=FileSystemUtilizationChore, period=300000, unit=MILLISECONDS is enabled. 2023-11-27 04:59:37,514 INFO [RS:0;jenkins-hbase4:41853] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-11-27 04:59:37,515 INFO [RS:0;jenkins-hbase4:41853] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-11-27 04:59:37,515 INFO [RS:0;jenkins-hbase4:41853] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-11-27 04:59:37,515 INFO [RS:0;jenkins-hbase4:41853] hbase.ChoreService(166): Chore ScheduledChore name=FileSystemUtilizationChore, period=300000, unit=MILLISECONDS is enabled. 
2023-11-27 04:59:37,531 INFO [RS:0;jenkins-hbase4:41853] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-11-27 04:59:37,531 INFO [RS:1;jenkins-hbase4:41841] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-11-27 04:59:37,534 INFO [RS:1;jenkins-hbase4:41841] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,41841,1701061176322-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-11-27 04:59:37,534 INFO [RS:0;jenkins-hbase4:41853] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,41853,1701061176279-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-11-27 04:59:37,557 INFO [RS:1;jenkins-hbase4:41841] regionserver.Replication(203): jenkins-hbase4.apache.org,41841,1701061176322 started 2023-11-27 04:59:37,557 INFO [RS:0;jenkins-hbase4:41853] regionserver.Replication(203): jenkins-hbase4.apache.org,41853,1701061176279 started 2023-11-27 04:59:37,557 INFO [RS:1;jenkins-hbase4:41841] regionserver.HRegionServer(1640): Serving as jenkins-hbase4.apache.org,41841,1701061176322, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:41841, sessionid=0x1002d35880e0002 2023-11-27 04:59:37,557 INFO [RS:0;jenkins-hbase4:41853] regionserver.HRegionServer(1640): Serving as jenkins-hbase4.apache.org,41853,1701061176279, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:41853, sessionid=0x1002d35880e0001 2023-11-27 04:59:37,557 DEBUG [RS:1;jenkins-hbase4:41841] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-11-27 04:59:37,557 DEBUG [RS:0;jenkins-hbase4:41853] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-11-27 04:59:37,557 DEBUG [RS:1;jenkins-hbase4:41841] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,41841,1701061176322 2023-11-27 04:59:37,557 DEBUG [RS:0;jenkins-hbase4:41853] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,41853,1701061176279 2023-11-27 04:59:37,558 DEBUG [RS:1;jenkins-hbase4:41841] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,41841,1701061176322' 2023-11-27 04:59:37,558 DEBUG [RS:0;jenkins-hbase4:41853] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,41853,1701061176279' 2023-11-27 04:59:37,559 DEBUG [RS:0;jenkins-hbase4:41853] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-11-27 04:59:37,558 DEBUG [RS:1;jenkins-hbase4:41841] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-11-27 04:59:37,559 DEBUG [RS:0;jenkins-hbase4:41853] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-11-27 04:59:37,559 DEBUG [RS:1;jenkins-hbase4:41841] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-11-27 04:59:37,560 DEBUG [RS:0;jenkins-hbase4:41853] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-11-27 04:59:37,560 DEBUG [RS:1;jenkins-hbase4:41841] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-11-27 04:59:37,560 DEBUG [RS:0;jenkins-hbase4:41853] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 
2023-11-27 04:59:37,560 DEBUG [RS:1;jenkins-hbase4:41841] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-11-27 04:59:37,561 DEBUG [RS:0;jenkins-hbase4:41853] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,41853,1701061176279 2023-11-27 04:59:37,561 DEBUG [RS:1;jenkins-hbase4:41841] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,41841,1701061176322 2023-11-27 04:59:37,561 DEBUG [RS:0;jenkins-hbase4:41853] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,41853,1701061176279' 2023-11-27 04:59:37,561 DEBUG [RS:0;jenkins-hbase4:41853] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-11-27 04:59:37,561 DEBUG [RS:1;jenkins-hbase4:41841] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,41841,1701061176322' 2023-11-27 04:59:37,561 DEBUG [RS:1;jenkins-hbase4:41841] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-11-27 04:59:37,562 DEBUG [RS:0;jenkins-hbase4:41853] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-11-27 04:59:37,562 DEBUG [RS:1;jenkins-hbase4:41841] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-11-27 04:59:37,563 DEBUG [RS:0;jenkins-hbase4:41853] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-11-27 04:59:37,563 INFO [RS:0;jenkins-hbase4:41853] quotas.RegionServerRpcQuotaManager(67): Initializing RPC quota support 2023-11-27 04:59:37,563 DEBUG [RS:1;jenkins-hbase4:41841] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-11-27 04:59:37,563 INFO [RS:1;jenkins-hbase4:41841] quotas.RegionServerRpcQuotaManager(67): Initializing RPC quota support 2023-11-27 04:59:37,566 INFO [RS:0;jenkins-hbase4:41853] hbase.ChoreService(166): Chore ScheduledChore name=QuotaRefresherChore, period=1800000, unit=MILLISECONDS is enabled. 2023-11-27 04:59:37,566 INFO [RS:1;jenkins-hbase4:41841] hbase.ChoreService(166): Chore ScheduledChore name=QuotaRefresherChore, period=1800000, unit=MILLISECONDS is enabled. 2023-11-27 04:59:37,567 DEBUG [RS:0;jenkins-hbase4:41853] zookeeper.ZKUtil(399): regionserver:41853-0x1002d35880e0001, quorum=127.0.0.1:50029, baseZNode=/hbase Unable to get data of znode /hbase/rpc-throttle because node does not exist (not an error) 2023-11-27 04:59:37,567 DEBUG [RS:1;jenkins-hbase4:41841] zookeeper.ZKUtil(399): regionserver:41841-0x1002d35880e0002, quorum=127.0.0.1:50029, baseZNode=/hbase Unable to get data of znode /hbase/rpc-throttle because node does not exist (not an error) 2023-11-27 04:59:37,567 INFO [RS:0;jenkins-hbase4:41853] quotas.RegionServerRpcQuotaManager(73): Start rpc quota manager and rpc throttle enabled is true 2023-11-27 04:59:37,567 INFO [RS:1;jenkins-hbase4:41841] quotas.RegionServerRpcQuotaManager(73): Start rpc quota manager and rpc throttle enabled is true 2023-11-27 04:59:37,568 INFO [RS:0;jenkins-hbase4:41853] hbase.ChoreService(166): Chore ScheduledChore name=SpaceQuotaRefresherChore, period=60000, unit=MILLISECONDS is enabled. 2023-11-27 04:59:37,568 INFO [RS:1;jenkins-hbase4:41841] hbase.ChoreService(166): Chore ScheduledChore name=SpaceQuotaRefresherChore, period=60000, unit=MILLISECONDS is enabled. 
2023-11-27 04:59:37,568 INFO [RS:0;jenkins-hbase4:41853] hbase.ChoreService(166): Chore ScheduledChore name=RegionSizeReportingChore, period=60000, unit=MILLISECONDS is enabled. 2023-11-27 04:59:37,568 INFO [RS:1;jenkins-hbase4:41841] hbase.ChoreService(166): Chore ScheduledChore name=RegionSizeReportingChore, period=60000, unit=MILLISECONDS is enabled. 2023-11-27 04:59:37,659 DEBUG [jenkins-hbase4:33323] assignment.AssignmentManager(2186): Processing assignQueue; systemServersCount=2, allServersCount=2 2023-11-27 04:59:37,662 DEBUG [jenkins-hbase4:33323] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-11-27 04:59:37,668 DEBUG [jenkins-hbase4:33323] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-11-27 04:59:37,668 DEBUG [jenkins-hbase4:33323] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-11-27 04:59:37,668 DEBUG [jenkins-hbase4:33323] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-11-27 04:59:37,671 INFO [PEWorker-3] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,41841,1701061176322, state=OPENING 2023-11-27 04:59:37,678 DEBUG [PEWorker-3] zookeeper.MetaTableLocator(240): hbase:meta region location doesn't exist, create it 2023-11-27 04:59:37,679 INFO [RS:0;jenkins-hbase4:41853] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C41853%2C1701061176279, suffix=, logDir=hdfs://localhost:41015/user/jenkins/test-data/88c861d8-82fa-b545-1769-68eb426e44d0/WALs/jenkins-hbase4.apache.org,41853,1701061176279, archiveDir=hdfs://localhost:41015/user/jenkins/test-data/88c861d8-82fa-b545-1769-68eb426e44d0/oldWALs, maxLogs=32 2023-11-27 04:59:37,679 INFO [RS:1;jenkins-hbase4:41841] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C41841%2C1701061176322, suffix=, logDir=hdfs://localhost:41015/user/jenkins/test-data/88c861d8-82fa-b545-1769-68eb426e44d0/WALs/jenkins-hbase4.apache.org,41841,1701061176322, archiveDir=hdfs://localhost:41015/user/jenkins/test-data/88c861d8-82fa-b545-1769-68eb426e44d0/oldWALs, maxLogs=32 2023-11-27 04:59:37,680 DEBUG [Listener at localhost/34689-EventThread] zookeeper.ZKWatcher(600): master:33323-0x1002d35880e0000, quorum=127.0.0.1:50029, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-11-27 04:59:37,681 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-11-27 04:59:37,685 INFO [PEWorker-3] procedure2.ProcedureExecutor(1680): Initialized subprocedures=[{pid=3, ppid=2, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,41841,1701061176322}] 2023-11-27 04:59:37,708 DEBUG [RS-EventLoopGroup-4-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:35723,DS-8e8c4241-d25c-4419-8408-7d31707c4cd1,DISK] 2023-11-27 04:59:37,732 DEBUG [RS-EventLoopGroup-4-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:40543,DS-ee024098-c42c-48f3-a34a-47e38fee1b14,DISK] 2023-11-27 04:59:37,733 DEBUG 
[RS-EventLoopGroup-4-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:35723,DS-8e8c4241-d25c-4419-8408-7d31707c4cd1,DISK] 2023-11-27 04:59:37,733 DEBUG [RS-EventLoopGroup-4-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:40543,DS-ee024098-c42c-48f3-a34a-47e38fee1b14,DISK] 2023-11-27 04:59:37,740 INFO [RS:0;jenkins-hbase4:41853] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/88c861d8-82fa-b545-1769-68eb426e44d0/WALs/jenkins-hbase4.apache.org,41853,1701061176279/jenkins-hbase4.apache.org%2C41853%2C1701061176279.1701061177683 2023-11-27 04:59:37,740 INFO [RS:1;jenkins-hbase4:41841] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/88c861d8-82fa-b545-1769-68eb426e44d0/WALs/jenkins-hbase4.apache.org,41841,1701061176322/jenkins-hbase4.apache.org%2C41841%2C1701061176322.1701061177683 2023-11-27 04:59:37,743 DEBUG [RS:0;jenkins-hbase4:41853] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:40543,DS-ee024098-c42c-48f3-a34a-47e38fee1b14,DISK], DatanodeInfoWithStorage[127.0.0.1:35723,DS-8e8c4241-d25c-4419-8408-7d31707c4cd1,DISK]] 2023-11-27 04:59:37,743 DEBUG [RS:1;jenkins-hbase4:41841] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:35723,DS-8e8c4241-d25c-4419-8408-7d31707c4cd1,DISK], DatanodeInfoWithStorage[127.0.0.1:40543,DS-ee024098-c42c-48f3-a34a-47e38fee1b14,DISK]] 2023-11-27 04:59:37,869 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(702): New admin connection to jenkins-hbase4.apache.org,41841,1701061176322 2023-11-27 04:59:37,871 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(122): Using SIMPLE authentication for service=AdminService, sasl=false 2023-11-27 04:59:37,874 INFO [RS-EventLoopGroup-4-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:50772, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-11-27 04:59:37,885 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740 2023-11-27 04:59:37,885 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-11-27 04:59:37,888 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C41841%2C1701061176322.meta, suffix=.meta, logDir=hdfs://localhost:41015/user/jenkins/test-data/88c861d8-82fa-b545-1769-68eb426e44d0/WALs/jenkins-hbase4.apache.org,41841,1701061176322, archiveDir=hdfs://localhost:41015/user/jenkins/test-data/88c861d8-82fa-b545-1769-68eb426e44d0/oldWALs, maxLogs=32 2023-11-27 04:59:37,905 DEBUG [RS-EventLoopGroup-4-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:35723,DS-8e8c4241-d25c-4419-8408-7d31707c4cd1,DISK] 2023-11-27 04:59:37,905 DEBUG [RS-EventLoopGroup-4-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = 
DatanodeInfoWithStorage[127.0.0.1:40543,DS-ee024098-c42c-48f3-a34a-47e38fee1b14,DISK] 2023-11-27 04:59:37,912 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/88c861d8-82fa-b545-1769-68eb426e44d0/WALs/jenkins-hbase4.apache.org,41841,1701061176322/jenkins-hbase4.apache.org%2C41841%2C1701061176322.meta.1701061177889.meta 2023-11-27 04:59:37,912 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:35723,DS-8e8c4241-d25c-4419-8408-7d31707c4cd1,DISK], DatanodeInfoWithStorage[127.0.0.1:40543,DS-ee024098-c42c-48f3-a34a-47e38fee1b14,DISK]] 2023-11-27 04:59:37,912 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-11-27 04:59:37,914 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-11-27 04:59:37,930 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2023-11-27 04:59:37,935 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 2023-11-27 04:59:37,940 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-11-27 04:59:37,940 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-11-27 04:59:37,940 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 1588230740 2023-11-27 04:59:37,940 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 1588230740 2023-11-27 04:59:37,943 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-11-27 04:59:37,945 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41015/user/jenkins/test-data/88c861d8-82fa-b545-1769-68eb426e44d0/data/hbase/meta/1588230740/info 2023-11-27 04:59:37,945 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41015/user/jenkins/test-data/88c861d8-82fa-b545-1769-68eb426e44d0/data/hbase/meta/1588230740/info 2023-11-27 04:59:37,945 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:10, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered 
window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-11-27 04:59:37,947 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-11-27 04:59:37,947 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-11-27 04:59:37,949 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41015/user/jenkins/test-data/88c861d8-82fa-b545-1769-68eb426e44d0/data/hbase/meta/1588230740/rep_barrier 2023-11-27 04:59:37,949 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41015/user/jenkins/test-data/88c861d8-82fa-b545-1769-68eb426e44d0/data/hbase/meta/1588230740/rep_barrier 2023-11-27 04:59:37,950 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:10, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-11-27 04:59:37,950 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-11-27 04:59:37,951 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-11-27 04:59:37,952 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41015/user/jenkins/test-data/88c861d8-82fa-b545-1769-68eb426e44d0/data/hbase/meta/1588230740/table 2023-11-27 04:59:37,952 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41015/user/jenkins/test-data/88c861d8-82fa-b545-1769-68eb426e44d0/data/hbase/meta/1588230740/table 2023-11-27 04:59:37,952 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:10, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, 
compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-11-27 04:59:37,953 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-11-27 04:59:37,955 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41015/user/jenkins/test-data/88c861d8-82fa-b545-1769-68eb426e44d0/data/hbase/meta/1588230740 2023-11-27 04:59:37,958 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41015/user/jenkins/test-data/88c861d8-82fa-b545-1769-68eb426e44d0/data/hbase/meta/1588230740 2023-11-27 04:59:37,961 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 2023-11-27 04:59:37,963 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-11-27 04:59:37,964 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=75071568, jitterRate=0.11865353584289551}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-11-27 04:59:37,965 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-11-27 04:59:37,975 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2339): Post open deploy tasks for hbase:meta,,1.1588230740, pid=3, masterSystemTime=1701061177860 2023-11-27 04:59:37,991 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2366): Finished post open deploy task for hbase:meta,,1.1588230740 2023-11-27 04:59:37,991 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 2023-11-27 04:59:37,992 INFO [PEWorker-5] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,41841,1701061176322, state=OPEN 2023-11-27 04:59:37,995 DEBUG [Listener at localhost/34689-EventThread] zookeeper.ZKWatcher(600): master:33323-0x1002d35880e0000, quorum=127.0.0.1:50029, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-11-27 04:59:37,995 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-11-27 04:59:37,999 INFO [PEWorker-5] procedure2.ProcedureExecutor(1825): Finished subprocedure pid=3, resume processing ppid=2 2023-11-27 04:59:37,999 INFO [PEWorker-5] procedure2.ProcedureExecutor(1409): Finished pid=3, ppid=2, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,41841,1701061176322 in 310 msec 2023-11-27 04:59:38,004 INFO [PEWorker-1] procedure2.ProcedureExecutor(1825): Finished 
subprocedure pid=2, resume processing ppid=1 2023-11-27 04:59:38,004 INFO [PEWorker-1] procedure2.ProcedureExecutor(1409): Finished pid=2, ppid=1, state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN in 507 msec 2023-11-27 04:59:38,009 INFO [PEWorker-2] procedure2.ProcedureExecutor(1409): Finished pid=1, state=SUCCESS; InitMetaProcedure table=hbase:meta in 744 msec 2023-11-27 04:59:38,009 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(953): Wait for region servers to report in: status=null, state=RUNNING, startTime=1701061178009, completionTime=-1 2023-11-27 04:59:38,009 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ServerManager(808): Finished waiting on RegionServer count=2; waited=0ms, expected min=2 server(s), max=2 server(s), master is running 2023-11-27 04:59:38,009 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1527): Joining cluster... 2023-11-27 04:59:38,071 DEBUG [hconnection-0x28a52f56-shared-pool-0] ipc.RpcConnection(122): Using SIMPLE authentication for service=ClientService, sasl=false 2023-11-27 04:59:38,076 INFO [RS-EventLoopGroup-4-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:50776, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-11-27 04:59:38,091 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1539): Number of RegionServers=2 2023-11-27 04:59:38,091 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$RegionInTransitionChore; timeout=60000, timestamp=1701061238091 2023-11-27 04:59:38,091 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$DeadServerMetricRegionChore; timeout=120000, timestamp=1701061298091 2023-11-27 04:59:38,091 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1546): Joined the cluster in 81 msec 2023-11-27 04:59:38,113 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,33323,1701061175121-ClusterStatusChore, period=60000, unit=MILLISECONDS is enabled. 2023-11-27 04:59:38,113 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,33323,1701061175121-BalancerChore, period=300000, unit=MILLISECONDS is enabled. 2023-11-27 04:59:38,113 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,33323,1701061175121-RegionNormalizerChore, period=300000, unit=MILLISECONDS is enabled. 2023-11-27 04:59:38,115 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=CatalogJanitor-jenkins-hbase4:33323, period=300000, unit=MILLISECONDS is enabled. 2023-11-27 04:59:38,115 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HbckChore-, period=3600000, unit=MILLISECONDS is enabled. 2023-11-27 04:59:38,120 DEBUG [master/jenkins-hbase4:0.Chore.1] janitor.CatalogJanitor(175): 2023-11-27 04:59:38,129 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.TableNamespaceManager(92): Namespace table not found. Creating... 
2023-11-27 04:59:38,130 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2148): Client=null/null create 'hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-11-27 04:59:38,139 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1028): Stored pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:namespace 2023-11-27 04:59:38,141 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_PRE_OPERATION 2023-11-27 04:59:38,144 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-11-27 04:59:38,167 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:41015/user/jenkins/test-data/88c861d8-82fa-b545-1769-68eb426e44d0/.tmp/data/hbase/namespace/708094d2c6013f8353947ca009f33ef1 2023-11-27 04:59:38,169 DEBUG [HFileArchiver-1] backup.HFileArchiver(153): Directory hdfs://localhost:41015/user/jenkins/test-data/88c861d8-82fa-b545-1769-68eb426e44d0/.tmp/data/hbase/namespace/708094d2c6013f8353947ca009f33ef1 empty. 2023-11-27 04:59:38,170 DEBUG [HFileArchiver-1] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:41015/user/jenkins/test-data/88c861d8-82fa-b545-1769-68eb426e44d0/.tmp/data/hbase/namespace/708094d2c6013f8353947ca009f33ef1 2023-11-27 04:59:38,170 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived hbase:namespace regions 2023-11-27 04:59:38,197 DEBUG [PEWorker-3] util.FSTableDescriptors(570): Wrote into hdfs://localhost:41015/user/jenkins/test-data/88c861d8-82fa-b545-1769-68eb426e44d0/.tmp/data/hbase/namespace/.tabledesc/.tableinfo.0000000001 2023-11-27 04:59:38,199 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(7675): creating {ENCODED => 708094d2c6013f8353947ca009f33ef1, NAME => 'hbase:namespace,,1701061178129.708094d2c6013f8353947ca009f33ef1.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:41015/user/jenkins/test-data/88c861d8-82fa-b545-1769-68eb426e44d0/.tmp 2023-11-27 04:59:38,214 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1701061178129.708094d2c6013f8353947ca009f33ef1.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-11-27 04:59:38,214 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1604): Closing 708094d2c6013f8353947ca009f33ef1, disabling compactions & flushes 2023-11-27 04:59:38,214 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1701061178129.708094d2c6013f8353947ca009f33ef1. 
2023-11-27 04:59:38,214 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1701061178129.708094d2c6013f8353947ca009f33ef1. 2023-11-27 04:59:38,214 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1701061178129.708094d2c6013f8353947ca009f33ef1. after waiting 0 ms 2023-11-27 04:59:38,214 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1701061178129.708094d2c6013f8353947ca009f33ef1. 2023-11-27 04:59:38,214 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1838): Closed hbase:namespace,,1701061178129.708094d2c6013f8353947ca009f33ef1. 2023-11-27 04:59:38,215 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1558): Region close journal for 708094d2c6013f8353947ca009f33ef1: 2023-11-27 04:59:38,218 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ADD_TO_META 2023-11-27 04:59:38,233 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:namespace,,1701061178129.708094d2c6013f8353947ca009f33ef1.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1701061178221"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1701061178221"}]},"ts":"1701061178221"} 2023-11-27 04:59:38,258 INFO [PEWorker-3] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-11-27 04:59:38,260 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-11-27 04:59:38,264 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1701061178260"}]},"ts":"1701061178260"} 2023-11-27 04:59:38,272 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLING in hbase:meta 2023-11-27 04:59:38,277 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-11-27 04:59:38,279 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-11-27 04:59:38,279 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-11-27 04:59:38,279 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-11-27 04:59:38,282 INFO [PEWorker-3] procedure2.ProcedureExecutor(1680): Initialized subprocedures=[{pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=708094d2c6013f8353947ca009f33ef1, ASSIGN}] 2023-11-27 04:59:38,284 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=708094d2c6013f8353947ca009f33ef1, ASSIGN 2023-11-27 04:59:38,286 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:namespace, region=708094d2c6013f8353947ca009f33ef1, 
ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,41841,1701061176322; forceNewPlan=false, retain=false 2023-11-27 04:59:38,439 INFO [jenkins-hbase4:33323] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 2023-11-27 04:59:38,440 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=708094d2c6013f8353947ca009f33ef1, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,41841,1701061176322 2023-11-27 04:59:38,440 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:namespace,,1701061178129.708094d2c6013f8353947ca009f33ef1.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1701061178439"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1701061178439"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1701061178439"}]},"ts":"1701061178439"} 2023-11-27 04:59:38,444 INFO [PEWorker-5] procedure2.ProcedureExecutor(1680): Initialized subprocedures=[{pid=6, ppid=5, state=RUNNABLE; OpenRegionProcedure 708094d2c6013f8353947ca009f33ef1, server=jenkins-hbase4.apache.org,41841,1701061176322}] 2023-11-27 04:59:38,598 DEBUG [RSProcedureDispatcher-pool-1] master.ServerManager(702): New admin connection to jenkins-hbase4.apache.org,41841,1701061176322 2023-11-27 04:59:38,605 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1701061178129.708094d2c6013f8353947ca009f33ef1. 2023-11-27 04:59:38,606 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 708094d2c6013f8353947ca009f33ef1, NAME => 'hbase:namespace,,1701061178129.708094d2c6013f8353947ca009f33ef1.', STARTKEY => '', ENDKEY => ''} 2023-11-27 04:59:38,607 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace 708094d2c6013f8353947ca009f33ef1 2023-11-27 04:59:38,607 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1701061178129.708094d2c6013f8353947ca009f33ef1.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-11-27 04:59:38,607 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 708094d2c6013f8353947ca009f33ef1 2023-11-27 04:59:38,607 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 708094d2c6013f8353947ca009f33ef1 2023-11-27 04:59:38,609 INFO [StoreOpener-708094d2c6013f8353947ca009f33ef1-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 708094d2c6013f8353947ca009f33ef1 2023-11-27 04:59:38,612 DEBUG [StoreOpener-708094d2c6013f8353947ca009f33ef1-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41015/user/jenkins/test-data/88c861d8-82fa-b545-1769-68eb426e44d0/data/hbase/namespace/708094d2c6013f8353947ca009f33ef1/info 2023-11-27 04:59:38,612 DEBUG [StoreOpener-708094d2c6013f8353947ca009f33ef1-1] util.CommonFSUtils(522): Set storagePolicy=HOT for 
path=hdfs://localhost:41015/user/jenkins/test-data/88c861d8-82fa-b545-1769-68eb426e44d0/data/hbase/namespace/708094d2c6013f8353947ca009f33ef1/info 2023-11-27 04:59:38,612 INFO [StoreOpener-708094d2c6013f8353947ca009f33ef1-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:10, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 708094d2c6013f8353947ca009f33ef1 columnFamilyName info 2023-11-27 04:59:38,613 INFO [StoreOpener-708094d2c6013f8353947ca009f33ef1-1] regionserver.HStore(310): Store=708094d2c6013f8353947ca009f33ef1/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-11-27 04:59:38,615 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41015/user/jenkins/test-data/88c861d8-82fa-b545-1769-68eb426e44d0/data/hbase/namespace/708094d2c6013f8353947ca009f33ef1 2023-11-27 04:59:38,616 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41015/user/jenkins/test-data/88c861d8-82fa-b545-1769-68eb426e44d0/data/hbase/namespace/708094d2c6013f8353947ca009f33ef1 2023-11-27 04:59:38,620 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 708094d2c6013f8353947ca009f33ef1 2023-11-27 04:59:38,624 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41015/user/jenkins/test-data/88c861d8-82fa-b545-1769-68eb426e44d0/data/hbase/namespace/708094d2c6013f8353947ca009f33ef1/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-11-27 04:59:38,625 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 708094d2c6013f8353947ca009f33ef1; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=60424403, jitterRate=-0.0996062308549881}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-11-27 04:59:38,625 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 708094d2c6013f8353947ca009f33ef1: 2023-11-27 04:59:38,627 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2339): Post open deploy tasks for hbase:namespace,,1701061178129.708094d2c6013f8353947ca009f33ef1., pid=6, masterSystemTime=1701061178598 2023-11-27 04:59:38,632 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2366): Finished post open deploy task for hbase:namespace,,1701061178129.708094d2c6013f8353947ca009f33ef1. 2023-11-27 04:59:38,632 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1701061178129.708094d2c6013f8353947ca009f33ef1. 
2023-11-27 04:59:38,633 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=708094d2c6013f8353947ca009f33ef1, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,41841,1701061176322 2023-11-27 04:59:38,633 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1701061178129.708094d2c6013f8353947ca009f33ef1.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1701061178632"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1701061178632"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1701061178632"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1701061178632"}]},"ts":"1701061178632"} 2023-11-27 04:59:38,640 INFO [PEWorker-2] procedure2.ProcedureExecutor(1825): Finished subprocedure pid=6, resume processing ppid=5 2023-11-27 04:59:38,640 INFO [PEWorker-2] procedure2.ProcedureExecutor(1409): Finished pid=6, ppid=5, state=SUCCESS; OpenRegionProcedure 708094d2c6013f8353947ca009f33ef1, server=jenkins-hbase4.apache.org,41841,1701061176322 in 193 msec 2023-11-27 04:59:38,644 INFO [PEWorker-3] procedure2.ProcedureExecutor(1825): Finished subprocedure pid=5, resume processing ppid=4 2023-11-27 04:59:38,645 INFO [PEWorker-3] procedure2.ProcedureExecutor(1409): Finished pid=5, ppid=4, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=708094d2c6013f8353947ca009f33ef1, ASSIGN in 358 msec 2023-11-27 04:59:38,646 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-11-27 04:59:38,646 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1701061178646"}]},"ts":"1701061178646"} 2023-11-27 04:59:38,649 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLED in hbase:meta 2023-11-27 04:59:38,652 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_POST_OPERATION 2023-11-27 04:59:38,655 INFO [PEWorker-4] procedure2.ProcedureExecutor(1409): Finished pid=4, state=SUCCESS; CreateTableProcedure table=hbase:namespace in 521 msec 2023-11-27 04:59:38,741 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(165): master:33323-0x1002d35880e0000, quorum=127.0.0.1:50029, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/namespace 2023-11-27 04:59:38,743 DEBUG [Listener at localhost/34689-EventThread] zookeeper.ZKWatcher(600): master:33323-0x1002d35880e0000, quorum=127.0.0.1:50029, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/namespace 2023-11-27 04:59:38,743 DEBUG [Listener at localhost/34689-EventThread] zookeeper.ZKWatcher(600): master:33323-0x1002d35880e0000, quorum=127.0.0.1:50029, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-11-27 04:59:38,776 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1028): Stored pid=7, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=default 2023-11-27 04:59:38,796 DEBUG [Listener at localhost/34689-EventThread] zookeeper.ZKWatcher(600): 
master:33323-0x1002d35880e0000, quorum=127.0.0.1:50029, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-11-27 04:59:38,802 INFO [PEWorker-5] procedure2.ProcedureExecutor(1409): Finished pid=7, state=SUCCESS; CreateNamespaceProcedure, namespace=default in 32 msec 2023-11-27 04:59:38,810 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1028): Stored pid=8, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=hbase 2023-11-27 04:59:38,820 DEBUG [Listener at localhost/34689-EventThread] zookeeper.ZKWatcher(600): master:33323-0x1002d35880e0000, quorum=127.0.0.1:50029, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-11-27 04:59:38,825 INFO [PEWorker-1] procedure2.ProcedureExecutor(1409): Finished pid=8, state=SUCCESS; CreateNamespaceProcedure, namespace=hbase in 14 msec 2023-11-27 04:59:38,838 DEBUG [Listener at localhost/34689-EventThread] zookeeper.ZKWatcher(600): master:33323-0x1002d35880e0000, quorum=127.0.0.1:50029, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/default 2023-11-27 04:59:38,841 DEBUG [Listener at localhost/34689-EventThread] zookeeper.ZKWatcher(600): master:33323-0x1002d35880e0000, quorum=127.0.0.1:50029, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/hbase 2023-11-27 04:59:38,841 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1083): Master has completed initialization 2.461sec 2023-11-27 04:59:38,843 INFO [master/jenkins-hbase4:0:becomeActiveMaster] quotas.MasterQuotaManager(103): Quota table not found. Creating... 2023-11-27 04:59:38,844 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2148): Client=null/null create 'hbase:quota', {NAME => 'q', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'u', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-11-27 04:59:38,845 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1028): Stored pid=9, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:quota 2023-11-27 04:59:38,845 INFO [master/jenkins-hbase4:0:becomeActiveMaster] quotas.MasterQuotaManager(107): Initializing quota support 2023-11-27 04:59:38,847 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_PRE_OPERATION 2023-11-27 04:59:38,848 INFO [master/jenkins-hbase4:0:becomeActiveMaster] namespace.NamespaceStateManager(59): Namespace State Manager started. 
2023-11-27 04:59:38,849 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-11-27 04:59:38,851 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:41015/user/jenkins/test-data/88c861d8-82fa-b545-1769-68eb426e44d0/.tmp/data/hbase/quota/be5ef4f3dfb2c43b447798061e19f02f 2023-11-27 04:59:38,852 DEBUG [HFileArchiver-2] backup.HFileArchiver(153): Directory hdfs://localhost:41015/user/jenkins/test-data/88c861d8-82fa-b545-1769-68eb426e44d0/.tmp/data/hbase/quota/be5ef4f3dfb2c43b447798061e19f02f empty. 2023-11-27 04:59:38,854 DEBUG [HFileArchiver-2] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:41015/user/jenkins/test-data/88c861d8-82fa-b545-1769-68eb426e44d0/.tmp/data/hbase/quota/be5ef4f3dfb2c43b447798061e19f02f 2023-11-27 04:59:38,854 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived hbase:quota regions 2023-11-27 04:59:38,855 DEBUG [Listener at localhost/34689] zookeeper.ReadOnlyZKClient(139): Connect 0x028b264a to 127.0.0.1:50029 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-11-27 04:59:38,855 INFO [master/jenkins-hbase4:0:becomeActiveMaster] namespace.NamespaceStateManager(222): Finished updating state of 2 namespaces. 2023-11-27 04:59:38,855 INFO [master/jenkins-hbase4:0:becomeActiveMaster] namespace.NamespaceAuditor(50): NamespaceAuditor started. 2023-11-27 04:59:38,857 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=QuotaObserverChore, period=60000, unit=MILLISECONDS is enabled. 2023-11-27 04:59:38,858 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=QuotaObserverChore, period=300000, unit=MILLISECONDS is enabled. 2023-11-27 04:59:38,860 INFO [master/jenkins-hbase4:0:becomeActiveMaster] slowlog.SlowLogMasterService(57): Slow/Large requests logging to system table hbase:slowlog is disabled. Quitting. 2023-11-27 04:59:38,860 INFO [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKWatcher(269): not a secure deployment, proceeding 2023-11-27 04:59:38,861 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,33323,1701061175121-ExpiredMobFileCleanerChore, period=86400, unit=SECONDS is enabled. 2023-11-27 04:59:38,862 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,33323,1701061175121-MobCompactionChore, period=604800, unit=SECONDS is enabled. 
2023-11-27 04:59:38,863 DEBUG [Listener at localhost/34689] ipc.AbstractRpcClient(189): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@3a303ece, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-11-27 04:59:38,878 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1175): Balancer post startup initialization complete, took 0 seconds 2023-11-27 04:59:38,883 DEBUG [hconnection-0x67fa6ab4-shared-pool-0] ipc.RpcConnection(122): Using SIMPLE authentication for service=ClientService, sasl=false 2023-11-27 04:59:38,896 INFO [RS-EventLoopGroup-4-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:50782, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-11-27 04:59:38,906 INFO [Listener at localhost/34689] hbase.HBaseTestingUtility(1145): Minicluster is up; activeMaster=jenkins-hbase4.apache.org,33323,1701061175121 2023-11-27 04:59:38,907 INFO [Listener at localhost/34689] hbase.Waiter(180): Waiting up to [30,000] milli-secs(wait.for.ratio=[1]) 2023-11-27 04:59:38,910 WARN [Listener at localhost/34689] client.ConnectionImplementation(764): Table hbase:quota does not exist 2023-11-27 04:59:39,012 WARN [Listener at localhost/34689] client.ConnectionImplementation(764): Table hbase:quota does not exist 2023-11-27 04:59:39,114 WARN [Listener at localhost/34689] client.ConnectionImplementation(764): Table hbase:quota does not exist 2023-11-27 04:59:39,216 WARN [Listener at localhost/34689] client.ConnectionImplementation(764): Table hbase:quota does not exist 2023-11-27 04:59:39,281 DEBUG [PEWorker-2] util.FSTableDescriptors(570): Wrote into hdfs://localhost:41015/user/jenkins/test-data/88c861d8-82fa-b545-1769-68eb426e44d0/.tmp/data/hbase/quota/.tabledesc/.tableinfo.0000000001 2023-11-27 04:59:39,282 INFO [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(7675): creating {ENCODED => be5ef4f3dfb2c43b447798061e19f02f, NAME => 'hbase:quota,,1701061178843.be5ef4f3dfb2c43b447798061e19f02f.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:quota', {NAME => 'q', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'u', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:41015/user/jenkins/test-data/88c861d8-82fa-b545-1769-68eb426e44d0/.tmp 2023-11-27 04:59:39,301 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(866): Instantiated hbase:quota,,1701061178843.be5ef4f3dfb2c43b447798061e19f02f.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-11-27 04:59:39,301 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1604): Closing be5ef4f3dfb2c43b447798061e19f02f, disabling compactions & flushes 2023-11-27 04:59:39,301 INFO [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1626): Closing region hbase:quota,,1701061178843.be5ef4f3dfb2c43b447798061e19f02f. 
2023-11-27 04:59:39,301 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:quota,,1701061178843.be5ef4f3dfb2c43b447798061e19f02f. 2023-11-27 04:59:39,301 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:quota,,1701061178843.be5ef4f3dfb2c43b447798061e19f02f. after waiting 0 ms 2023-11-27 04:59:39,301 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:quota,,1701061178843.be5ef4f3dfb2c43b447798061e19f02f. 2023-11-27 04:59:39,301 INFO [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1838): Closed hbase:quota,,1701061178843.be5ef4f3dfb2c43b447798061e19f02f. 2023-11-27 04:59:39,301 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1558): Region close journal for be5ef4f3dfb2c43b447798061e19f02f: 2023-11-27 04:59:39,305 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_ADD_TO_META 2023-11-27 04:59:39,307 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:quota,,1701061178843.be5ef4f3dfb2c43b447798061e19f02f.","families":{"info":[{"qualifier":"regioninfo","vlen":37,"tag":[],"timestamp":"1701061179307"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1701061179307"}]},"ts":"1701061179307"} 2023-11-27 04:59:39,309 INFO [PEWorker-2] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-11-27 04:59:39,311 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-11-27 04:59:39,311 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:quota","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1701061179311"}]},"ts":"1701061179311"} 2023-11-27 04:59:39,314 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=hbase:quota, state=ENABLING in hbase:meta 2023-11-27 04:59:39,319 DEBUG [Listener at localhost/34689] client.ConnectionImplementation(716): Table hbase:quota not enabled 2023-11-27 04:59:39,319 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-11-27 04:59:39,321 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-11-27 04:59:39,321 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-11-27 04:59:39,321 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-11-27 04:59:39,321 INFO [PEWorker-2] procedure2.ProcedureExecutor(1680): Initialized subprocedures=[{pid=10, ppid=9, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:quota, region=be5ef4f3dfb2c43b447798061e19f02f, ASSIGN}] 2023-11-27 04:59:39,323 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=10, ppid=9, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:quota, region=be5ef4f3dfb2c43b447798061e19f02f, ASSIGN 2023-11-27 04:59:39,324 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=10, ppid=9, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure 
table=hbase:quota, region=be5ef4f3dfb2c43b447798061e19f02f, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,41841,1701061176322; forceNewPlan=false, retain=false 2023-11-27 04:59:39,421 DEBUG [Listener at localhost/34689] client.ConnectionImplementation(716): Table hbase:quota not enabled 2023-11-27 04:59:39,474 INFO [jenkins-hbase4:33323] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 2023-11-27 04:59:39,475 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=10 updating hbase:meta row=be5ef4f3dfb2c43b447798061e19f02f, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,41841,1701061176322 2023-11-27 04:59:39,475 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:quota,,1701061178843.be5ef4f3dfb2c43b447798061e19f02f.","families":{"info":[{"qualifier":"regioninfo","vlen":37,"tag":[],"timestamp":"1701061179475"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1701061179475"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1701061179475"}]},"ts":"1701061179475"} 2023-11-27 04:59:39,478 INFO [PEWorker-4] procedure2.ProcedureExecutor(1680): Initialized subprocedures=[{pid=11, ppid=10, state=RUNNABLE; OpenRegionProcedure be5ef4f3dfb2c43b447798061e19f02f, server=jenkins-hbase4.apache.org,41841,1701061176322}] 2023-11-27 04:59:39,523 DEBUG [Listener at localhost/34689] client.ConnectionImplementation(716): Table hbase:quota not enabled 2023-11-27 04:59:39,625 DEBUG [Listener at localhost/34689] client.ConnectionImplementation(716): Table hbase:quota not enabled 2023-11-27 04:59:39,631 DEBUG [RSProcedureDispatcher-pool-2] master.ServerManager(702): New admin connection to jenkins-hbase4.apache.org,41841,1701061176322 2023-11-27 04:59:39,637 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:quota,,1701061178843.be5ef4f3dfb2c43b447798061e19f02f. 
2023-11-27 04:59:39,637 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => be5ef4f3dfb2c43b447798061e19f02f, NAME => 'hbase:quota,,1701061178843.be5ef4f3dfb2c43b447798061e19f02f.', STARTKEY => '', ENDKEY => ''} 2023-11-27 04:59:39,637 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table quota be5ef4f3dfb2c43b447798061e19f02f 2023-11-27 04:59:39,637 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:quota,,1701061178843.be5ef4f3dfb2c43b447798061e19f02f.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-11-27 04:59:39,637 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for be5ef4f3dfb2c43b447798061e19f02f 2023-11-27 04:59:39,637 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for be5ef4f3dfb2c43b447798061e19f02f 2023-11-27 04:59:39,639 INFO [StoreOpener-be5ef4f3dfb2c43b447798061e19f02f-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family q of region be5ef4f3dfb2c43b447798061e19f02f 2023-11-27 04:59:39,641 DEBUG [StoreOpener-be5ef4f3dfb2c43b447798061e19f02f-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41015/user/jenkins/test-data/88c861d8-82fa-b545-1769-68eb426e44d0/data/hbase/quota/be5ef4f3dfb2c43b447798061e19f02f/q 2023-11-27 04:59:39,641 DEBUG [StoreOpener-be5ef4f3dfb2c43b447798061e19f02f-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41015/user/jenkins/test-data/88c861d8-82fa-b545-1769-68eb426e44d0/data/hbase/quota/be5ef4f3dfb2c43b447798061e19f02f/q 2023-11-27 04:59:39,642 INFO [StoreOpener-be5ef4f3dfb2c43b447798061e19f02f-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:10, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region be5ef4f3dfb2c43b447798061e19f02f columnFamilyName q 2023-11-27 04:59:39,642 INFO [StoreOpener-be5ef4f3dfb2c43b447798061e19f02f-1] regionserver.HStore(310): Store=be5ef4f3dfb2c43b447798061e19f02f/q, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-11-27 04:59:39,643 INFO [StoreOpener-be5ef4f3dfb2c43b447798061e19f02f-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family u of region be5ef4f3dfb2c43b447798061e19f02f 2023-11-27 04:59:39,644 DEBUG 
[StoreOpener-be5ef4f3dfb2c43b447798061e19f02f-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41015/user/jenkins/test-data/88c861d8-82fa-b545-1769-68eb426e44d0/data/hbase/quota/be5ef4f3dfb2c43b447798061e19f02f/u 2023-11-27 04:59:39,644 DEBUG [StoreOpener-be5ef4f3dfb2c43b447798061e19f02f-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41015/user/jenkins/test-data/88c861d8-82fa-b545-1769-68eb426e44d0/data/hbase/quota/be5ef4f3dfb2c43b447798061e19f02f/u 2023-11-27 04:59:39,645 INFO [StoreOpener-be5ef4f3dfb2c43b447798061e19f02f-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:10, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region be5ef4f3dfb2c43b447798061e19f02f columnFamilyName u 2023-11-27 04:59:39,646 INFO [StoreOpener-be5ef4f3dfb2c43b447798061e19f02f-1] regionserver.HStore(310): Store=be5ef4f3dfb2c43b447798061e19f02f/u, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-11-27 04:59:39,648 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41015/user/jenkins/test-data/88c861d8-82fa-b545-1769-68eb426e44d0/data/hbase/quota/be5ef4f3dfb2c43b447798061e19f02f 2023-11-27 04:59:39,649 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41015/user/jenkins/test-data/88c861d8-82fa-b545-1769-68eb426e44d0/data/hbase/quota/be5ef4f3dfb2c43b447798061e19f02f 2023-11-27 04:59:39,651 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:quota descriptor;using region.getMemStoreFlushHeapSize/# of families (64.0 M)) instead. 
2023-11-27 04:59:39,654 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for be5ef4f3dfb2c43b447798061e19f02f 2023-11-27 04:59:39,657 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41015/user/jenkins/test-data/88c861d8-82fa-b545-1769-68eb426e44d0/data/hbase/quota/be5ef4f3dfb2c43b447798061e19f02f/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-11-27 04:59:39,658 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened be5ef4f3dfb2c43b447798061e19f02f; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=58801861, jitterRate=-0.12378399074077606}}}, FlushLargeStoresPolicy{flushSizeLowerBound=67108864} 2023-11-27 04:59:39,658 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for be5ef4f3dfb2c43b447798061e19f02f: 2023-11-27 04:59:39,659 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2339): Post open deploy tasks for hbase:quota,,1701061178843.be5ef4f3dfb2c43b447798061e19f02f., pid=11, masterSystemTime=1701061179631 2023-11-27 04:59:39,661 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2366): Finished post open deploy task for hbase:quota,,1701061178843.be5ef4f3dfb2c43b447798061e19f02f. 2023-11-27 04:59:39,661 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:quota,,1701061178843.be5ef4f3dfb2c43b447798061e19f02f. 2023-11-27 04:59:39,662 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=10 updating hbase:meta row=be5ef4f3dfb2c43b447798061e19f02f, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,41841,1701061176322 2023-11-27 04:59:39,662 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:quota,,1701061178843.be5ef4f3dfb2c43b447798061e19f02f.","families":{"info":[{"qualifier":"regioninfo","vlen":37,"tag":[],"timestamp":"1701061179662"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1701061179662"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1701061179662"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1701061179662"}]},"ts":"1701061179662"} 2023-11-27 04:59:39,668 INFO [PEWorker-1] procedure2.ProcedureExecutor(1825): Finished subprocedure pid=11, resume processing ppid=10 2023-11-27 04:59:39,668 INFO [PEWorker-1] procedure2.ProcedureExecutor(1409): Finished pid=11, ppid=10, state=SUCCESS; OpenRegionProcedure be5ef4f3dfb2c43b447798061e19f02f, server=jenkins-hbase4.apache.org,41841,1701061176322 in 187 msec 2023-11-27 04:59:39,671 INFO [PEWorker-2] procedure2.ProcedureExecutor(1825): Finished subprocedure pid=10, resume processing ppid=9 2023-11-27 04:59:39,671 INFO [PEWorker-2] procedure2.ProcedureExecutor(1409): Finished pid=10, ppid=9, state=SUCCESS; TransitRegionStateProcedure table=hbase:quota, region=be5ef4f3dfb2c43b447798061e19f02f, ASSIGN in 347 msec 2023-11-27 04:59:39,672 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-11-27 04:59:39,672 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put 
{"totalColumns":1,"row":"hbase:quota","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1701061179672"}]},"ts":"1701061179672"} 2023-11-27 04:59:39,674 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=hbase:quota, state=ENABLED in hbase:meta 2023-11-27 04:59:39,678 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_POST_OPERATION 2023-11-27 04:59:39,680 INFO [PEWorker-3] procedure2.ProcedureExecutor(1409): Finished pid=9, state=SUCCESS; CreateTableProcedure table=hbase:quota in 834 msec 2023-11-27 04:59:39,738 DEBUG [Listener at localhost/34689] ipc.RpcConnection(122): Using SIMPLE authentication for service=MasterService, sasl=false 2023-11-27 04:59:39,741 INFO [RS-EventLoopGroup-1-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:54212, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=MasterService 2023-11-27 04:59:39,766 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33323] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'TestQuotaAdmin0', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'cf', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-11-27 04:59:39,768 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33323] procedure2.ProcedureExecutor(1028): Stored pid=12, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=TestQuotaAdmin0 2023-11-27 04:59:39,770 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=TestQuotaAdmin0 execute state=CREATE_TABLE_PRE_OPERATION 2023-11-27 04:59:39,772 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=TestQuotaAdmin0 execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-11-27 04:59:39,774 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33323] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "TestQuotaAdmin0" procId is: 12 2023-11-27 04:59:39,774 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:41015/user/jenkins/test-data/88c861d8-82fa-b545-1769-68eb426e44d0/.tmp/data/default/TestQuotaAdmin0/44ac23936652c71f70e8746cf757ab6d 2023-11-27 04:59:39,775 DEBUG [HFileArchiver-3] backup.HFileArchiver(153): Directory hdfs://localhost:41015/user/jenkins/test-data/88c861d8-82fa-b545-1769-68eb426e44d0/.tmp/data/default/TestQuotaAdmin0/44ac23936652c71f70e8746cf757ab6d empty. 
2023-11-27 04:59:39,776 DEBUG [HFileArchiver-3] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:41015/user/jenkins/test-data/88c861d8-82fa-b545-1769-68eb426e44d0/.tmp/data/default/TestQuotaAdmin0/44ac23936652c71f70e8746cf757ab6d 2023-11-27 04:59:39,776 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived TestQuotaAdmin0 regions 2023-11-27 04:59:39,781 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33323] master.MasterRpcServices(1230): Checking to see if procedure is done pid=12 2023-11-27 04:59:39,793 DEBUG [PEWorker-4] util.FSTableDescriptors(570): Wrote into hdfs://localhost:41015/user/jenkins/test-data/88c861d8-82fa-b545-1769-68eb426e44d0/.tmp/data/default/TestQuotaAdmin0/.tabledesc/.tableinfo.0000000001 2023-11-27 04:59:39,795 INFO [RegionOpenAndInit-TestQuotaAdmin0-pool-0] regionserver.HRegion(7675): creating {ENCODED => 44ac23936652c71f70e8746cf757ab6d, NAME => 'TestQuotaAdmin0,,1701061179762.44ac23936652c71f70e8746cf757ab6d.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='TestQuotaAdmin0', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'cf', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:41015/user/jenkins/test-data/88c861d8-82fa-b545-1769-68eb426e44d0/.tmp 2023-11-27 04:59:39,809 DEBUG [RegionOpenAndInit-TestQuotaAdmin0-pool-0] regionserver.HRegion(866): Instantiated TestQuotaAdmin0,,1701061179762.44ac23936652c71f70e8746cf757ab6d.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-11-27 04:59:39,810 DEBUG [RegionOpenAndInit-TestQuotaAdmin0-pool-0] regionserver.HRegion(1604): Closing 44ac23936652c71f70e8746cf757ab6d, disabling compactions & flushes 2023-11-27 04:59:39,810 INFO [RegionOpenAndInit-TestQuotaAdmin0-pool-0] regionserver.HRegion(1626): Closing region TestQuotaAdmin0,,1701061179762.44ac23936652c71f70e8746cf757ab6d. 2023-11-27 04:59:39,810 DEBUG [RegionOpenAndInit-TestQuotaAdmin0-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on TestQuotaAdmin0,,1701061179762.44ac23936652c71f70e8746cf757ab6d. 2023-11-27 04:59:39,810 DEBUG [RegionOpenAndInit-TestQuotaAdmin0-pool-0] regionserver.HRegion(1714): Acquired close lock on TestQuotaAdmin0,,1701061179762.44ac23936652c71f70e8746cf757ab6d. after waiting 0 ms 2023-11-27 04:59:39,810 DEBUG [RegionOpenAndInit-TestQuotaAdmin0-pool-0] regionserver.HRegion(1724): Updates disabled for region TestQuotaAdmin0,,1701061179762.44ac23936652c71f70e8746cf757ab6d. 2023-11-27 04:59:39,810 INFO [RegionOpenAndInit-TestQuotaAdmin0-pool-0] regionserver.HRegion(1838): Closed TestQuotaAdmin0,,1701061179762.44ac23936652c71f70e8746cf757ab6d. 
2023-11-27 04:59:39,810 DEBUG [RegionOpenAndInit-TestQuotaAdmin0-pool-0] regionserver.HRegion(1558): Region close journal for 44ac23936652c71f70e8746cf757ab6d: 2023-11-27 04:59:39,814 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=TestQuotaAdmin0 execute state=CREATE_TABLE_ADD_TO_META 2023-11-27 04:59:39,815 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"TestQuotaAdmin0,,1701061179762.44ac23936652c71f70e8746cf757ab6d.","families":{"info":[{"qualifier":"regioninfo","vlen":49,"tag":[],"timestamp":"1701061179815"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1701061179815"}]},"ts":"1701061179815"} 2023-11-27 04:59:39,817 INFO [PEWorker-4] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-11-27 04:59:39,819 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=TestQuotaAdmin0 execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-11-27 04:59:39,819 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"TestQuotaAdmin0","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1701061179819"}]},"ts":"1701061179819"} 2023-11-27 04:59:39,821 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=TestQuotaAdmin0, state=ENABLING in hbase:meta 2023-11-27 04:59:39,826 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-11-27 04:59:39,827 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-11-27 04:59:39,827 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-11-27 04:59:39,827 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-11-27 04:59:39,827 INFO [PEWorker-4] procedure2.ProcedureExecutor(1680): Initialized subprocedures=[{pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=TestQuotaAdmin0, region=44ac23936652c71f70e8746cf757ab6d, ASSIGN}] 2023-11-27 04:59:39,829 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=TestQuotaAdmin0, region=44ac23936652c71f70e8746cf757ab6d, ASSIGN 2023-11-27 04:59:39,830 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=TestQuotaAdmin0, region=44ac23936652c71f70e8746cf757ab6d, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,41841,1701061176322; forceNewPlan=false, retain=false 2023-11-27 04:59:39,981 INFO [jenkins-hbase4:33323] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
2023-11-27 04:59:39,982 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=13 updating hbase:meta row=44ac23936652c71f70e8746cf757ab6d, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,41841,1701061176322 2023-11-27 04:59:39,982 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"TestQuotaAdmin0,,1701061179762.44ac23936652c71f70e8746cf757ab6d.","families":{"info":[{"qualifier":"regioninfo","vlen":49,"tag":[],"timestamp":"1701061179981"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1701061179981"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1701061179981"}]},"ts":"1701061179981"} 2023-11-27 04:59:39,985 INFO [PEWorker-1] procedure2.ProcedureExecutor(1680): Initialized subprocedures=[{pid=14, ppid=13, state=RUNNABLE; OpenRegionProcedure 44ac23936652c71f70e8746cf757ab6d, server=jenkins-hbase4.apache.org,41841,1701061176322}] 2023-11-27 04:59:40,038 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33323] master.MasterRpcServices(1230): Checking to see if procedure is done pid=12 2023-11-27 04:59:40,137 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(702): New admin connection to jenkins-hbase4.apache.org,41841,1701061176322 2023-11-27 04:59:40,143 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open TestQuotaAdmin0,,1701061179762.44ac23936652c71f70e8746cf757ab6d. 2023-11-27 04:59:40,144 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 44ac23936652c71f70e8746cf757ab6d, NAME => 'TestQuotaAdmin0,,1701061179762.44ac23936652c71f70e8746cf757ab6d.', STARTKEY => '', ENDKEY => ''} 2023-11-27 04:59:40,144 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table TestQuotaAdmin0 44ac23936652c71f70e8746cf757ab6d 2023-11-27 04:59:40,144 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated TestQuotaAdmin0,,1701061179762.44ac23936652c71f70e8746cf757ab6d.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-11-27 04:59:40,144 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 44ac23936652c71f70e8746cf757ab6d 2023-11-27 04:59:40,144 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 44ac23936652c71f70e8746cf757ab6d 2023-11-27 04:59:40,146 INFO [StoreOpener-44ac23936652c71f70e8746cf757ab6d-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family cf of region 44ac23936652c71f70e8746cf757ab6d 2023-11-27 04:59:40,148 DEBUG [StoreOpener-44ac23936652c71f70e8746cf757ab6d-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41015/user/jenkins/test-data/88c861d8-82fa-b545-1769-68eb426e44d0/data/default/TestQuotaAdmin0/44ac23936652c71f70e8746cf757ab6d/cf 2023-11-27 04:59:40,148 DEBUG [StoreOpener-44ac23936652c71f70e8746cf757ab6d-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41015/user/jenkins/test-data/88c861d8-82fa-b545-1769-68eb426e44d0/data/default/TestQuotaAdmin0/44ac23936652c71f70e8746cf757ab6d/cf 2023-11-27 04:59:40,149 INFO [StoreOpener-44ac23936652c71f70e8746cf757ab6d-1] 
compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:10, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 44ac23936652c71f70e8746cf757ab6d columnFamilyName cf 2023-11-27 04:59:40,150 INFO [StoreOpener-44ac23936652c71f70e8746cf757ab6d-1] regionserver.HStore(310): Store=44ac23936652c71f70e8746cf757ab6d/cf, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-11-27 04:59:40,152 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41015/user/jenkins/test-data/88c861d8-82fa-b545-1769-68eb426e44d0/data/default/TestQuotaAdmin0/44ac23936652c71f70e8746cf757ab6d 2023-11-27 04:59:40,153 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41015/user/jenkins/test-data/88c861d8-82fa-b545-1769-68eb426e44d0/data/default/TestQuotaAdmin0/44ac23936652c71f70e8746cf757ab6d 2023-11-27 04:59:40,157 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 44ac23936652c71f70e8746cf757ab6d 2023-11-27 04:59:40,161 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41015/user/jenkins/test-data/88c861d8-82fa-b545-1769-68eb426e44d0/data/default/TestQuotaAdmin0/44ac23936652c71f70e8746cf757ab6d/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-11-27 04:59:40,161 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 44ac23936652c71f70e8746cf757ab6d; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=70297762, jitterRate=0.0475182831287384}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-11-27 04:59:40,162 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 44ac23936652c71f70e8746cf757ab6d: 2023-11-27 04:59:40,163 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2339): Post open deploy tasks for TestQuotaAdmin0,,1701061179762.44ac23936652c71f70e8746cf757ab6d., pid=14, masterSystemTime=1701061180137 2023-11-27 04:59:40,165 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2366): Finished post open deploy task for TestQuotaAdmin0,,1701061179762.44ac23936652c71f70e8746cf757ab6d. 2023-11-27 04:59:40,165 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened TestQuotaAdmin0,,1701061179762.44ac23936652c71f70e8746cf757ab6d. 
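
[Aside, not part of the log] The desiredMaxFileSize=70297762 with jitterRate=0.0475182831287384 printed in the "Opened 44ac23936652c71f70e8746cf757ab6d" entry above is consistent with a base region max file size of 67108864 bytes (64 MiB) with the logged jitter applied, since 67108864 × (1 + 0.0475182831287384) ≈ 70297762; the 64 MiB figure is an inference from the log, not something it states. A minimal sketch of pinning that base value through the standard Configuration API (class name and usage assumed):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HConstants;

public final class RegionMaxFileSizeSketch {
  public static void main(String[] args) {
    // Sketch: set the base region max file size ("hbase.hregion.max.filesize") that
    // the ConstantSizeRegionSplitPolicy values in the log lines above jitter around.
    Configuration conf = HBaseConfiguration.create();
    conf.setLong(HConstants.HREGION_MAX_FILESIZE, 64L * 1024 * 1024);
  }
}
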
2023-11-27 04:59:40,166 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=13 updating hbase:meta row=44ac23936652c71f70e8746cf757ab6d, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,41841,1701061176322 2023-11-27 04:59:40,166 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"TestQuotaAdmin0,,1701061179762.44ac23936652c71f70e8746cf757ab6d.","families":{"info":[{"qualifier":"regioninfo","vlen":49,"tag":[],"timestamp":"1701061180166"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1701061180166"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1701061180166"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1701061180166"}]},"ts":"1701061180166"} 2023-11-27 04:59:40,172 INFO [PEWorker-3] procedure2.ProcedureExecutor(1825): Finished subprocedure pid=14, resume processing ppid=13 2023-11-27 04:59:40,172 INFO [PEWorker-3] procedure2.ProcedureExecutor(1409): Finished pid=14, ppid=13, state=SUCCESS; OpenRegionProcedure 44ac23936652c71f70e8746cf757ab6d, server=jenkins-hbase4.apache.org,41841,1701061176322 in 184 msec 2023-11-27 04:59:40,175 INFO [PEWorker-4] procedure2.ProcedureExecutor(1825): Finished subprocedure pid=13, resume processing ppid=12 2023-11-27 04:59:40,176 INFO [PEWorker-4] procedure2.ProcedureExecutor(1409): Finished pid=13, ppid=12, state=SUCCESS; TransitRegionStateProcedure table=TestQuotaAdmin0, region=44ac23936652c71f70e8746cf757ab6d, ASSIGN in 345 msec 2023-11-27 04:59:40,177 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=TestQuotaAdmin0 execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-11-27 04:59:40,177 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"TestQuotaAdmin0","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1701061180177"}]},"ts":"1701061180177"} 2023-11-27 04:59:40,179 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=TestQuotaAdmin0, state=ENABLED in hbase:meta 2023-11-27 04:59:40,182 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=TestQuotaAdmin0 execute state=CREATE_TABLE_POST_OPERATION 2023-11-27 04:59:40,184 INFO [PEWorker-5] procedure2.ProcedureExecutor(1409): Finished pid=12, state=SUCCESS; CreateTableProcedure table=TestQuotaAdmin0 in 416 msec 2023-11-27 04:59:40,540 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33323] master.MasterRpcServices(1230): Checking to see if procedure is done pid=12 2023-11-27 04:59:40,540 INFO [Listener at localhost/34689] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:TestQuotaAdmin0, procId: 12 completed 2023-11-27 04:59:40,541 DEBUG [Listener at localhost/34689] hbase.HBaseTestingUtility(3430): Waiting until all regions of table TestQuotaAdmin0 get assigned. Timeout = 60000ms 2023-11-27 04:59:40,541 INFO [Listener at localhost/34689] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-11-27 04:59:40,546 INFO [Listener at localhost/34689] hbase.HBaseTestingUtility(3484): All regions for table TestQuotaAdmin0 assigned to meta. Checking AM states. 
2023-11-27 04:59:40,546 INFO [Listener at localhost/34689] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-11-27 04:59:40,547 INFO [Listener at localhost/34689] hbase.HBaseTestingUtility(3504): All regions for table TestQuotaAdmin0 assigned. 2023-11-27 04:59:40,547 INFO [Listener at localhost/34689] hbase.Waiter(180): Waiting up to [30,000] milli-secs(wait.for.ratio=[1]) 2023-11-27 04:59:40,560 DEBUG [regionserver/jenkins-hbase4:0.Chore.1] ipc.RpcConnection(122): Using SIMPLE authentication for service=MasterService, sasl=false 2023-11-27 04:59:40,563 INFO [RS-EventLoopGroup-1-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:54222, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.1 (auth:SIMPLE), service=MasterService 2023-11-27 04:59:40,564 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33323] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'TestQuotaAdmin1', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'cf', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-11-27 04:59:40,565 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33323] procedure2.ProcedureExecutor(1028): Stored pid=15, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=TestQuotaAdmin1 2023-11-27 04:59:40,567 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=15, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=TestQuotaAdmin1 execute state=CREATE_TABLE_PRE_OPERATION 2023-11-27 04:59:40,568 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33323] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "TestQuotaAdmin1" procId is: 15 2023-11-27 04:59:40,569 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=15, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=TestQuotaAdmin1 execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-11-27 04:59:40,570 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33323] master.MasterRpcServices(1230): Checking to see if procedure is done pid=15 2023-11-27 04:59:40,577 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:41015/user/jenkins/test-data/88c861d8-82fa-b545-1769-68eb426e44d0/.tmp/data/default/TestQuotaAdmin1/2d48041eaba6bc404a22a735fb3000dd 2023-11-27 04:59:40,578 DEBUG [HFileArchiver-4] backup.HFileArchiver(153): Directory hdfs://localhost:41015/user/jenkins/test-data/88c861d8-82fa-b545-1769-68eb426e44d0/.tmp/data/default/TestQuotaAdmin1/2d48041eaba6bc404a22a735fb3000dd empty. 
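
[Aside, not part of the log] The sequence above — HMaster$4(2112) receiving the create request, CreateTableProcedure stepping through PRE_OPERATION → WRITE_FS_LAYOUT → ADD_TO_META → ASSIGN_REGIONS → UPDATE_DESC_CACHE → POST_OPERATION, and the Listener thread then waiting until all regions are assigned — corresponds to a client-side call shaped roughly like the sketch below. This is a reconstruction against the stock HBase 2.x client and HBaseTestingUtility APIs, not the actual test source; the class name, method name, and TEST_UTIL handling are assumptions.

import org.apache.hadoop.hbase.HBaseTestingUtility;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
import org.apache.hadoop.hbase.regionserver.BloomType;
import org.apache.hadoop.hbase.util.Bytes;

public final class CreateQuotaTableSketch {
  // Sketch: create a single-family table matching the descriptor printed in the log
  // (one 'cf' family, VERSIONS=1, BLOOMFILTER=NONE, everything else default) and then
  // wait for its regions to be assigned, as the Listener thread does above.
  static void createAndWait(HBaseTestingUtility testUtil, String tableName) throws Exception {
    TableName tn = TableName.valueOf(tableName);
    try (Admin admin = testUtil.getConnection().getAdmin()) {
      // Returns once the CreateTableProcedure (e.g. pid=15 for TestQuotaAdmin1) is done.
      admin.createTable(
          TableDescriptorBuilder.newBuilder(tn)
              .setColumnFamily(ColumnFamilyDescriptorBuilder.newBuilder(Bytes.toBytes("cf"))
                  .setMaxVersions(1)
                  .setBloomFilterType(BloomType.NONE)
                  .build())
              .build());
    }
    // Same wait the log reports: "Waiting until all regions of table ... get assigned. Timeout = 60000ms"
    testUtil.waitUntilAllRegionsAssigned(tn, 60000);
  }
}
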
2023-11-27 04:59:40,578 DEBUG [HFileArchiver-4] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:41015/user/jenkins/test-data/88c861d8-82fa-b545-1769-68eb426e44d0/.tmp/data/default/TestQuotaAdmin1/2d48041eaba6bc404a22a735fb3000dd 2023-11-27 04:59:40,579 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(328): Archived TestQuotaAdmin1 regions 2023-11-27 04:59:40,600 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost:41015/user/jenkins/test-data/88c861d8-82fa-b545-1769-68eb426e44d0/.tmp/data/default/TestQuotaAdmin1/.tabledesc/.tableinfo.0000000001 2023-11-27 04:59:40,602 INFO [RegionOpenAndInit-TestQuotaAdmin1-pool-0] regionserver.HRegion(7675): creating {ENCODED => 2d48041eaba6bc404a22a735fb3000dd, NAME => 'TestQuotaAdmin1,,1701061180563.2d48041eaba6bc404a22a735fb3000dd.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='TestQuotaAdmin1', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'cf', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:41015/user/jenkins/test-data/88c861d8-82fa-b545-1769-68eb426e44d0/.tmp 2023-11-27 04:59:40,618 DEBUG [RegionOpenAndInit-TestQuotaAdmin1-pool-0] regionserver.HRegion(866): Instantiated TestQuotaAdmin1,,1701061180563.2d48041eaba6bc404a22a735fb3000dd.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-11-27 04:59:40,618 DEBUG [RegionOpenAndInit-TestQuotaAdmin1-pool-0] regionserver.HRegion(1604): Closing 2d48041eaba6bc404a22a735fb3000dd, disabling compactions & flushes 2023-11-27 04:59:40,618 INFO [RegionOpenAndInit-TestQuotaAdmin1-pool-0] regionserver.HRegion(1626): Closing region TestQuotaAdmin1,,1701061180563.2d48041eaba6bc404a22a735fb3000dd. 2023-11-27 04:59:40,618 DEBUG [RegionOpenAndInit-TestQuotaAdmin1-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on TestQuotaAdmin1,,1701061180563.2d48041eaba6bc404a22a735fb3000dd. 2023-11-27 04:59:40,618 DEBUG [RegionOpenAndInit-TestQuotaAdmin1-pool-0] regionserver.HRegion(1714): Acquired close lock on TestQuotaAdmin1,,1701061180563.2d48041eaba6bc404a22a735fb3000dd. after waiting 0 ms 2023-11-27 04:59:40,619 DEBUG [RegionOpenAndInit-TestQuotaAdmin1-pool-0] regionserver.HRegion(1724): Updates disabled for region TestQuotaAdmin1,,1701061180563.2d48041eaba6bc404a22a735fb3000dd. 2023-11-27 04:59:40,619 INFO [RegionOpenAndInit-TestQuotaAdmin1-pool-0] regionserver.HRegion(1838): Closed TestQuotaAdmin1,,1701061180563.2d48041eaba6bc404a22a735fb3000dd. 
2023-11-27 04:59:40,619 DEBUG [RegionOpenAndInit-TestQuotaAdmin1-pool-0] regionserver.HRegion(1558): Region close journal for 2d48041eaba6bc404a22a735fb3000dd: 2023-11-27 04:59:40,623 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=15, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=TestQuotaAdmin1 execute state=CREATE_TABLE_ADD_TO_META 2023-11-27 04:59:40,625 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"TestQuotaAdmin1,,1701061180563.2d48041eaba6bc404a22a735fb3000dd.","families":{"info":[{"qualifier":"regioninfo","vlen":49,"tag":[],"timestamp":"1701061180624"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1701061180624"}]},"ts":"1701061180624"} 2023-11-27 04:59:40,627 INFO [PEWorker-1] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-11-27 04:59:40,628 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=15, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=TestQuotaAdmin1 execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-11-27 04:59:40,629 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"TestQuotaAdmin1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1701061180628"}]},"ts":"1701061180628"} 2023-11-27 04:59:40,630 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=TestQuotaAdmin1, state=ENABLING in hbase:meta 2023-11-27 04:59:40,636 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-11-27 04:59:40,637 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-11-27 04:59:40,637 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-11-27 04:59:40,637 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-11-27 04:59:40,637 INFO [PEWorker-1] procedure2.ProcedureExecutor(1680): Initialized subprocedures=[{pid=16, ppid=15, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=TestQuotaAdmin1, region=2d48041eaba6bc404a22a735fb3000dd, ASSIGN}] 2023-11-27 04:59:40,639 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=16, ppid=15, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=TestQuotaAdmin1, region=2d48041eaba6bc404a22a735fb3000dd, ASSIGN 2023-11-27 04:59:40,640 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=16, ppid=15, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=TestQuotaAdmin1, region=2d48041eaba6bc404a22a735fb3000dd, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,41841,1701061176322; forceNewPlan=false, retain=false 2023-11-27 04:59:40,791 INFO [jenkins-hbase4:33323] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
2023-11-27 04:59:40,792 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=16 updating hbase:meta row=2d48041eaba6bc404a22a735fb3000dd, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,41841,1701061176322 2023-11-27 04:59:40,792 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"TestQuotaAdmin1,,1701061180563.2d48041eaba6bc404a22a735fb3000dd.","families":{"info":[{"qualifier":"regioninfo","vlen":49,"tag":[],"timestamp":"1701061180792"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1701061180792"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1701061180792"}]},"ts":"1701061180792"} 2023-11-27 04:59:40,795 INFO [PEWorker-3] procedure2.ProcedureExecutor(1680): Initialized subprocedures=[{pid=17, ppid=16, state=RUNNABLE; OpenRegionProcedure 2d48041eaba6bc404a22a735fb3000dd, server=jenkins-hbase4.apache.org,41841,1701061176322}] 2023-11-27 04:59:40,821 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33323] master.MasterRpcServices(1230): Checking to see if procedure is done pid=15 2023-11-27 04:59:40,947 DEBUG [RSProcedureDispatcher-pool-1] master.ServerManager(702): New admin connection to jenkins-hbase4.apache.org,41841,1701061176322 2023-11-27 04:59:40,953 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open TestQuotaAdmin1,,1701061180563.2d48041eaba6bc404a22a735fb3000dd. 2023-11-27 04:59:40,953 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 2d48041eaba6bc404a22a735fb3000dd, NAME => 'TestQuotaAdmin1,,1701061180563.2d48041eaba6bc404a22a735fb3000dd.', STARTKEY => '', ENDKEY => ''} 2023-11-27 04:59:40,953 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table TestQuotaAdmin1 2d48041eaba6bc404a22a735fb3000dd 2023-11-27 04:59:40,953 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated TestQuotaAdmin1,,1701061180563.2d48041eaba6bc404a22a735fb3000dd.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-11-27 04:59:40,953 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 2d48041eaba6bc404a22a735fb3000dd 2023-11-27 04:59:40,953 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 2d48041eaba6bc404a22a735fb3000dd 2023-11-27 04:59:40,955 INFO [StoreOpener-2d48041eaba6bc404a22a735fb3000dd-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family cf of region 2d48041eaba6bc404a22a735fb3000dd 2023-11-27 04:59:40,957 DEBUG [StoreOpener-2d48041eaba6bc404a22a735fb3000dd-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41015/user/jenkins/test-data/88c861d8-82fa-b545-1769-68eb426e44d0/data/default/TestQuotaAdmin1/2d48041eaba6bc404a22a735fb3000dd/cf 2023-11-27 04:59:40,957 DEBUG [StoreOpener-2d48041eaba6bc404a22a735fb3000dd-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41015/user/jenkins/test-data/88c861d8-82fa-b545-1769-68eb426e44d0/data/default/TestQuotaAdmin1/2d48041eaba6bc404a22a735fb3000dd/cf 2023-11-27 04:59:40,958 INFO [StoreOpener-2d48041eaba6bc404a22a735fb3000dd-1] 
compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:10, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 2d48041eaba6bc404a22a735fb3000dd columnFamilyName cf 2023-11-27 04:59:40,959 INFO [StoreOpener-2d48041eaba6bc404a22a735fb3000dd-1] regionserver.HStore(310): Store=2d48041eaba6bc404a22a735fb3000dd/cf, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-11-27 04:59:40,961 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41015/user/jenkins/test-data/88c861d8-82fa-b545-1769-68eb426e44d0/data/default/TestQuotaAdmin1/2d48041eaba6bc404a22a735fb3000dd 2023-11-27 04:59:40,963 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41015/user/jenkins/test-data/88c861d8-82fa-b545-1769-68eb426e44d0/data/default/TestQuotaAdmin1/2d48041eaba6bc404a22a735fb3000dd 2023-11-27 04:59:40,968 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 2d48041eaba6bc404a22a735fb3000dd 2023-11-27 04:59:40,973 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41015/user/jenkins/test-data/88c861d8-82fa-b545-1769-68eb426e44d0/data/default/TestQuotaAdmin1/2d48041eaba6bc404a22a735fb3000dd/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-11-27 04:59:40,974 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 2d48041eaba6bc404a22a735fb3000dd; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=69870240, jitterRate=0.041147708892822266}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-11-27 04:59:40,974 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 2d48041eaba6bc404a22a735fb3000dd: 2023-11-27 04:59:40,975 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2339): Post open deploy tasks for TestQuotaAdmin1,,1701061180563.2d48041eaba6bc404a22a735fb3000dd., pid=17, masterSystemTime=1701061180947 2023-11-27 04:59:40,977 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2366): Finished post open deploy task for TestQuotaAdmin1,,1701061180563.2d48041eaba6bc404a22a735fb3000dd. 2023-11-27 04:59:40,977 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened TestQuotaAdmin1,,1701061180563.2d48041eaba6bc404a22a735fb3000dd. 
2023-11-27 04:59:40,978 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=16 updating hbase:meta row=2d48041eaba6bc404a22a735fb3000dd, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,41841,1701061176322 2023-11-27 04:59:40,979 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"TestQuotaAdmin1,,1701061180563.2d48041eaba6bc404a22a735fb3000dd.","families":{"info":[{"qualifier":"regioninfo","vlen":49,"tag":[],"timestamp":"1701061180978"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1701061180978"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1701061180978"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1701061180978"}]},"ts":"1701061180978"} 2023-11-27 04:59:40,984 INFO [PEWorker-5] procedure2.ProcedureExecutor(1825): Finished subprocedure pid=17, resume processing ppid=16 2023-11-27 04:59:40,984 INFO [PEWorker-5] procedure2.ProcedureExecutor(1409): Finished pid=17, ppid=16, state=SUCCESS; OpenRegionProcedure 2d48041eaba6bc404a22a735fb3000dd, server=jenkins-hbase4.apache.org,41841,1701061176322 in 186 msec 2023-11-27 04:59:40,987 INFO [PEWorker-1] procedure2.ProcedureExecutor(1825): Finished subprocedure pid=16, resume processing ppid=15 2023-11-27 04:59:40,987 INFO [PEWorker-1] procedure2.ProcedureExecutor(1409): Finished pid=16, ppid=15, state=SUCCESS; TransitRegionStateProcedure table=TestQuotaAdmin1, region=2d48041eaba6bc404a22a735fb3000dd, ASSIGN in 347 msec 2023-11-27 04:59:40,988 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=15, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=TestQuotaAdmin1 execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-11-27 04:59:40,989 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"TestQuotaAdmin1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1701061180988"}]},"ts":"1701061180988"} 2023-11-27 04:59:40,990 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=TestQuotaAdmin1, state=ENABLED in hbase:meta 2023-11-27 04:59:40,994 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=15, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=TestQuotaAdmin1 execute state=CREATE_TABLE_POST_OPERATION 2023-11-27 04:59:40,996 INFO [PEWorker-2] procedure2.ProcedureExecutor(1409): Finished pid=15, state=SUCCESS; CreateTableProcedure table=TestQuotaAdmin1 in 430 msec 2023-11-27 04:59:41,323 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33323] master.MasterRpcServices(1230): Checking to see if procedure is done pid=15 2023-11-27 04:59:41,323 INFO [Listener at localhost/34689] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:TestQuotaAdmin1, procId: 15 completed 2023-11-27 04:59:41,323 DEBUG [Listener at localhost/34689] hbase.HBaseTestingUtility(3430): Waiting until all regions of table TestQuotaAdmin1 get assigned. Timeout = 60000ms 2023-11-27 04:59:41,324 INFO [Listener at localhost/34689] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-11-27 04:59:41,327 INFO [Listener at localhost/34689] hbase.HBaseTestingUtility(3484): All regions for table TestQuotaAdmin1 assigned to meta. Checking AM states. 
2023-11-27 04:59:41,327 INFO [Listener at localhost/34689] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-11-27 04:59:41,327 INFO [Listener at localhost/34689] hbase.HBaseTestingUtility(3504): All regions for table TestQuotaAdmin1 assigned. 2023-11-27 04:59:41,327 INFO [Listener at localhost/34689] hbase.Waiter(180): Waiting up to [30,000] milli-secs(wait.for.ratio=[1]) 2023-11-27 04:59:41,339 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33323] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'TestQuotaAdmin2', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'cf', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-11-27 04:59:41,340 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33323] procedure2.ProcedureExecutor(1028): Stored pid=18, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=TestQuotaAdmin2 2023-11-27 04:59:41,342 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=18, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=TestQuotaAdmin2 execute state=CREATE_TABLE_PRE_OPERATION 2023-11-27 04:59:41,342 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33323] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "TestQuotaAdmin2" procId is: 18 2023-11-27 04:59:41,343 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=18, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=TestQuotaAdmin2 execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-11-27 04:59:41,343 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33323] master.MasterRpcServices(1230): Checking to see if procedure is done pid=18 2023-11-27 04:59:41,346 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:41015/user/jenkins/test-data/88c861d8-82fa-b545-1769-68eb426e44d0/.tmp/data/default/TestQuotaAdmin2/84bdb1fdf146da2514bc4d0d11f47654 2023-11-27 04:59:41,347 DEBUG [HFileArchiver-5] backup.HFileArchiver(153): Directory hdfs://localhost:41015/user/jenkins/test-data/88c861d8-82fa-b545-1769-68eb426e44d0/.tmp/data/default/TestQuotaAdmin2/84bdb1fdf146da2514bc4d0d11f47654 empty. 
2023-11-27 04:59:41,347 DEBUG [HFileArchiver-5] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:41015/user/jenkins/test-data/88c861d8-82fa-b545-1769-68eb426e44d0/.tmp/data/default/TestQuotaAdmin2/84bdb1fdf146da2514bc4d0d11f47654 2023-11-27 04:59:41,347 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived TestQuotaAdmin2 regions 2023-11-27 04:59:41,595 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33323] master.MasterRpcServices(1230): Checking to see if procedure is done pid=18 2023-11-27 04:59:41,769 DEBUG [PEWorker-3] util.FSTableDescriptors(570): Wrote into hdfs://localhost:41015/user/jenkins/test-data/88c861d8-82fa-b545-1769-68eb426e44d0/.tmp/data/default/TestQuotaAdmin2/.tabledesc/.tableinfo.0000000001 2023-11-27 04:59:41,771 INFO [RegionOpenAndInit-TestQuotaAdmin2-pool-0] regionserver.HRegion(7675): creating {ENCODED => 84bdb1fdf146da2514bc4d0d11f47654, NAME => 'TestQuotaAdmin2,,1701061181338.84bdb1fdf146da2514bc4d0d11f47654.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='TestQuotaAdmin2', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'cf', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:41015/user/jenkins/test-data/88c861d8-82fa-b545-1769-68eb426e44d0/.tmp 2023-11-27 04:59:41,784 DEBUG [RegionOpenAndInit-TestQuotaAdmin2-pool-0] regionserver.HRegion(866): Instantiated TestQuotaAdmin2,,1701061181338.84bdb1fdf146da2514bc4d0d11f47654.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-11-27 04:59:41,784 DEBUG [RegionOpenAndInit-TestQuotaAdmin2-pool-0] regionserver.HRegion(1604): Closing 84bdb1fdf146da2514bc4d0d11f47654, disabling compactions & flushes 2023-11-27 04:59:41,784 INFO [RegionOpenAndInit-TestQuotaAdmin2-pool-0] regionserver.HRegion(1626): Closing region TestQuotaAdmin2,,1701061181338.84bdb1fdf146da2514bc4d0d11f47654. 2023-11-27 04:59:41,784 DEBUG [RegionOpenAndInit-TestQuotaAdmin2-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on TestQuotaAdmin2,,1701061181338.84bdb1fdf146da2514bc4d0d11f47654. 2023-11-27 04:59:41,784 DEBUG [RegionOpenAndInit-TestQuotaAdmin2-pool-0] regionserver.HRegion(1714): Acquired close lock on TestQuotaAdmin2,,1701061181338.84bdb1fdf146da2514bc4d0d11f47654. after waiting 0 ms 2023-11-27 04:59:41,784 DEBUG [RegionOpenAndInit-TestQuotaAdmin2-pool-0] regionserver.HRegion(1724): Updates disabled for region TestQuotaAdmin2,,1701061181338.84bdb1fdf146da2514bc4d0d11f47654. 2023-11-27 04:59:41,785 INFO [RegionOpenAndInit-TestQuotaAdmin2-pool-0] regionserver.HRegion(1838): Closed TestQuotaAdmin2,,1701061181338.84bdb1fdf146da2514bc4d0d11f47654. 
2023-11-27 04:59:41,785 DEBUG [RegionOpenAndInit-TestQuotaAdmin2-pool-0] regionserver.HRegion(1558): Region close journal for 84bdb1fdf146da2514bc4d0d11f47654: 2023-11-27 04:59:41,788 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=18, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=TestQuotaAdmin2 execute state=CREATE_TABLE_ADD_TO_META 2023-11-27 04:59:41,790 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"TestQuotaAdmin2,,1701061181338.84bdb1fdf146da2514bc4d0d11f47654.","families":{"info":[{"qualifier":"regioninfo","vlen":49,"tag":[],"timestamp":"1701061181790"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1701061181790"}]},"ts":"1701061181790"} 2023-11-27 04:59:41,792 INFO [PEWorker-3] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-11-27 04:59:41,793 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=18, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=TestQuotaAdmin2 execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-11-27 04:59:41,793 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"TestQuotaAdmin2","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1701061181793"}]},"ts":"1701061181793"} 2023-11-27 04:59:41,795 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=TestQuotaAdmin2, state=ENABLING in hbase:meta 2023-11-27 04:59:41,799 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-11-27 04:59:41,800 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-11-27 04:59:41,800 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-11-27 04:59:41,800 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-11-27 04:59:41,800 INFO [PEWorker-3] procedure2.ProcedureExecutor(1680): Initialized subprocedures=[{pid=19, ppid=18, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=TestQuotaAdmin2, region=84bdb1fdf146da2514bc4d0d11f47654, ASSIGN}] 2023-11-27 04:59:41,802 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=19, ppid=18, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=TestQuotaAdmin2, region=84bdb1fdf146da2514bc4d0d11f47654, ASSIGN 2023-11-27 04:59:41,803 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=19, ppid=18, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=TestQuotaAdmin2, region=84bdb1fdf146da2514bc4d0d11f47654, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,41841,1701061176322; forceNewPlan=false, retain=false 2023-11-27 04:59:41,953 INFO [jenkins-hbase4:33323] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
2023-11-27 04:59:41,954 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=19 updating hbase:meta row=84bdb1fdf146da2514bc4d0d11f47654, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,41841,1701061176322 2023-11-27 04:59:41,954 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"TestQuotaAdmin2,,1701061181338.84bdb1fdf146da2514bc4d0d11f47654.","families":{"info":[{"qualifier":"regioninfo","vlen":49,"tag":[],"timestamp":"1701061181954"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1701061181954"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1701061181954"}]},"ts":"1701061181954"} 2023-11-27 04:59:41,957 INFO [PEWorker-5] procedure2.ProcedureExecutor(1680): Initialized subprocedures=[{pid=20, ppid=19, state=RUNNABLE; OpenRegionProcedure 84bdb1fdf146da2514bc4d0d11f47654, server=jenkins-hbase4.apache.org,41841,1701061176322}] 2023-11-27 04:59:42,096 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33323] master.MasterRpcServices(1230): Checking to see if procedure is done pid=18 2023-11-27 04:59:42,109 DEBUG [RSProcedureDispatcher-pool-2] master.ServerManager(702): New admin connection to jenkins-hbase4.apache.org,41841,1701061176322 2023-11-27 04:59:42,113 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open TestQuotaAdmin2,,1701061181338.84bdb1fdf146da2514bc4d0d11f47654. 2023-11-27 04:59:42,114 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 84bdb1fdf146da2514bc4d0d11f47654, NAME => 'TestQuotaAdmin2,,1701061181338.84bdb1fdf146da2514bc4d0d11f47654.', STARTKEY => '', ENDKEY => ''} 2023-11-27 04:59:42,114 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table TestQuotaAdmin2 84bdb1fdf146da2514bc4d0d11f47654 2023-11-27 04:59:42,114 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated TestQuotaAdmin2,,1701061181338.84bdb1fdf146da2514bc4d0d11f47654.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-11-27 04:59:42,114 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 84bdb1fdf146da2514bc4d0d11f47654 2023-11-27 04:59:42,114 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 84bdb1fdf146da2514bc4d0d11f47654 2023-11-27 04:59:42,116 INFO [StoreOpener-84bdb1fdf146da2514bc4d0d11f47654-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family cf of region 84bdb1fdf146da2514bc4d0d11f47654 2023-11-27 04:59:42,118 DEBUG [StoreOpener-84bdb1fdf146da2514bc4d0d11f47654-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41015/user/jenkins/test-data/88c861d8-82fa-b545-1769-68eb426e44d0/data/default/TestQuotaAdmin2/84bdb1fdf146da2514bc4d0d11f47654/cf 2023-11-27 04:59:42,119 DEBUG [StoreOpener-84bdb1fdf146da2514bc4d0d11f47654-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41015/user/jenkins/test-data/88c861d8-82fa-b545-1769-68eb426e44d0/data/default/TestQuotaAdmin2/84bdb1fdf146da2514bc4d0d11f47654/cf 2023-11-27 04:59:42,119 INFO [StoreOpener-84bdb1fdf146da2514bc4d0d11f47654-1] 
compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:10, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 84bdb1fdf146da2514bc4d0d11f47654 columnFamilyName cf 2023-11-27 04:59:42,120 INFO [StoreOpener-84bdb1fdf146da2514bc4d0d11f47654-1] regionserver.HStore(310): Store=84bdb1fdf146da2514bc4d0d11f47654/cf, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-11-27 04:59:42,122 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41015/user/jenkins/test-data/88c861d8-82fa-b545-1769-68eb426e44d0/data/default/TestQuotaAdmin2/84bdb1fdf146da2514bc4d0d11f47654 2023-11-27 04:59:42,123 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41015/user/jenkins/test-data/88c861d8-82fa-b545-1769-68eb426e44d0/data/default/TestQuotaAdmin2/84bdb1fdf146da2514bc4d0d11f47654 2023-11-27 04:59:42,128 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 84bdb1fdf146da2514bc4d0d11f47654 2023-11-27 04:59:42,130 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41015/user/jenkins/test-data/88c861d8-82fa-b545-1769-68eb426e44d0/data/default/TestQuotaAdmin2/84bdb1fdf146da2514bc4d0d11f47654/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-11-27 04:59:42,131 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 84bdb1fdf146da2514bc4d0d11f47654; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=68637503, jitterRate=0.022778496146202087}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-11-27 04:59:42,131 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 84bdb1fdf146da2514bc4d0d11f47654: 2023-11-27 04:59:42,132 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2339): Post open deploy tasks for TestQuotaAdmin2,,1701061181338.84bdb1fdf146da2514bc4d0d11f47654., pid=20, masterSystemTime=1701061182109 2023-11-27 04:59:42,135 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2366): Finished post open deploy task for TestQuotaAdmin2,,1701061181338.84bdb1fdf146da2514bc4d0d11f47654. 2023-11-27 04:59:42,135 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened TestQuotaAdmin2,,1701061181338.84bdb1fdf146da2514bc4d0d11f47654. 
2023-11-27 04:59:42,135 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=19 updating hbase:meta row=84bdb1fdf146da2514bc4d0d11f47654, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,41841,1701061176322 2023-11-27 04:59:42,136 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"TestQuotaAdmin2,,1701061181338.84bdb1fdf146da2514bc4d0d11f47654.","families":{"info":[{"qualifier":"regioninfo","vlen":49,"tag":[],"timestamp":"1701061182135"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1701061182135"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1701061182135"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1701061182135"}]},"ts":"1701061182135"} 2023-11-27 04:59:42,140 INFO [PEWorker-2] procedure2.ProcedureExecutor(1825): Finished subprocedure pid=20, resume processing ppid=19 2023-11-27 04:59:42,140 INFO [PEWorker-2] procedure2.ProcedureExecutor(1409): Finished pid=20, ppid=19, state=SUCCESS; OpenRegionProcedure 84bdb1fdf146da2514bc4d0d11f47654, server=jenkins-hbase4.apache.org,41841,1701061176322 in 181 msec 2023-11-27 04:59:42,144 INFO [PEWorker-3] procedure2.ProcedureExecutor(1825): Finished subprocedure pid=19, resume processing ppid=18 2023-11-27 04:59:42,144 INFO [PEWorker-3] procedure2.ProcedureExecutor(1409): Finished pid=19, ppid=18, state=SUCCESS; TransitRegionStateProcedure table=TestQuotaAdmin2, region=84bdb1fdf146da2514bc4d0d11f47654, ASSIGN in 340 msec 2023-11-27 04:59:42,146 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=18, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=TestQuotaAdmin2 execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-11-27 04:59:42,146 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"TestQuotaAdmin2","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1701061182146"}]},"ts":"1701061182146"} 2023-11-27 04:59:42,147 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=TestQuotaAdmin2, state=ENABLED in hbase:meta 2023-11-27 04:59:42,151 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=18, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=TestQuotaAdmin2 execute state=CREATE_TABLE_POST_OPERATION 2023-11-27 04:59:42,153 INFO [PEWorker-4] procedure2.ProcedureExecutor(1409): Finished pid=18, state=SUCCESS; CreateTableProcedure table=TestQuotaAdmin2 in 812 msec 2023-11-27 04:59:42,848 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33323] master.MasterRpcServices(1230): Checking to see if procedure is done pid=18 2023-11-27 04:59:42,848 INFO [Listener at localhost/34689] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:TestQuotaAdmin2, procId: 18 completed 2023-11-27 04:59:42,848 DEBUG [Listener at localhost/34689] hbase.HBaseTestingUtility(3430): Waiting until all regions of table TestQuotaAdmin2 get assigned. Timeout = 60000ms 2023-11-27 04:59:42,849 INFO [Listener at localhost/34689] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-11-27 04:59:42,852 INFO [Listener at localhost/34689] hbase.HBaseTestingUtility(3484): All regions for table TestQuotaAdmin2 assigned to meta. Checking AM states. 
2023-11-27 04:59:42,852 INFO [Listener at localhost/34689] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-11-27 04:59:42,852 INFO [Listener at localhost/34689] hbase.HBaseTestingUtility(3504): All regions for table TestQuotaAdmin2 assigned. 2023-11-27 04:59:42,852 INFO [Listener at localhost/34689] hbase.Waiter(180): Waiting up to [30,000] milli-secs(wait.for.ratio=[1]) 2023-11-27 04:59:42,866 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33323] master.HMaster$15(3014): Client=jenkins//172.31.14.131 creating {NAME => 'TestNs'} 2023-11-27 04:59:42,868 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33323] procedure2.ProcedureExecutor(1028): Stored pid=21, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=TestNs 2023-11-27 04:59:42,874 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33323] master.MasterRpcServices(1230): Checking to see if procedure is done pid=21 2023-11-27 04:59:42,881 DEBUG [Listener at localhost/34689-EventThread] zookeeper.ZKWatcher(600): master:33323-0x1002d35880e0000, quorum=127.0.0.1:50029, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-11-27 04:59:42,887 INFO [PEWorker-1] procedure2.ProcedureExecutor(1409): Finished pid=21, state=SUCCESS; CreateNamespaceProcedure, namespace=TestNs in 18 msec 2023-11-27 04:59:43,126 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33323] master.MasterRpcServices(1230): Checking to see if procedure is done pid=21 2023-11-27 04:59:43,128 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33323] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'TestNs:TestTable', {NAME => 'cf', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-11-27 04:59:43,129 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33323] procedure2.ProcedureExecutor(1028): Stored pid=22, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=TestNs:TestTable 2023-11-27 04:59:43,131 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=22, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=TestNs:TestTable execute state=CREATE_TABLE_PRE_OPERATION 2023-11-27 04:59:43,131 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33323] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "TestNs" qualifier: "TestTable" procId is: 22 2023-11-27 04:59:43,132 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=22, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=TestNs:TestTable execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-11-27 04:59:43,133 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33323] master.MasterRpcServices(1230): Checking to see if procedure is done pid=22 2023-11-27 04:59:43,135 DEBUG [HFileArchiver-6] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:41015/user/jenkins/test-data/88c861d8-82fa-b545-1769-68eb426e44d0/.tmp/data/TestNs/TestTable/af1d13366c8d51157b132094b9c56138 2023-11-27 04:59:43,135 DEBUG [HFileArchiver-7] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:41015/user/jenkins/test-data/88c861d8-82fa-b545-1769-68eb426e44d0/.tmp/data/TestNs/TestTable/7197a56a05fdba581e9677273ff1da17 
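
[Aside, not part of the log] pid=21 and pid=22 above are the CreateNamespaceProcedure for 'TestNs' and the CreateTableProcedure for the two-region table 'TestNs:TestTable' (split at row key '1', family 'cf' with a ROW bloom filter). A hedged sketch of the equivalent Admin calls follows; again a reconstruction against the standard HBase 2.x client API with assumed names, not the actual test source.

import org.apache.hadoop.hbase.NamespaceDescriptor;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
import org.apache.hadoop.hbase.regionserver.BloomType;
import org.apache.hadoop.hbase.util.Bytes;

public final class CreateNamespacedTableSketch {
  // Sketch: create the 'TestNs' namespace and a pre-split 'TestNs:TestTable' like the one
  // in the log, i.e. two regions covering ['', '1') and ['1', '').
  static void create(Admin admin) throws Exception {
    admin.createNamespace(NamespaceDescriptor.create("TestNs").build());
    admin.createTable(
        TableDescriptorBuilder.newBuilder(TableName.valueOf("TestNs", "TestTable"))
            .setColumnFamily(ColumnFamilyDescriptorBuilder.newBuilder(Bytes.toBytes("cf"))
                .setMaxVersions(1)
                .setBloomFilterType(BloomType.ROW)
                .build())
            .build(),
        new byte[][] { Bytes.toBytes("1") }); // single split key -> the two regions seen above
  }
}
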
2023-11-27 04:59:43,136 DEBUG [HFileArchiver-6] backup.HFileArchiver(153): Directory hdfs://localhost:41015/user/jenkins/test-data/88c861d8-82fa-b545-1769-68eb426e44d0/.tmp/data/TestNs/TestTable/af1d13366c8d51157b132094b9c56138 empty. 2023-11-27 04:59:43,136 DEBUG [HFileArchiver-7] backup.HFileArchiver(153): Directory hdfs://localhost:41015/user/jenkins/test-data/88c861d8-82fa-b545-1769-68eb426e44d0/.tmp/data/TestNs/TestTable/7197a56a05fdba581e9677273ff1da17 empty. 2023-11-27 04:59:43,137 DEBUG [HFileArchiver-6] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:41015/user/jenkins/test-data/88c861d8-82fa-b545-1769-68eb426e44d0/.tmp/data/TestNs/TestTable/af1d13366c8d51157b132094b9c56138 2023-11-27 04:59:43,137 DEBUG [HFileArchiver-7] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:41015/user/jenkins/test-data/88c861d8-82fa-b545-1769-68eb426e44d0/.tmp/data/TestNs/TestTable/7197a56a05fdba581e9677273ff1da17 2023-11-27 04:59:43,137 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(328): Archived TestNs:TestTable regions 2023-11-27 04:59:43,153 DEBUG [PEWorker-5] util.FSTableDescriptors(570): Wrote into hdfs://localhost:41015/user/jenkins/test-data/88c861d8-82fa-b545-1769-68eb426e44d0/.tmp/data/TestNs/TestTable/.tabledesc/.tableinfo.0000000001 2023-11-27 04:59:43,155 INFO [RegionOpenAndInit-TestNs:TestTable-pool-0] regionserver.HRegion(7675): creating {ENCODED => af1d13366c8d51157b132094b9c56138, NAME => 'TestNs:TestTable,,1701061183127.af1d13366c8d51157b132094b9c56138.', STARTKEY => '', ENDKEY => '1'}, tableDescriptor='TestNs:TestTable', {NAME => 'cf', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:41015/user/jenkins/test-data/88c861d8-82fa-b545-1769-68eb426e44d0/.tmp 2023-11-27 04:59:43,155 INFO [RegionOpenAndInit-TestNs:TestTable-pool-1] regionserver.HRegion(7675): creating {ENCODED => 7197a56a05fdba581e9677273ff1da17, NAME => 'TestNs:TestTable,1,1701061183127.7197a56a05fdba581e9677273ff1da17.', STARTKEY => '1', ENDKEY => ''}, tableDescriptor='TestNs:TestTable', {NAME => 'cf', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:41015/user/jenkins/test-data/88c861d8-82fa-b545-1769-68eb426e44d0/.tmp 2023-11-27 04:59:43,174 DEBUG [RegionOpenAndInit-TestNs:TestTable-pool-1] regionserver.HRegion(866): Instantiated TestNs:TestTable,1,1701061183127.7197a56a05fdba581e9677273ff1da17.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-11-27 04:59:43,174 DEBUG [RegionOpenAndInit-TestNs:TestTable-pool-1] regionserver.HRegion(1604): Closing 7197a56a05fdba581e9677273ff1da17, disabling compactions & flushes 2023-11-27 04:59:43,175 INFO [RegionOpenAndInit-TestNs:TestTable-pool-1] regionserver.HRegion(1626): Closing region TestNs:TestTable,1,1701061183127.7197a56a05fdba581e9677273ff1da17. 2023-11-27 04:59:43,175 DEBUG [RegionOpenAndInit-TestNs:TestTable-pool-1] regionserver.HRegion(1647): Waiting without time limit for close lock on TestNs:TestTable,1,1701061183127.7197a56a05fdba581e9677273ff1da17. 
2023-11-27 04:59:43,175 DEBUG [RegionOpenAndInit-TestNs:TestTable-pool-1] regionserver.HRegion(1714): Acquired close lock on TestNs:TestTable,1,1701061183127.7197a56a05fdba581e9677273ff1da17. after waiting 0 ms 2023-11-27 04:59:43,175 DEBUG [RegionOpenAndInit-TestNs:TestTable-pool-1] regionserver.HRegion(1724): Updates disabled for region TestNs:TestTable,1,1701061183127.7197a56a05fdba581e9677273ff1da17. 2023-11-27 04:59:43,175 INFO [RegionOpenAndInit-TestNs:TestTable-pool-1] regionserver.HRegion(1838): Closed TestNs:TestTable,1,1701061183127.7197a56a05fdba581e9677273ff1da17. 2023-11-27 04:59:43,175 DEBUG [RegionOpenAndInit-TestNs:TestTable-pool-1] regionserver.HRegion(1558): Region close journal for 7197a56a05fdba581e9677273ff1da17: 2023-11-27 04:59:43,385 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33323] master.MasterRpcServices(1230): Checking to see if procedure is done pid=22 2023-11-27 04:59:43,452 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-hbase.properties,hadoop-metrics2.properties 2023-11-27 04:59:43,545 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'TestQuotaAdmin0' 2023-11-27 04:59:43,546 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:meta' 2023-11-27 04:59:43,547 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:namespace' 2023-11-27 04:59:43,547 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'TestQuotaAdmin2' 2023-11-27 04:59:43,548 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:quota' 2023-11-27 04:59:43,549 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'TestQuotaAdmin1' 2023-11-27 04:59:43,575 DEBUG [RegionOpenAndInit-TestNs:TestTable-pool-0] regionserver.HRegion(866): Instantiated TestNs:TestTable,,1701061183127.af1d13366c8d51157b132094b9c56138.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-11-27 04:59:43,576 DEBUG [RegionOpenAndInit-TestNs:TestTable-pool-0] regionserver.HRegion(1604): Closing af1d13366c8d51157b132094b9c56138, disabling compactions & flushes 2023-11-27 04:59:43,576 INFO [RegionOpenAndInit-TestNs:TestTable-pool-0] regionserver.HRegion(1626): Closing region TestNs:TestTable,,1701061183127.af1d13366c8d51157b132094b9c56138. 2023-11-27 04:59:43,576 DEBUG [RegionOpenAndInit-TestNs:TestTable-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on TestNs:TestTable,,1701061183127.af1d13366c8d51157b132094b9c56138. 2023-11-27 04:59:43,576 DEBUG [RegionOpenAndInit-TestNs:TestTable-pool-0] regionserver.HRegion(1714): Acquired close lock on TestNs:TestTable,,1701061183127.af1d13366c8d51157b132094b9c56138. after waiting 0 ms 2023-11-27 04:59:43,576 DEBUG [RegionOpenAndInit-TestNs:TestTable-pool-0] regionserver.HRegion(1724): Updates disabled for region TestNs:TestTable,,1701061183127.af1d13366c8d51157b132094b9c56138. 2023-11-27 04:59:43,576 INFO [RegionOpenAndInit-TestNs:TestTable-pool-0] regionserver.HRegion(1838): Closed TestNs:TestTable,,1701061183127.af1d13366c8d51157b132094b9c56138. 
2023-11-27 04:59:43,576 DEBUG [RegionOpenAndInit-TestNs:TestTable-pool-0] regionserver.HRegion(1558): Region close journal for af1d13366c8d51157b132094b9c56138: 2023-11-27 04:59:43,581 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=22, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=TestNs:TestTable execute state=CREATE_TABLE_ADD_TO_META 2023-11-27 04:59:43,583 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"TestNs:TestTable,1,1701061183127.7197a56a05fdba581e9677273ff1da17.","families":{"info":[{"qualifier":"regioninfo","vlen":43,"tag":[],"timestamp":"1701061183583"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1701061183583"}]},"ts":"1701061183583"} 2023-11-27 04:59:43,583 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"TestNs:TestTable,,1701061183127.af1d13366c8d51157b132094b9c56138.","families":{"info":[{"qualifier":"regioninfo","vlen":43,"tag":[],"timestamp":"1701061183583"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1701061183583"}]},"ts":"1701061183583"} 2023-11-27 04:59:43,590 INFO [PEWorker-5] hbase.MetaTableAccessor(1496): Added 2 regions to meta. 2023-11-27 04:59:43,591 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=22, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=TestNs:TestTable execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-11-27 04:59:43,591 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"TestNs:TestTable","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1701061183591"}]},"ts":"1701061183591"} 2023-11-27 04:59:43,593 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=TestNs:TestTable, state=ENABLING in hbase:meta 2023-11-27 04:59:43,597 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-11-27 04:59:43,599 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-11-27 04:59:43,599 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-11-27 04:59:43,599 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-11-27 04:59:43,599 INFO [PEWorker-5] procedure2.ProcedureExecutor(1680): Initialized subprocedures=[{pid=23, ppid=22, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=TestNs:TestTable, region=af1d13366c8d51157b132094b9c56138, ASSIGN}, {pid=24, ppid=22, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=TestNs:TestTable, region=7197a56a05fdba581e9677273ff1da17, ASSIGN}] 2023-11-27 04:59:43,601 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=24, ppid=22, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=TestNs:TestTable, region=7197a56a05fdba581e9677273ff1da17, ASSIGN 2023-11-27 04:59:43,601 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=23, ppid=22, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=TestNs:TestTable, region=af1d13366c8d51157b132094b9c56138, ASSIGN 2023-11-27 04:59:43,602 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=24, ppid=22, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=TestNs:TestTable, 
region=7197a56a05fdba581e9677273ff1da17, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,41853,1701061176279; forceNewPlan=false, retain=false 2023-11-27 04:59:43,603 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=23, ppid=22, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=TestNs:TestTable, region=af1d13366c8d51157b132094b9c56138, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,41841,1701061176322; forceNewPlan=false, retain=false 2023-11-27 04:59:43,752 INFO [jenkins-hbase4:33323] balancer.BaseLoadBalancer(1545): Reassigned 2 regions. 2 retained the pre-restart assignment. 2023-11-27 04:59:43,754 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=24 updating hbase:meta row=7197a56a05fdba581e9677273ff1da17, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,41853,1701061176279 2023-11-27 04:59:43,754 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=23 updating hbase:meta row=af1d13366c8d51157b132094b9c56138, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,41841,1701061176322 2023-11-27 04:59:43,754 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"TestNs:TestTable,1,1701061183127.7197a56a05fdba581e9677273ff1da17.","families":{"info":[{"qualifier":"regioninfo","vlen":43,"tag":[],"timestamp":"1701061183753"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1701061183753"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1701061183753"}]},"ts":"1701061183753"} 2023-11-27 04:59:43,754 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"TestNs:TestTable,,1701061183127.af1d13366c8d51157b132094b9c56138.","families":{"info":[{"qualifier":"regioninfo","vlen":43,"tag":[],"timestamp":"1701061183753"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1701061183753"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1701061183753"}]},"ts":"1701061183753"} 2023-11-27 04:59:43,757 INFO [PEWorker-1] procedure2.ProcedureExecutor(1680): Initialized subprocedures=[{pid=25, ppid=23, state=RUNNABLE; OpenRegionProcedure af1d13366c8d51157b132094b9c56138, server=jenkins-hbase4.apache.org,41841,1701061176322}] 2023-11-27 04:59:43,758 INFO [PEWorker-4] procedure2.ProcedureExecutor(1680): Initialized subprocedures=[{pid=26, ppid=24, state=RUNNABLE; OpenRegionProcedure 7197a56a05fdba581e9677273ff1da17, server=jenkins-hbase4.apache.org,41853,1701061176279}] 2023-11-27 04:59:43,887 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33323] master.MasterRpcServices(1230): Checking to see if procedure is done pid=22 2023-11-27 04:59:43,909 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(702): New admin connection to jenkins-hbase4.apache.org,41841,1701061176322 2023-11-27 04:59:43,912 DEBUG [RSProcedureDispatcher-pool-1] master.ServerManager(702): New admin connection to jenkins-hbase4.apache.org,41853,1701061176279 2023-11-27 04:59:43,912 DEBUG [RSProcedureDispatcher-pool-1] ipc.RpcConnection(122): Using SIMPLE authentication for service=AdminService, sasl=false 2023-11-27 04:59:43,915 INFO [RS-EventLoopGroup-3-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:36756, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-11-27 04:59:43,915 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open TestNs:TestTable,,1701061183127.af1d13366c8d51157b132094b9c56138. 
2023-11-27 04:59:43,915 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => af1d13366c8d51157b132094b9c56138, NAME => 'TestNs:TestTable,,1701061183127.af1d13366c8d51157b132094b9c56138.', STARTKEY => '', ENDKEY => '1'} 2023-11-27 04:59:43,916 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table TestTable af1d13366c8d51157b132094b9c56138 2023-11-27 04:59:43,916 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated TestNs:TestTable,,1701061183127.af1d13366c8d51157b132094b9c56138.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-11-27 04:59:43,916 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for af1d13366c8d51157b132094b9c56138 2023-11-27 04:59:43,916 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for af1d13366c8d51157b132094b9c56138 2023-11-27 04:59:43,919 INFO [StoreOpener-af1d13366c8d51157b132094b9c56138-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family cf of region af1d13366c8d51157b132094b9c56138 2023-11-27 04:59:43,920 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open TestNs:TestTable,1,1701061183127.7197a56a05fdba581e9677273ff1da17. 2023-11-27 04:59:43,920 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 7197a56a05fdba581e9677273ff1da17, NAME => 'TestNs:TestTable,1,1701061183127.7197a56a05fdba581e9677273ff1da17.', STARTKEY => '1', ENDKEY => ''} 2023-11-27 04:59:43,921 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table TestTable 7197a56a05fdba581e9677273ff1da17 2023-11-27 04:59:43,921 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated TestNs:TestTable,1,1701061183127.7197a56a05fdba581e9677273ff1da17.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-11-27 04:59:43,921 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 7197a56a05fdba581e9677273ff1da17 2023-11-27 04:59:43,921 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 7197a56a05fdba581e9677273ff1da17 2023-11-27 04:59:43,922 DEBUG [StoreOpener-af1d13366c8d51157b132094b9c56138-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41015/user/jenkins/test-data/88c861d8-82fa-b545-1769-68eb426e44d0/data/TestNs/TestTable/af1d13366c8d51157b132094b9c56138/cf 2023-11-27 04:59:43,922 DEBUG [StoreOpener-af1d13366c8d51157b132094b9c56138-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41015/user/jenkins/test-data/88c861d8-82fa-b545-1769-68eb426e44d0/data/TestNs/TestTable/af1d13366c8d51157b132094b9c56138/cf 2023-11-27 04:59:43,922 INFO [StoreOpener-af1d13366c8d51157b132094b9c56138-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, 
offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:10, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region af1d13366c8d51157b132094b9c56138 columnFamilyName cf 2023-11-27 04:59:43,923 INFO [StoreOpener-7197a56a05fdba581e9677273ff1da17-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family cf of region 7197a56a05fdba581e9677273ff1da17 2023-11-27 04:59:43,923 INFO [StoreOpener-af1d13366c8d51157b132094b9c56138-1] regionserver.HStore(310): Store=af1d13366c8d51157b132094b9c56138/cf, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-11-27 04:59:43,925 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41015/user/jenkins/test-data/88c861d8-82fa-b545-1769-68eb426e44d0/data/TestNs/TestTable/af1d13366c8d51157b132094b9c56138 2023-11-27 04:59:43,926 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41015/user/jenkins/test-data/88c861d8-82fa-b545-1769-68eb426e44d0/data/TestNs/TestTable/af1d13366c8d51157b132094b9c56138 2023-11-27 04:59:43,926 DEBUG [StoreOpener-7197a56a05fdba581e9677273ff1da17-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41015/user/jenkins/test-data/88c861d8-82fa-b545-1769-68eb426e44d0/data/TestNs/TestTable/7197a56a05fdba581e9677273ff1da17/cf 2023-11-27 04:59:43,926 DEBUG [StoreOpener-7197a56a05fdba581e9677273ff1da17-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41015/user/jenkins/test-data/88c861d8-82fa-b545-1769-68eb426e44d0/data/TestNs/TestTable/7197a56a05fdba581e9677273ff1da17/cf 2023-11-27 04:59:43,926 INFO [StoreOpener-7197a56a05fdba581e9677273ff1da17-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:10, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 7197a56a05fdba581e9677273ff1da17 columnFamilyName cf 2023-11-27 04:59:43,927 INFO [StoreOpener-7197a56a05fdba581e9677273ff1da17-1] regionserver.HStore(310): Store=7197a56a05fdba581e9677273ff1da17/cf, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-11-27 04:59:43,929 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] 
regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41015/user/jenkins/test-data/88c861d8-82fa-b545-1769-68eb426e44d0/data/TestNs/TestTable/7197a56a05fdba581e9677273ff1da17 2023-11-27 04:59:43,930 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41015/user/jenkins/test-data/88c861d8-82fa-b545-1769-68eb426e44d0/data/TestNs/TestTable/7197a56a05fdba581e9677273ff1da17 2023-11-27 04:59:43,932 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for af1d13366c8d51157b132094b9c56138 2023-11-27 04:59:43,935 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 7197a56a05fdba581e9677273ff1da17 2023-11-27 04:59:43,936 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41015/user/jenkins/test-data/88c861d8-82fa-b545-1769-68eb426e44d0/data/TestNs/TestTable/af1d13366c8d51157b132094b9c56138/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-11-27 04:59:43,936 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened af1d13366c8d51157b132094b9c56138; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=64119717, jitterRate=-0.04454176127910614}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-11-27 04:59:43,936 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for af1d13366c8d51157b132094b9c56138: 2023-11-27 04:59:43,938 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2339): Post open deploy tasks for TestNs:TestTable,,1701061183127.af1d13366c8d51157b132094b9c56138., pid=25, masterSystemTime=1701061183909 2023-11-27 04:59:43,939 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41015/user/jenkins/test-data/88c861d8-82fa-b545-1769-68eb426e44d0/data/TestNs/TestTable/7197a56a05fdba581e9677273ff1da17/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-11-27 04:59:43,940 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 7197a56a05fdba581e9677273ff1da17; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=71762192, jitterRate=0.06933999061584473}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-11-27 04:59:43,940 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 7197a56a05fdba581e9677273ff1da17: 2023-11-27 04:59:43,941 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2366): Finished post open deploy task for TestNs:TestTable,,1701061183127.af1d13366c8d51157b132094b9c56138. 2023-11-27 04:59:43,941 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened TestNs:TestTable,,1701061183127.af1d13366c8d51157b132094b9c56138. 
2023-11-27 04:59:43,941 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2339): Post open deploy tasks for TestNs:TestTable,1,1701061183127.7197a56a05fdba581e9677273ff1da17., pid=26, masterSystemTime=1701061183912 2023-11-27 04:59:43,942 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=23 updating hbase:meta row=af1d13366c8d51157b132094b9c56138, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,41841,1701061176322 2023-11-27 04:59:43,942 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"TestNs:TestTable,,1701061183127.af1d13366c8d51157b132094b9c56138.","families":{"info":[{"qualifier":"regioninfo","vlen":43,"tag":[],"timestamp":"1701061183942"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1701061183942"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1701061183942"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1701061183942"}]},"ts":"1701061183942"} 2023-11-27 04:59:43,944 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2366): Finished post open deploy task for TestNs:TestTable,1,1701061183127.7197a56a05fdba581e9677273ff1da17. 2023-11-27 04:59:43,944 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened TestNs:TestTable,1,1701061183127.7197a56a05fdba581e9677273ff1da17. 2023-11-27 04:59:43,945 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=24 updating hbase:meta row=7197a56a05fdba581e9677273ff1da17, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,41853,1701061176279 2023-11-27 04:59:43,945 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"TestNs:TestTable,1,1701061183127.7197a56a05fdba581e9677273ff1da17.","families":{"info":[{"qualifier":"regioninfo","vlen":43,"tag":[],"timestamp":"1701061183945"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1701061183945"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1701061183945"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1701061183945"}]},"ts":"1701061183945"} 2023-11-27 04:59:43,948 INFO [PEWorker-3] procedure2.ProcedureExecutor(1825): Finished subprocedure pid=25, resume processing ppid=23 2023-11-27 04:59:43,948 INFO [PEWorker-3] procedure2.ProcedureExecutor(1409): Finished pid=25, ppid=23, state=SUCCESS; OpenRegionProcedure af1d13366c8d51157b132094b9c56138, server=jenkins-hbase4.apache.org,41841,1701061176322 in 188 msec 2023-11-27 04:59:43,950 INFO [PEWorker-4] procedure2.ProcedureExecutor(1409): Finished pid=23, ppid=22, state=SUCCESS; TransitRegionStateProcedure table=TestNs:TestTable, region=af1d13366c8d51157b132094b9c56138, ASSIGN in 349 msec 2023-11-27 04:59:43,950 INFO [PEWorker-1] procedure2.ProcedureExecutor(1825): Finished subprocedure pid=26, resume processing ppid=24 2023-11-27 04:59:43,951 INFO [PEWorker-1] procedure2.ProcedureExecutor(1409): Finished pid=26, ppid=24, state=SUCCESS; OpenRegionProcedure 7197a56a05fdba581e9677273ff1da17, server=jenkins-hbase4.apache.org,41853,1701061176279 in 189 msec 2023-11-27 04:59:43,952 INFO [PEWorker-5] procedure2.ProcedureExecutor(1825): Finished subprocedure pid=24, resume processing ppid=22 2023-11-27 04:59:43,953 INFO [PEWorker-5] procedure2.ProcedureExecutor(1409): Finished pid=24, ppid=22, state=SUCCESS; TransitRegionStateProcedure table=TestNs:TestTable, region=7197a56a05fdba581e9677273ff1da17, ASSIGN in 351 msec 2023-11-27 04:59:43,953 INFO [PEWorker-2] 
procedure.CreateTableProcedure(80): pid=22, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=TestNs:TestTable execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-11-27 04:59:43,954 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"TestNs:TestTable","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1701061183954"}]},"ts":"1701061183954"} 2023-11-27 04:59:43,955 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=TestNs:TestTable, state=ENABLED in hbase:meta 2023-11-27 04:59:43,959 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=22, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=TestNs:TestTable execute state=CREATE_TABLE_POST_OPERATION 2023-11-27 04:59:43,961 INFO [PEWorker-2] procedure2.ProcedureExecutor(1409): Finished pid=22, state=SUCCESS; CreateTableProcedure table=TestNs:TestTable in 831 msec 2023-11-27 04:59:44,638 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33323] master.MasterRpcServices(1230): Checking to see if procedure is done pid=22 2023-11-27 04:59:44,639 INFO [Listener at localhost/34689] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: TestNs:TestTable, procId: 22 completed 2023-11-27 04:59:44,639 DEBUG [Listener at localhost/34689] hbase.HBaseTestingUtility(3430): Waiting until all regions of table TestNs:TestTable get assigned. Timeout = 60000ms 2023-11-27 04:59:44,639 INFO [Listener at localhost/34689] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-11-27 04:59:44,644 INFO [Listener at localhost/34689] hbase.HBaseTestingUtility(3484): All regions for table TestNs:TestTable assigned to meta. Checking AM states. 2023-11-27 04:59:44,644 INFO [Listener at localhost/34689] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-11-27 04:59:44,645 INFO [Listener at localhost/34689] hbase.HBaseTestingUtility(3504): All regions for table TestNs:TestTable assigned. 
2023-11-27 04:59:44,645 INFO [Listener at localhost/34689] hbase.Waiter(180): Waiting up to [30,000] milli-secs(wait.for.ratio=[1]) 2023-11-27 04:59:44,655 DEBUG [hconnection-0x67fa6ab4-shared-pool-2] ipc.RpcConnection(122): Using SIMPLE authentication for service=ClientService, sasl=false 2023-11-27 04:59:44,659 INFO [RS-EventLoopGroup-3-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:36762, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-11-27 04:59:44,661 DEBUG [regionserver/jenkins-hbase4:0.Chore.1] ipc.RpcConnection(122): Using SIMPLE authentication for service=MasterService, sasl=false 2023-11-27 04:59:44,665 INFO [RS-EventLoopGroup-1-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:43828, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.0 (auth:SIMPLE), service=MasterService 2023-11-27 04:59:44,672 DEBUG [hconnection-0x63a58a6e-metaLookup-shared--pool-0] ipc.RpcConnection(122): Using SIMPLE authentication for service=ClientService, sasl=false 2023-11-27 04:59:44,682 INFO [RS-EventLoopGroup-4-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:53770, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.0 (auth:SIMPLE), service=ClientService 2023-11-27 04:59:44,692 INFO [Listener at localhost/34689] hbase.ResourceChecker(147): before: quotas.TestClusterScopeQuotaThrottle#testUserTableClusterScopeQuota Thread=302, OpenFileDescriptor=630, MaxFileDescriptor=60000, SystemLoadAverage=160, ProcessCount=168, AvailableMemoryMB=7992 2023-11-27 04:59:44,978 DEBUG [Listener at localhost/34689] quotas.ThrottleQuotaTestUtil(184): QuotaCache 2023-11-27 04:59:44,979 DEBUG [Listener at localhost/34689] quotas.ThrottleQuotaTestUtil(185): {TestNs=QuotaState(ts=1701064784728 bypass)} 2023-11-27 04:59:44,979 DEBUG [Listener at localhost/34689] quotas.ThrottleQuotaTestUtil(186): {TestNs:TestTable=QuotaState(ts=1701064784728 bypass)} 2023-11-27 04:59:44,979 DEBUG [Listener at localhost/34689] quotas.ThrottleQuotaTestUtil(187): {jenkins=UserQuotaState(ts=1701064784728 [ TestNs:TestTable ])} 2023-11-27 04:59:44,979 DEBUG [Listener at localhost/34689] quotas.ThrottleQuotaTestUtil(188): {all=QuotaState(ts=1701064784728 bypass)} 2023-11-27 04:59:45,229 DEBUG [Listener at localhost/34689] quotas.ThrottleQuotaTestUtil(184): QuotaCache 2023-11-27 04:59:45,229 DEBUG [Listener at localhost/34689] quotas.ThrottleQuotaTestUtil(185): {default=QuotaState(ts=1701064784728 bypass), TestNs=QuotaState(ts=1701064784728 bypass)} 2023-11-27 04:59:45,229 DEBUG [Listener at localhost/34689] quotas.ThrottleQuotaTestUtil(186): {TestQuotaAdmin0=QuotaState(ts=1701064784728 bypass), TestNs:TestTable=QuotaState(ts=1701064784728 bypass), TestQuotaAdmin2=QuotaState(ts=1701064784728 bypass), TestQuotaAdmin1=QuotaState(ts=1701064784728 bypass)} 2023-11-27 04:59:45,230 DEBUG [Listener at localhost/34689] quotas.ThrottleQuotaTestUtil(187): {jenkins=UserQuotaState(ts=1701064784728 [ TestNs:TestTable ])} 2023-11-27 04:59:45,230 DEBUG [Listener at localhost/34689] quotas.ThrottleQuotaTestUtil(188): {all=QuotaState(ts=1701064784728 bypass)} 2023-11-27 04:59:45,243 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41853] quotas.RegionServerRpcQuotaManager(222): Throttling exception for user=jenkins table=TestNs:TestTable numWrites=0 numReads=1 numScans=0: number of read requests exceeded - wait 6mins, 0ms 2023-11-27 04:59:45,244 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41853] ipc.CallRunner(144): callId: 98 service: ClientService 
methodName: Get size: 116 connection: 172.31.14.131:36762 deadline: 1701064794728, exception=org.apache.hadoop.hbase.quotas.RpcThrottlingException: number of read requests exceeded - wait 6mins, 0ms 2023-11-27 04:59:45,286 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(130): Registering adapter for the MetricRegistry: Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.quotas.MasterQuotasObserver 2023-11-27 04:59:45,287 INFO [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(134): Registering Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.quotas.MasterQuotasObserver Metrics about HBase MasterObservers 2023-11-27 04:59:45,287 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(130): Registering adapter for the MetricRegistry: RegionServer,sub=Coprocessor.Region.CP_org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-11-27 04:59:45,287 INFO [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(134): Registering RegionServer,sub=Coprocessor.Region.CP_org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint Metrics about HBase RegionObservers 2023-11-27 04:59:45,514 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41853] quotas.RegionServerRpcQuotaManager(222): Throttling exception for user=jenkins table=TestNs:TestTable numWrites=0 numReads=1 numScans=0: number of read requests exceeded - wait 6mins, 0ms 2023-11-27 04:59:45,514 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41853] ipc.CallRunner(144): callId: 100 service: ClientService methodName: Get size: 116 connection: 172.31.14.131:36762 deadline: 1701064794728, exception=org.apache.hadoop.hbase.quotas.RpcThrottlingException: number of read requests exceeded - wait 6mins, 0ms 2023-11-27 04:59:45,515 INFO [regionserver/jenkins-hbase4:0.Chore.1] regionserver.HRegionServer$PeriodicMemStoreFlusher(1921): MemstoreFlusherChore requesting flush of hbase:namespace,,1701061178129.708094d2c6013f8353947ca009f33ef1. because 708094d2c6013f8353947ca009f33ef1/info has an old edit so flush to free WALs after random delay 177107 ms 2023-11-27 04:59:45,515 INFO [regionserver/jenkins-hbase4:0.Chore.1] regionserver.HRegionServer$PeriodicMemStoreFlusher(1921): MemstoreFlusherChore requesting flush of hbase:quota,,1701061178843.be5ef4f3dfb2c43b447798061e19f02f. 
because be5ef4f3dfb2c43b447798061e19f02f/q has an old edit so flush to free WALs after random delay 256380 ms 2023-11-27 04:59:45,515 INFO [regionserver/jenkins-hbase4:0.Chore.1] regionserver.HRegionServer$PeriodicMemStoreFlusher(1921): MemstoreFlusherChore requesting flush of hbase:meta,,1.1588230740 because 1588230740/info has an old edit so flush to free WALs after random delay 40414 ms 2023-11-27 04:59:46,022 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41853] quotas.RegionServerRpcQuotaManager(222): Throttling exception for user=jenkins table=TestNs:TestTable numWrites=0 numReads=1 numScans=0: number of read requests exceeded - wait 6mins, 0ms 2023-11-27 04:59:46,022 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41853] ipc.CallRunner(144): callId: 102 service: ClientService methodName: Get size: 116 connection: 172.31.14.131:36762 deadline: 1701064794728, exception=org.apache.hadoop.hbase.quotas.RpcThrottlingException: number of read requests exceeded - wait 6mins, 0ms 2023-11-27 04:59:46,782 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41853] quotas.RegionServerRpcQuotaManager(222): Throttling exception for user=jenkins table=TestNs:TestTable numWrites=0 numReads=1 numScans=0: number of read requests exceeded - wait 6mins, 0ms 2023-11-27 04:59:46,782 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41853] ipc.CallRunner(144): callId: 104 service: ClientService methodName: Get size: 116 connection: 172.31.14.131:36762 deadline: 1701064794728, exception=org.apache.hadoop.hbase.quotas.RpcThrottlingException: number of read requests exceeded - wait 6mins, 0ms 2023-11-27 04:59:48,043 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41853] quotas.RegionServerRpcQuotaManager(222): Throttling exception for user=jenkins table=TestNs:TestTable numWrites=0 numReads=1 numScans=0: number of read requests exceeded - wait 6mins, 0ms 2023-11-27 04:59:48,044 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41853] ipc.CallRunner(144): callId: 106 service: ClientService methodName: Get size: 116 connection: 172.31.14.131:36762 deadline: 1701064794728, exception=org.apache.hadoop.hbase.quotas.RpcThrottlingException: number of read requests exceeded - wait 6mins, 0ms 2023-11-27 04:59:49,418 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-hbase.properties,hadoop-metrics2.properties 2023-11-27 04:59:49,544 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'TestNs:TestTable' 2023-11-27 04:59:50,567 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41853] quotas.RegionServerRpcQuotaManager(222): Throttling exception for user=jenkins table=TestNs:TestTable numWrites=0 numReads=1 numScans=0: number of read requests exceeded - wait 6mins, 0ms 2023-11-27 04:59:50,567 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41853] ipc.CallRunner(144): callId: 108 service: ClientService methodName: Get size: 116 connection: 172.31.14.131:36762 deadline: 1701064794728, exception=org.apache.hadoop.hbase.quotas.RpcThrottlingException: number of read requests exceeded - wait 6mins, 0ms 2023-11-27 04:59:55,573 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41853] quotas.RegionServerRpcQuotaManager(222): Throttling exception for user=jenkins table=TestNs:TestTable numWrites=0 numReads=1 numScans=0: number of read requests exceeded - wait 6mins, 0ms 2023-11-27 04:59:55,574 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41853] ipc.CallRunner(144): callId: 110 service: ClientService methodName: Get size: 116 connection: 172.31.14.131:36762 deadline: 1701064794728, exception=org.apache.hadoop.hbase.quotas.RpcThrottlingException: number of read requests exceeded - wait 6mins, 0ms 2023-11-27 04:59:55,575 DEBUG [Listener at localhost/34689] client.RpcRetryingCallerImpl(129): Call exception, tries=6, retries=16, started=0 ms ago, cancelled=false, msg=org.apache.hadoop.hbase.quotas.RpcThrottlingException: number of read requests exceeded - wait 6mins, 0ms at org.apache.hadoop.hbase.quotas.RpcThrottlingException.throwThrottlingException(RpcThrottlingException.java:133) at org.apache.hadoop.hbase.quotas.RpcThrottlingException.throwNumReadRequestsExceeded(RpcThrottlingException.java:99) at org.apache.hadoop.hbase.quotas.TimeBasedLimiter.checkQuota(TimeBasedLimiter.java:172) at org.apache.hadoop.hbase.quotas.DefaultOperationQuota.checkQuota(DefaultOperationQuota.java:82) at org.apache.hadoop.hbase.quotas.RegionServerRpcQuotaManager.checkQuota(RegionServerRpcQuotaManager.java:220) at org.apache.hadoop.hbase.quotas.RegionServerRpcQuotaManager.checkQuota(RegionServerRpcQuotaManager.java:168) at org.apache.hadoop.hbase.regionserver.RSRpcServices.get(RSRpcServices.java:2532) at org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:44992) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) , details=row 'row-10' on table 'TestNs:TestTable' at region=TestNs:TestTable,1,1701061183127.7197a56a05fdba581e9677273ff1da17., hostname=jenkins-hbase4.apache.org,41853,1701061176279, seqNum=2, see https://s.apache.org/timeout, exception=org.apache.hadoop.hbase.quotas.RpcThrottlingException: org.apache.hadoop.hbase.quotas.RpcThrottlingException: number of read requests exceeded - wait 6mins, 0ms at org.apache.hadoop.hbase.quotas.RpcThrottlingException.throwThrottlingException(RpcThrottlingException.java:133) at org.apache.hadoop.hbase.quotas.RpcThrottlingException.throwNumReadRequestsExceeded(RpcThrottlingException.java:99) at org.apache.hadoop.hbase.quotas.TimeBasedLimiter.checkQuota(TimeBasedLimiter.java:172) at org.apache.hadoop.hbase.quotas.DefaultOperationQuota.checkQuota(DefaultOperationQuota.java:82) at org.apache.hadoop.hbase.quotas.RegionServerRpcQuotaManager.checkQuota(RegionServerRpcQuotaManager.java:220) at org.apache.hadoop.hbase.quotas.RegionServerRpcQuotaManager.checkQuota(RegionServerRpcQuotaManager.java:168) at org.apache.hadoop.hbase.regionserver.RSRpcServices.get(RSRpcServices.java:2532) at org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:44992) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at 
java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:276) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:261) at org.apache.hadoop.hbase.client.RegionServerCallable.call(RegionServerCallable.java:126) at org.apache.hadoop.hbase.client.RpcRetryingCallerImpl.callWithRetries(RpcRetryingCallerImpl.java:104) at org.apache.hadoop.hbase.client.HTable.get(HTable.java:370) at org.apache.hadoop.hbase.client.HTable.get(HTable.java:343) at org.apache.hadoop.hbase.quotas.ThrottleQuotaTestUtil.doGets(ThrottleQuotaTestUtil.java:95) at org.apache.hadoop.hbase.quotas.TestClusterScopeQuotaThrottle.testUserTableClusterScopeQuota(TestClusterScopeQuotaThrottle.java:224) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.quotas.RpcThrottlingException): org.apache.hadoop.hbase.quotas.RpcThrottlingException: number of read requests exceeded - wait 6mins, 0ms at org.apache.hadoop.hbase.quotas.RpcThrottlingException.throwThrottlingException(RpcThrottlingException.java:133) at org.apache.hadoop.hbase.quotas.RpcThrottlingException.throwNumReadRequestsExceeded(RpcThrottlingException.java:99) at org.apache.hadoop.hbase.quotas.TimeBasedLimiter.checkQuota(TimeBasedLimiter.java:172) at 
org.apache.hadoop.hbase.quotas.DefaultOperationQuota.checkQuota(DefaultOperationQuota.java:82) at org.apache.hadoop.hbase.quotas.RegionServerRpcQuotaManager.checkQuota(RegionServerRpcQuotaManager.java:220) at org.apache.hadoop.hbase.quotas.RegionServerRpcQuotaManager.checkQuota(RegionServerRpcQuotaManager.java:168) at org.apache.hadoop.hbase.regionserver.RSRpcServices.get(RSRpcServices.java:2532) at org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:44992) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:381) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:87) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:415) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:411) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:193) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:214) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-11-27 04:59:55,576 ERROR [Listener at localhost/34689] quotas.ThrottleQuotaTestUtil(100): get failed after nRetries=10 java.net.SocketTimeoutException: callTimeout=10000, callDuration=10024: org.apache.hadoop.hbase.quotas.RpcThrottlingException: number of read requests exceeded - wait 6mins, 0ms at org.apache.hadoop.hbase.quotas.RpcThrottlingException.throwThrottlingException(RpcThrottlingException.java:133) at org.apache.hadoop.hbase.quotas.RpcThrottlingException.throwNumReadRequestsExceeded(RpcThrottlingException.java:99) at org.apache.hadoop.hbase.quotas.TimeBasedLimiter.checkQuota(TimeBasedLimiter.java:172) at org.apache.hadoop.hbase.quotas.DefaultOperationQuota.checkQuota(DefaultOperationQuota.java:82) at org.apache.hadoop.hbase.quotas.RegionServerRpcQuotaManager.checkQuota(RegionServerRpcQuotaManager.java:220) at org.apache.hadoop.hbase.quotas.RegionServerRpcQuotaManager.checkQuota(RegionServerRpcQuotaManager.java:168) at org.apache.hadoop.hbase.regionserver.RSRpcServices.get(RSRpcServices.java:2532) at org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:44992) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) row 'row-10' on table 'TestNs:TestTable' at region=TestNs:TestTable,1,1701061183127.7197a56a05fdba581e9677273ff1da17., hostname=jenkins-hbase4.apache.org,41853,1701061176279, seqNum=2 at org.apache.hadoop.hbase.client.RpcRetryingCallerImpl.callWithRetries(RpcRetryingCallerImpl.java:156) at org.apache.hadoop.hbase.client.HTable.get(HTable.java:370) at org.apache.hadoop.hbase.client.HTable.get(HTable.java:343) at org.apache.hadoop.hbase.quotas.ThrottleQuotaTestUtil.doGets(ThrottleQuotaTestUtil.java:95) at org.apache.hadoop.hbase.quotas.TestClusterScopeQuotaThrottle.testUserTableClusterScopeQuota(TestClusterScopeQuotaThrottle.java:224) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.quotas.RpcThrottlingException: org.apache.hadoop.hbase.quotas.RpcThrottlingException: number of read requests exceeded - wait 6mins, 0ms at org.apache.hadoop.hbase.quotas.RpcThrottlingException.throwThrottlingException(RpcThrottlingException.java:133) at org.apache.hadoop.hbase.quotas.RpcThrottlingException.throwNumReadRequestsExceeded(RpcThrottlingException.java:99) at org.apache.hadoop.hbase.quotas.TimeBasedLimiter.checkQuota(TimeBasedLimiter.java:172) at org.apache.hadoop.hbase.quotas.DefaultOperationQuota.checkQuota(DefaultOperationQuota.java:82) at org.apache.hadoop.hbase.quotas.RegionServerRpcQuotaManager.checkQuota(RegionServerRpcQuotaManager.java:220) at org.apache.hadoop.hbase.quotas.RegionServerRpcQuotaManager.checkQuota(RegionServerRpcQuotaManager.java:168) at org.apache.hadoop.hbase.regionserver.RSRpcServices.get(RSRpcServices.java:2532) at org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:44992) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:276) at 
org.apache.hadoop.hbase.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:261) at org.apache.hadoop.hbase.client.RegionServerCallable.call(RegionServerCallable.java:126) at org.apache.hadoop.hbase.client.RpcRetryingCallerImpl.callWithRetries(RpcRetryingCallerImpl.java:104) ... 29 more Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.quotas.RpcThrottlingException): org.apache.hadoop.hbase.quotas.RpcThrottlingException: number of read requests exceeded - wait 6mins, 0ms at org.apache.hadoop.hbase.quotas.RpcThrottlingException.throwThrottlingException(RpcThrottlingException.java:133) at org.apache.hadoop.hbase.quotas.RpcThrottlingException.throwNumReadRequestsExceeded(RpcThrottlingException.java:99) at org.apache.hadoop.hbase.quotas.TimeBasedLimiter.checkQuota(TimeBasedLimiter.java:172) at org.apache.hadoop.hbase.quotas.DefaultOperationQuota.checkQuota(DefaultOperationQuota.java:82) at org.apache.hadoop.hbase.quotas.RegionServerRpcQuotaManager.checkQuota(RegionServerRpcQuotaManager.java:220) at org.apache.hadoop.hbase.quotas.RegionServerRpcQuotaManager.checkQuota(RegionServerRpcQuotaManager.java:168) at org.apache.hadoop.hbase.regionserver.RSRpcServices.get(RSRpcServices.java:2532) at org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:44992) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:381) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:87) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:415) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:411) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:193) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:214) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-11-27 04:59:55,579 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41853] quotas.RegionServerRpcQuotaManager(222): Throttling exception for user=jenkins table=TestNs:TestTable numWrites=0 numReads=1 numScans=0: number of read requests exceeded - wait 6mins, 0ms 2023-11-27 04:59:55,579 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41853] ipc.CallRunner(144): callId: 111 service: ClientService methodName: Get size: 115 connection: 172.31.14.131:36762 deadline: 1701064794728, exception=org.apache.hadoop.hbase.quotas.RpcThrottlingException: number of read requests exceeded - wait 6mins, 0ms 2023-11-27 04:59:55,833 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41853] quotas.RegionServerRpcQuotaManager(222): Throttling exception for user=jenkins table=TestNs:TestTable numWrites=0 numReads=1 numScans=0: number of read requests exceeded - wait 6mins, 0ms 2023-11-27 04:59:55,833 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41853] ipc.CallRunner(144): callId: 113 service: ClientService methodName: Get size: 115 connection: 172.31.14.131:36762 deadline: 1701064794728, exception=org.apache.hadoop.hbase.quotas.RpcThrottlingException: number of read requests exceeded - wait 6mins, 0ms 2023-11-27 04:59:56,336 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41853] quotas.RegionServerRpcQuotaManager(222): Throttling exception for user=jenkins table=TestNs:TestTable numWrites=0 numReads=1 numScans=0: number of read requests exceeded - wait 6mins, 0ms 2023-11-27 04:59:56,336 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41853] ipc.CallRunner(144): callId: 115 service: ClientService methodName: Get size: 115 connection: 172.31.14.131:36762 deadline: 1701064794728, exception=org.apache.hadoop.hbase.quotas.RpcThrottlingException: number of read requests exceeded - wait 6mins, 0ms 2023-11-27 04:59:57,090 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41853] quotas.RegionServerRpcQuotaManager(222): Throttling exception for user=jenkins table=TestNs:TestTable numWrites=0 numReads=1 numScans=0: number of read requests exceeded - wait 6mins, 0ms 2023-11-27 04:59:57,090 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41853] ipc.CallRunner(144): callId: 117 service: ClientService methodName: Get size: 115 connection: 172.31.14.131:36762 deadline: 1701064794728, exception=org.apache.hadoop.hbase.quotas.RpcThrottlingException: number of read requests exceeded - wait 6mins, 0ms 2023-11-27 04:59:58,350 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41853] quotas.RegionServerRpcQuotaManager(222): Throttling exception for user=jenkins table=TestNs:TestTable numWrites=0 numReads=1 numScans=0: number of read requests exceeded - wait 6mins, 0ms 2023-11-27 04:59:58,350 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41853] ipc.CallRunner(144): callId: 119 service: ClientService methodName: Get size: 115 connection: 172.31.14.131:36762 deadline: 1701064794728, exception=org.apache.hadoop.hbase.quotas.RpcThrottlingException: number of read requests exceeded - wait 6mins, 0ms 2023-11-27 05:00:00,863 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41853] quotas.RegionServerRpcQuotaManager(222): Throttling exception for user=jenkins table=TestNs:TestTable numWrites=0 numReads=1 numScans=0: number of read requests exceeded - wait 6mins, 0ms 2023-11-27 05:00:00,863 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41853] ipc.CallRunner(144): callId: 121 service: ClientService methodName: Get 
size: 115 connection: 172.31.14.131:36762 deadline: 1701064794728, exception=org.apache.hadoop.hbase.quotas.RpcThrottlingException: number of read requests exceeded - wait 6mins, 0ms 2023-11-27 05:00:05,869 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41853] quotas.RegionServerRpcQuotaManager(222): Throttling exception for user=jenkins table=TestNs:TestTable numWrites=0 numReads=1 numScans=0: number of read requests exceeded - wait 6mins, 0ms 2023-11-27 05:00:05,869 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41853] ipc.CallRunner(144): callId: 123 service: ClientService methodName: Get size: 115 connection: 172.31.14.131:36762 deadline: 1701064794728, exception=org.apache.hadoop.hbase.quotas.RpcThrottlingException: number of read requests exceeded - wait 6mins, 0ms 2023-11-27 05:00:05,871 DEBUG [Listener at localhost/34689] client.RpcRetryingCallerImpl(129): Call exception, tries=6, retries=16, started=0 ms ago, cancelled=false, msg=org.apache.hadoop.hbase.quotas.RpcThrottlingException: number of read requests exceeded - wait 6mins, 0ms at org.apache.hadoop.hbase.quotas.RpcThrottlingException.throwThrottlingException(RpcThrottlingException.java:133) at org.apache.hadoop.hbase.quotas.RpcThrottlingException.throwNumReadRequestsExceeded(RpcThrottlingException.java:99) at org.apache.hadoop.hbase.quotas.TimeBasedLimiter.checkQuota(TimeBasedLimiter.java:172) at org.apache.hadoop.hbase.quotas.DefaultOperationQuota.checkQuota(DefaultOperationQuota.java:82) at org.apache.hadoop.hbase.quotas.RegionServerRpcQuotaManager.checkQuota(RegionServerRpcQuotaManager.java:220) at org.apache.hadoop.hbase.quotas.RegionServerRpcQuotaManager.checkQuota(RegionServerRpcQuotaManager.java:168) at org.apache.hadoop.hbase.regionserver.RSRpcServices.get(RSRpcServices.java:2532) at org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:44992) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) , details=row 'row-0' on table 'TestNs:TestTable' at region=TestNs:TestTable,1,1701061183127.7197a56a05fdba581e9677273ff1da17., hostname=jenkins-hbase4.apache.org,41853,1701061176279, seqNum=2, see https://s.apache.org/timeout, exception=org.apache.hadoop.hbase.quotas.RpcThrottlingException: org.apache.hadoop.hbase.quotas.RpcThrottlingException: number of read requests exceeded - wait 6mins, 0ms at org.apache.hadoop.hbase.quotas.RpcThrottlingException.throwThrottlingException(RpcThrottlingException.java:133) at org.apache.hadoop.hbase.quotas.RpcThrottlingException.throwNumReadRequestsExceeded(RpcThrottlingException.java:99) at org.apache.hadoop.hbase.quotas.TimeBasedLimiter.checkQuota(TimeBasedLimiter.java:172) at org.apache.hadoop.hbase.quotas.DefaultOperationQuota.checkQuota(DefaultOperationQuota.java:82) at org.apache.hadoop.hbase.quotas.RegionServerRpcQuotaManager.checkQuota(RegionServerRpcQuotaManager.java:220) at org.apache.hadoop.hbase.quotas.RegionServerRpcQuotaManager.checkQuota(RegionServerRpcQuotaManager.java:168) at org.apache.hadoop.hbase.regionserver.RSRpcServices.get(RSRpcServices.java:2532) at org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:44992) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at 
org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:276) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:261) at org.apache.hadoop.hbase.client.RegionServerCallable.call(RegionServerCallable.java:126) at org.apache.hadoop.hbase.client.RpcRetryingCallerImpl.callWithRetries(RpcRetryingCallerImpl.java:104) at org.apache.hadoop.hbase.client.HTable.get(HTable.java:370) at org.apache.hadoop.hbase.client.HTable.get(HTable.java:343) at org.apache.hadoop.hbase.quotas.ThrottleQuotaTestUtil.doGets(ThrottleQuotaTestUtil.java:95) at org.apache.hadoop.hbase.quotas.TestClusterScopeQuotaThrottle.testUserTableClusterScopeQuota(TestClusterScopeQuotaThrottle.java:224) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.quotas.RpcThrottlingException): 
org.apache.hadoop.hbase.quotas.RpcThrottlingException: number of read requests exceeded - wait 6mins, 0ms at org.apache.hadoop.hbase.quotas.RpcThrottlingException.throwThrottlingException(RpcThrottlingException.java:133) at org.apache.hadoop.hbase.quotas.RpcThrottlingException.throwNumReadRequestsExceeded(RpcThrottlingException.java:99) at org.apache.hadoop.hbase.quotas.TimeBasedLimiter.checkQuota(TimeBasedLimiter.java:172) at org.apache.hadoop.hbase.quotas.DefaultOperationQuota.checkQuota(DefaultOperationQuota.java:82) at org.apache.hadoop.hbase.quotas.RegionServerRpcQuotaManager.checkQuota(RegionServerRpcQuotaManager.java:220) at org.apache.hadoop.hbase.quotas.RegionServerRpcQuotaManager.checkQuota(RegionServerRpcQuotaManager.java:168) at org.apache.hadoop.hbase.regionserver.RSRpcServices.get(RSRpcServices.java:2532) at org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:44992) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:381) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:87) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:415) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:411) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:193) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:214) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-11-27 05:00:05,871 ERROR [Listener at localhost/34689] quotas.ThrottleQuotaTestUtil(100): get failed after nRetries=0 java.net.SocketTimeoutException: callTimeout=10000, callDuration=10037: org.apache.hadoop.hbase.quotas.RpcThrottlingException: number of read requests exceeded - wait 6mins, 0ms at org.apache.hadoop.hbase.quotas.RpcThrottlingException.throwThrottlingException(RpcThrottlingException.java:133) at org.apache.hadoop.hbase.quotas.RpcThrottlingException.throwNumReadRequestsExceeded(RpcThrottlingException.java:99) at org.apache.hadoop.hbase.quotas.TimeBasedLimiter.checkQuota(TimeBasedLimiter.java:172) at org.apache.hadoop.hbase.quotas.DefaultOperationQuota.checkQuota(DefaultOperationQuota.java:82) at org.apache.hadoop.hbase.quotas.RegionServerRpcQuotaManager.checkQuota(RegionServerRpcQuotaManager.java:220) at org.apache.hadoop.hbase.quotas.RegionServerRpcQuotaManager.checkQuota(RegionServerRpcQuotaManager.java:168) at org.apache.hadoop.hbase.regionserver.RSRpcServices.get(RSRpcServices.java:2532) at org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:44992) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) row 'row-0' on table 'TestNs:TestTable' at region=TestNs:TestTable,1,1701061183127.7197a56a05fdba581e9677273ff1da17., hostname=jenkins-hbase4.apache.org,41853,1701061176279, seqNum=2 at org.apache.hadoop.hbase.client.RpcRetryingCallerImpl.callWithRetries(RpcRetryingCallerImpl.java:156) at org.apache.hadoop.hbase.client.HTable.get(HTable.java:370) at org.apache.hadoop.hbase.client.HTable.get(HTable.java:343) at org.apache.hadoop.hbase.quotas.ThrottleQuotaTestUtil.doGets(ThrottleQuotaTestUtil.java:95) at org.apache.hadoop.hbase.quotas.TestClusterScopeQuotaThrottle.testUserTableClusterScopeQuota(TestClusterScopeQuotaThrottle.java:224) at 
sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.quotas.RpcThrottlingException: org.apache.hadoop.hbase.quotas.RpcThrottlingException: number of read requests exceeded - wait 6mins, 0ms at org.apache.hadoop.hbase.quotas.RpcThrottlingException.throwThrottlingException(RpcThrottlingException.java:133) at org.apache.hadoop.hbase.quotas.RpcThrottlingException.throwNumReadRequestsExceeded(RpcThrottlingException.java:99) at org.apache.hadoop.hbase.quotas.TimeBasedLimiter.checkQuota(TimeBasedLimiter.java:172) at org.apache.hadoop.hbase.quotas.DefaultOperationQuota.checkQuota(DefaultOperationQuota.java:82) at org.apache.hadoop.hbase.quotas.RegionServerRpcQuotaManager.checkQuota(RegionServerRpcQuotaManager.java:220) at org.apache.hadoop.hbase.quotas.RegionServerRpcQuotaManager.checkQuota(RegionServerRpcQuotaManager.java:168) at org.apache.hadoop.hbase.regionserver.RSRpcServices.get(RSRpcServices.java:2532) at org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:44992) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at 
org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:276) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:261) at org.apache.hadoop.hbase.client.RegionServerCallable.call(RegionServerCallable.java:126) at org.apache.hadoop.hbase.client.RpcRetryingCallerImpl.callWithRetries(RpcRetryingCallerImpl.java:104) ... 29 more Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.quotas.RpcThrottlingException): org.apache.hadoop.hbase.quotas.RpcThrottlingException: number of read requests exceeded - wait 6mins, 0ms at org.apache.hadoop.hbase.quotas.RpcThrottlingException.throwThrottlingException(RpcThrottlingException.java:133) at org.apache.hadoop.hbase.quotas.RpcThrottlingException.throwNumReadRequestsExceeded(RpcThrottlingException.java:99) at org.apache.hadoop.hbase.quotas.TimeBasedLimiter.checkQuota(TimeBasedLimiter.java:172) at org.apache.hadoop.hbase.quotas.DefaultOperationQuota.checkQuota(DefaultOperationQuota.java:82) at org.apache.hadoop.hbase.quotas.RegionServerRpcQuotaManager.checkQuota(RegionServerRpcQuotaManager.java:220) at org.apache.hadoop.hbase.quotas.RegionServerRpcQuotaManager.checkQuota(RegionServerRpcQuotaManager.java:168) at org.apache.hadoop.hbase.regionserver.RSRpcServices.get(RSRpcServices.java:2532) at org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:44992) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:381) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:87) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:415) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:411) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:193) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:214) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-11-27 05:00:05,939 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2489): Flush column family: info of 1588230740 because time of oldest edit=1701061178254 is > 3600000 from now =1701068384728 2023-11-27 05:00:05,939 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2489): Flush column family: table of 1588230740 because time of oldest edit=1701061178270 is > 3600000 from now =1701068384728 2023-11-27 05:00:05,940 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 1588230740 2/3 column families, dataSize=8.78 KB heapSize=15.55 KB; info={dataSize=8.24 KB, heapSize=13.65 KB, offHeapSize=0 B}; table={dataSize=558 B, heapSize=1.66 KB, offHeapSize=0 B} 2023-11-27 05:00:06,012 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=8.24 KB at sequenceid=36 (bloomFilter=false), to=hdfs://localhost:41015/user/jenkins/test-data/88c861d8-82fa-b545-1769-68eb426e44d0/data/hbase/meta/1588230740/.tmp/info/eaf2f890b1614a5e81692d645766ff67 2023-11-27 05:00:06,071 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=558 B at sequenceid=36 (bloomFilter=false), to=hdfs://localhost:41015/user/jenkins/test-data/88c861d8-82fa-b545-1769-68eb426e44d0/data/hbase/meta/1588230740/.tmp/table/73e1abe8057642f2a264ca210b93498e 2023-11-27 05:00:06,079 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:41015/user/jenkins/test-data/88c861d8-82fa-b545-1769-68eb426e44d0/data/hbase/meta/1588230740/.tmp/info/eaf2f890b1614a5e81692d645766ff67 as hdfs://localhost:41015/user/jenkins/test-data/88c861d8-82fa-b545-1769-68eb426e44d0/data/hbase/meta/1588230740/info/eaf2f890b1614a5e81692d645766ff67 2023-11-27 05:00:06,087 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost:41015/user/jenkins/test-data/88c861d8-82fa-b545-1769-68eb426e44d0/data/hbase/meta/1588230740/info/eaf2f890b1614a5e81692d645766ff67, entries=70, sequenceid=36, filesize=13.1 K 2023-11-27 05:00:06,089 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:41015/user/jenkins/test-data/88c861d8-82fa-b545-1769-68eb426e44d0/data/hbase/meta/1588230740/.tmp/table/73e1abe8057642f2a264ca210b93498e as hdfs://localhost:41015/user/jenkins/test-data/88c861d8-82fa-b545-1769-68eb426e44d0/data/hbase/meta/1588230740/table/73e1abe8057642f2a264ca210b93498e 2023-11-27 05:00:06,096 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost:41015/user/jenkins/test-data/88c861d8-82fa-b545-1769-68eb426e44d0/data/hbase/meta/1588230740/table/73e1abe8057642f2a264ca210b93498e, entries=12, sequenceid=36, filesize=5.1 K 2023-11-27 05:00:06,099 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~8.78 KB/8992, heapSize ~15.27 KB/15640, currentSize=0 B/0 for 1588230740 in 0ms, sequenceid=36, compaction requested=false 2023-11-27 05:00:06,101 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 1588230740: 2023-11-27 05:00:06,101 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 708094d2c6013f8353947ca009f33ef1 1/1 column families, dataSize=117 B heapSize=600 B 2023-11-27 05:00:06,121 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=117 B at sequenceid=7 (bloomFilter=true), to=hdfs://localhost:41015/user/jenkins/test-data/88c861d8-82fa-b545-1769-68eb426e44d0/data/hbase/namespace/708094d2c6013f8353947ca009f33ef1/.tmp/info/0cbf51f8ba3a48c3abfb51cb1c9c0a1c 2023-11-27 05:00:06,128 DEBUG [Listener at 
localhost/34689] quotas.ThrottleQuotaTestUtil(184): QuotaCache 2023-11-27 05:00:06,128 DEBUG [Listener at localhost/34689] quotas.ThrottleQuotaTestUtil(185): {TestNs=QuotaState(ts=1701068384728 bypass)} 2023-11-27 05:00:06,128 DEBUG [Listener at localhost/34689] quotas.ThrottleQuotaTestUtil(186): {TestNs:TestTable=QuotaState(ts=1701068384728 bypass)} 2023-11-27 05:00:06,128 DEBUG [Listener at localhost/34689] quotas.ThrottleQuotaTestUtil(187): {jenkins=UserQuotaState(ts=1701068384728 bypass)} 2023-11-27 05:00:06,128 DEBUG [Listener at localhost/34689] quotas.ThrottleQuotaTestUtil(188): {all=QuotaState(ts=1701068384728 bypass)} 2023-11-27 05:00:06,129 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:41015/user/jenkins/test-data/88c861d8-82fa-b545-1769-68eb426e44d0/data/hbase/namespace/708094d2c6013f8353947ca009f33ef1/.tmp/info/0cbf51f8ba3a48c3abfb51cb1c9c0a1c as hdfs://localhost:41015/user/jenkins/test-data/88c861d8-82fa-b545-1769-68eb426e44d0/data/hbase/namespace/708094d2c6013f8353947ca009f33ef1/info/0cbf51f8ba3a48c3abfb51cb1c9c0a1c 2023-11-27 05:00:06,137 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost:41015/user/jenkins/test-data/88c861d8-82fa-b545-1769-68eb426e44d0/data/hbase/namespace/708094d2c6013f8353947ca009f33ef1/info/0cbf51f8ba3a48c3abfb51cb1c9c0a1c, entries=3, sequenceid=7, filesize=4.9 K 2023-11-27 05:00:06,138 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~117 B/117, heapSize ~584 B/584, currentSize=0 B/0 for 708094d2c6013f8353947ca009f33ef1 in 0ms, sequenceid=7, compaction requested=false 2023-11-27 05:00:06,138 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 708094d2c6013f8353947ca009f33ef1: 2023-11-27 05:00:06,139 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2489): Flush column family: q of be5ef4f3dfb2c43b447798061e19f02f because time of oldest edit=1701061184724 is > 3600000 from now =1701068384728 2023-11-27 05:00:06,139 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing be5ef4f3dfb2c43b447798061e19f02f 1/2 column families, dataSize=122 B heapSize=856 B; q={dataSize=92 B, heapSize=496 B, offHeapSize=0 B} 2023-11-27 05:00:06,156 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=92 B at sequenceid=6 (bloomFilter=true), to=hdfs://localhost:41015/user/jenkins/test-data/88c861d8-82fa-b545-1769-68eb426e44d0/data/hbase/quota/be5ef4f3dfb2c43b447798061e19f02f/.tmp/q/cebb3f4ad3fc4e3ca2cce2152e1bdf60 2023-11-27 05:00:06,163 INFO [MemStoreFlusher.0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for cebb3f4ad3fc4e3ca2cce2152e1bdf60 2023-11-27 05:00:06,164 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:41015/user/jenkins/test-data/88c861d8-82fa-b545-1769-68eb426e44d0/data/hbase/quota/be5ef4f3dfb2c43b447798061e19f02f/.tmp/q/cebb3f4ad3fc4e3ca2cce2152e1bdf60 as hdfs://localhost:41015/user/jenkins/test-data/88c861d8-82fa-b545-1769-68eb426e44d0/data/hbase/quota/be5ef4f3dfb2c43b447798061e19f02f/q/cebb3f4ad3fc4e3ca2cce2152e1bdf60 2023-11-27 05:00:06,170 INFO [MemStoreFlusher.0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for cebb3f4ad3fc4e3ca2cce2152e1bdf60 2023-11-27 05:00:06,170 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added 
hdfs://localhost:41015/user/jenkins/test-data/88c861d8-82fa-b545-1769-68eb426e44d0/data/hbase/quota/be5ef4f3dfb2c43b447798061e19f02f/q/cebb3f4ad3fc4e3ca2cce2152e1bdf60, entries=1, sequenceid=6, filesize=4.9 K 2023-11-27 05:00:06,171 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~92 B/92, heapSize ~480 B/480, currentSize=30 B/30 for be5ef4f3dfb2c43b447798061e19f02f in 0ms, sequenceid=6, compaction requested=false 2023-11-27 05:00:06,172 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for be5ef4f3dfb2c43b447798061e19f02f: 2023-11-27 05:00:06,379 DEBUG [Listener at localhost/34689] quotas.ThrottleQuotaTestUtil(184): QuotaCache 2023-11-27 05:00:06,379 DEBUG [Listener at localhost/34689] quotas.ThrottleQuotaTestUtil(185): {default=QuotaState(ts=1701068384728 bypass), TestNs=QuotaState(ts=1701068384728 bypass)} 2023-11-27 05:00:06,379 DEBUG [Listener at localhost/34689] quotas.ThrottleQuotaTestUtil(186): {TestQuotaAdmin0=QuotaState(ts=1701068384728 bypass), TestNs:TestTable=QuotaState(ts=1701068384728 bypass), TestQuotaAdmin2=QuotaState(ts=1701068384728 bypass), TestQuotaAdmin1=QuotaState(ts=1701068384728 bypass)} 2023-11-27 05:00:06,379 DEBUG [Listener at localhost/34689] quotas.ThrottleQuotaTestUtil(187): {jenkins=UserQuotaState(ts=1701068384728 bypass)} 2023-11-27 05:00:06,379 DEBUG [Listener at localhost/34689] quotas.ThrottleQuotaTestUtil(188): {all=QuotaState(ts=1701068384728 bypass)} 2023-11-27 05:00:06,395 INFO [Listener at localhost/34689] hbase.ResourceChecker(175): after: quotas.TestClusterScopeQuotaThrottle#testUserTableClusterScopeQuota Thread=293 (was 302), OpenFileDescriptor=623 (was 630), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=122 (was 160), ProcessCount=171 (was 168) - ProcessCount LEAK? -, AvailableMemoryMB=7900 (was 7992) 2023-11-27 05:00:06,406 INFO [Listener at localhost/34689] hbase.ResourceChecker(147): before: quotas.TestClusterScopeQuotaThrottle#testUserNamespaceClusterScopeQuota Thread=293, OpenFileDescriptor=623, MaxFileDescriptor=60000, SystemLoadAverage=122, ProcessCount=171, AvailableMemoryMB=7899 2023-11-27 05:00:06,521 INFO [regionserver/jenkins-hbase4:0.Chore.1] regionserver.HRegionServer$PeriodicMemStoreFlusher(1921): MemstoreFlusherChore requesting flush of hbase:quota,,1701061178843.be5ef4f3dfb2c43b447798061e19f02f. 
because be5ef4f3dfb2c43b447798061e19f02f/q has an old edit so flush to free WALs after random delay 71298 ms 2023-11-27 05:00:06,697 WARN [RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=41841] ipc.RpcServer(528): (responseTooSlow): {"call":"Multi(org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$MultiRequest)","multi.gets":2,"starttimems":"1701072169928","responsesize":"20","method":"Multi","param":"region= hbase:quota,,1701061178843.be5ef4f3dfb2c43b447798061e19f02f., for 2 action(s) and 1st row key=t.TestQuotaAdmin0","processingtimems":11500,"client":"172.31.14.131:53770","multi.service_calls":0,"queuetimems":600,"class":"MiniHBaseClusterRegionServer","multi.mutations":0} 2023-11-27 05:00:06,698 WARN [RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=41841] ipc.RpcServer(528): (responseTooSlow): {"call":"Get(org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$GetRequest)","starttimems":"1701072231728","responsesize":"4","method":"Get","param":"region= hbase:quota,,1701061178843.be5ef4f3dfb2c43b447798061e19f02f., row=u.jenkins","processingtimems":14300,"client":"172.31.14.131:53770","queuetimems":700,"class":"MiniHBaseClusterRegionServer"} 2023-11-27 05:00:06,716 WARN [RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=41841] ipc.RpcServer(528): (responseTooSlow): {"call":"Get(org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$GetRequest)","starttimems":"1701072485028","responsesize":"4","method":"Get","param":"region= hbase:quota,,1701061178843.be5ef4f3dfb2c43b447798061e19f02f., row=u.jenkins","processingtimems":13100,"client":"172.31.14.131:53770","queuetimems":400,"class":"MiniHBaseClusterRegionServer"} 2023-11-27 05:00:06,773 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2489): Flush column family: q of be5ef4f3dfb2c43b447798061e19f02f because time of oldest edit=1701068384728 is > 3600000 from now =1701073017528 2023-11-27 05:00:06,773 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2489): Flush column family: u of be5ef4f3dfb2c43b447798061e19f02f because time of oldest edit=1701064784728 is > 3600000 from now =1701073017528 2023-11-27 05:00:06,773 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing be5ef4f3dfb2c43b447798061e19f02f 2/2 column families, dataSize=146 B heapSize=880 B 2023-11-27 05:00:06,811 WARN [RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=41841] ipc.RpcServer(528): (responseTooSlow): {"call":"Get(org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$GetRequest)","starttimems":"1701073386928","responsesize":"4","method":"Get","param":"region= hbase:quota,,1701061178843.be5ef4f3dfb2c43b447798061e19f02f., row=u.jenkins","processingtimems":11600,"client":"172.31.14.131:53770","queuetimems":0,"class":"MiniHBaseClusterRegionServer"} 2023-11-27 05:00:06,819 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=116 B at sequenceid=11 (bloomFilter=true), to=hdfs://localhost:41015/user/jenkins/test-data/88c861d8-82fa-b545-1769-68eb426e44d0/data/hbase/quota/be5ef4f3dfb2c43b447798061e19f02f/.tmp/q/d4f1001ca3fe43239cabd862cdae3d14 2023-11-27 05:00:06,866 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=30 B at sequenceid=11 (bloomFilter=true), to=hdfs://localhost:41015/user/jenkins/test-data/88c861d8-82fa-b545-1769-68eb426e44d0/data/hbase/quota/be5ef4f3dfb2c43b447798061e19f02f/.tmp/u/7f0945571f9b459089fb5fa97732d1e1 2023-11-27 05:00:06,878 INFO [MemStoreFlusher.0] regionserver.StoreFileReader(520): Loaded Delete Family 
Bloom (CompoundBloomFilter) metadata for 7f0945571f9b459089fb5fa97732d1e1 2023-11-27 05:00:06,880 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:41015/user/jenkins/test-data/88c861d8-82fa-b545-1769-68eb426e44d0/data/hbase/quota/be5ef4f3dfb2c43b447798061e19f02f/.tmp/q/d4f1001ca3fe43239cabd862cdae3d14 as hdfs://localhost:41015/user/jenkins/test-data/88c861d8-82fa-b545-1769-68eb426e44d0/data/hbase/quota/be5ef4f3dfb2c43b447798061e19f02f/q/d4f1001ca3fe43239cabd862cdae3d14 2023-11-27 05:00:06,890 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost:41015/user/jenkins/test-data/88c861d8-82fa-b545-1769-68eb426e44d0/data/hbase/quota/be5ef4f3dfb2c43b447798061e19f02f/q/d4f1001ca3fe43239cabd862cdae3d14, entries=1, sequenceid=11, filesize=4.8 K 2023-11-27 05:00:06,892 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:41015/user/jenkins/test-data/88c861d8-82fa-b545-1769-68eb426e44d0/data/hbase/quota/be5ef4f3dfb2c43b447798061e19f02f/.tmp/u/7f0945571f9b459089fb5fa97732d1e1 as hdfs://localhost:41015/user/jenkins/test-data/88c861d8-82fa-b545-1769-68eb426e44d0/data/hbase/quota/be5ef4f3dfb2c43b447798061e19f02f/u/7f0945571f9b459089fb5fa97732d1e1 2023-11-27 05:00:06,901 INFO [MemStoreFlusher.0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 7f0945571f9b459089fb5fa97732d1e1 2023-11-27 05:00:06,901 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost:41015/user/jenkins/test-data/88c861d8-82fa-b545-1769-68eb426e44d0/data/hbase/quota/be5ef4f3dfb2c43b447798061e19f02f/u/7f0945571f9b459089fb5fa97732d1e1, entries=1, sequenceid=11, filesize=4.9 K 2023-11-27 05:00:06,902 WARN [AsyncFSWAL-0-hdfs://localhost:41015/user/jenkins/test-data/88c861d8-82fa-b545-1769-68eb426e44d0-prefix:jenkins-hbase4.apache.org,41841,1701061176322] wal.MetricsWAL(65): AsyncFSWAL-0-hdfs://localhost:41015/user/jenkins/test-data/88c861d8-82fa-b545-1769-68eb426e44d0-prefix:jenkins-hbase4.apache.org,41841,1701061176322 took 2100 ms appending an edit to wal; len~=244 2023-11-27 05:00:06,903 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~146 B/146, heapSize ~848 B/848, currentSize=0 B/0 for be5ef4f3dfb2c43b447798061e19f02f in 1258800ms, sequenceid=11, compaction requested=false 2023-11-27 05:00:06,903 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for be5ef4f3dfb2c43b447798061e19f02f: 2023-11-27 05:00:06,904 WARN [RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=41841] ipc.RpcServer(528): (responseTooSlow): {"call":"Get(org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$GetRequest)","starttimems":"1701074183028","responsesize":"4","method":"Get","param":"region= hbase:quota,,1701061178843.be5ef4f3dfb2c43b447798061e19f02f., row=u.jenkins","processingtimems":126700,"client":"172.31.14.131:53770","queuetimems":400,"class":"MiniHBaseClusterRegionServer"} 2023-11-27 05:00:06,925 WARN [RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=41841] ipc.RpcServer(528): (responseTooSlow): {"call":"Multi(org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$MultiRequest)","multi.gets":2,"starttimems":"1701074443528","responsesize":"20","method":"Multi","param":"region= hbase:quota,,1701061178843.be5ef4f3dfb2c43b447798061e19f02f., for 2 action(s) and 1st row 
key=t.TestQuotaAdmin0","processingtimems":10200,"client":"172.31.14.131:53770","multi.service_calls":0,"queuetimems":800,"class":"MiniHBaseClusterRegionServer","multi.mutations":0} 2023-11-27 05:00:06,926 WARN [RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=41841] ipc.RpcServer(528): (responseTooSlow): {"call":"Get(org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$GetRequest)","starttimems":"1701074477328","responsesize":"4","method":"Get","param":"region= hbase:quota,,1701061178843.be5ef4f3dfb2c43b447798061e19f02f., row=u.jenkins","processingtimems":12100,"client":"172.31.14.131:53770","queuetimems":500,"class":"MiniHBaseClusterRegionServer"} 2023-11-27 05:00:06,937 WARN [RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=41841] ipc.RpcServer(528): (responseTooSlow): {"call":"Get(org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$GetRequest)","starttimems":"1701074626728","responsesize":"4","method":"Get","param":"region= hbase:quota,,1701061178843.be5ef4f3dfb2c43b447798061e19f02f., row=u.jenkins","processingtimems":12900,"client":"172.31.14.131:53770","queuetimems":600,"class":"MiniHBaseClusterRegionServer"} 2023-11-27 05:00:06,970 WARN [RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=41841] ipc.RpcServer(528): (responseTooSlow): {"call":"Get(org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$GetRequest)","starttimems":"1701075090528","responsesize":"4","method":"Get","param":"region= hbase:quota,,1701061178843.be5ef4f3dfb2c43b447798061e19f02f., row=u.jenkins","processingtimems":11300,"client":"172.31.14.131:53770","queuetimems":500,"class":"MiniHBaseClusterRegionServer"} 2023-11-27 05:00:06,979 WARN [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33323] ipc.RpcServer(528): (responseTooSlow): {"call":"GetClusterStatus(org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$GetClusterStatusRequest)","starttimems":"1701075170028","responsesize":"339","method":"GetClusterStatus","param":"TODO: class org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$GetClusterStatusRequest","processingtimems":11400,"client":"172.31.14.131:43828","queuetimems":800,"class":"HMaster"} 2023-11-27 05:00:06,982 WARN [RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=41841] ipc.RpcServer(528): (responseTooSlow): {"call":"Multi(org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$MultiRequest)","multi.gets":2,"starttimems":"1701075254828","responsesize":"20","method":"Multi","param":"region= hbase:quota,,1701061178843.be5ef4f3dfb2c43b447798061e19f02f., for 2 action(s) and 1st row key=t.TestQuotaAdmin0","processingtimems":12300,"client":"172.31.14.131:53770","multi.service_calls":0,"queuetimems":600,"class":"MiniHBaseClusterRegionServer","multi.mutations":0} 2023-11-27 05:00:06,983 WARN [RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=41841] ipc.RpcServer(528): (responseTooSlow): {"call":"Get(org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$GetRequest)","starttimems":"1701075299528","responsesize":"4","method":"Get","param":"region= hbase:quota,,1701061178843.be5ef4f3dfb2c43b447798061e19f02f., row=u.jenkins","processingtimems":17600,"client":"172.31.14.131:53770","queuetimems":500,"class":"MiniHBaseClusterRegionServer"} 2023-11-27 05:00:06,999 WARN [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33323] ipc.RpcServer(528): (responseTooSlow): 
{"call":"GetClusterStatus(org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$GetClusterStatusRequest)","starttimems":"1701075398228","responsesize":"339","method":"GetClusterStatus","param":"TODO: class org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$GetClusterStatusRequest","processingtimems":12600,"client":"172.31.14.131:43828","queuetimems":600,"class":"HMaster"} 2023-11-27 05:00:07,002 WARN [RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=41841] ipc.RpcServer(528): (responseTooSlow): {"call":"Get(org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$GetRequest)","starttimems":"1701075445328","responsesize":"4","method":"Get","param":"region= hbase:quota,,1701061178843.be5ef4f3dfb2c43b447798061e19f02f., row=n.TestNs","processingtimems":10700,"client":"172.31.14.131:53770","queuetimems":600,"class":"MiniHBaseClusterRegionServer"} 2023-11-27 05:00:07,005 WARN [RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=41841] ipc.RpcServer(528): (responseTooSlow): {"call":"Get(org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$GetRequest)","starttimems":"1701075515128","responsesize":"4","method":"Get","param":"region= hbase:quota,,1701061178843.be5ef4f3dfb2c43b447798061e19f02f., row=u.jenkins","processingtimems":15700,"client":"172.31.14.131:53770","queuetimems":900,"class":"MiniHBaseClusterRegionServer"} 2023-11-27 05:00:07,022 WARN [RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=41841] ipc.RpcServer(528): (responseTooSlow): {"call":"Get(org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$GetRequest)","starttimems":"1701075716428","responsesize":"4","method":"Get","param":"region= hbase:quota,,1701061178843.be5ef4f3dfb2c43b447798061e19f02f., row=u.jenkins","processingtimems":10300,"client":"172.31.14.131:53770","queuetimems":600,"class":"MiniHBaseClusterRegionServer"} 2023-11-27 05:00:07,037 WARN [RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=41841] ipc.RpcServer(528): (responseTooSlow): {"call":"Get(org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$GetRequest)","starttimems":"1701075852728","responsesize":"4","method":"Get","param":"region= hbase:quota,,1701061178843.be5ef4f3dfb2c43b447798061e19f02f., row=n.TestNs","processingtimems":14400,"client":"172.31.14.131:53770","queuetimems":700,"class":"MiniHBaseClusterRegionServer"} 2023-11-27 05:00:07,040 WARN [RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=41841] ipc.RpcServer(528): (responseTooSlow): {"call":"Get(org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$GetRequest)","starttimems":"1701075928928","responsesize":"4","method":"Get","param":"region= hbase:quota,,1701061178843.be5ef4f3dfb2c43b447798061e19f02f., row=u.jenkins","processingtimems":21500,"client":"172.31.14.131:53770","queuetimems":400,"class":"MiniHBaseClusterRegionServer"} 2023-11-27 05:00:07,055 WARN [RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=41841] ipc.RpcServer(528): (responseTooSlow): {"call":"Multi(org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$MultiRequest)","multi.gets":2,"starttimems":"1701076110928","responsesize":"20","method":"Multi","param":"region= hbase:quota,,1701061178843.be5ef4f3dfb2c43b447798061e19f02f., for 2 action(s) and 1st row key=t.TestQuotaAdmin0","processingtimems":12300,"client":"172.31.14.131:53770","multi.service_calls":0,"queuetimems":500,"class":"MiniHBaseClusterRegionServer","multi.mutations":0} 2023-11-27 05:00:07,057 WARN [RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=41841] 
ipc.RpcServer(528): (responseTooSlow): {"call":"Get(org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$GetRequest)","starttimems":"1701076162828","responsesize":"4","method":"Get","param":"region= hbase:quota,,1701061178843.be5ef4f3dfb2c43b447798061e19f02f., row=u.jenkins","processingtimems":15500,"client":"172.31.14.131:53770","queuetimems":600,"class":"MiniHBaseClusterRegionServer"} 2023-11-27 05:00:07,076 WARN [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33323] ipc.RpcServer(528): (responseTooSlow): {"call":"GetClusterStatus(org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$GetClusterStatusRequest)","starttimems":"1701076344428","responsesize":"339","method":"GetClusterStatus","param":"TODO: class org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$GetClusterStatusRequest","processingtimems":15000,"client":"172.31.14.131:43828","queuetimems":700,"class":"HMaster"} 2023-11-27 05:00:07,081 WARN [RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=41841] ipc.RpcServer(528): (responseTooSlow): {"call":"Get(org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$GetRequest)","starttimems":"1701076459228","responsesize":"4","method":"Get","param":"region= hbase:quota,,1701061178843.be5ef4f3dfb2c43b447798061e19f02f., row=u.jenkins","processingtimems":18700,"client":"172.31.14.131:53770","queuetimems":900,"class":"MiniHBaseClusterRegionServer"} 2023-11-27 05:00:07,098 WARN [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33323] ipc.RpcServer(528): (responseTooSlow): {"call":"GetClusterStatus(org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$GetClusterStatusRequest)","starttimems":"1701076574028","responsesize":"339","method":"GetClusterStatus","param":"TODO: class org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$GetClusterStatusRequest","processingtimems":11800,"client":"172.31.14.131:43828","queuetimems":600,"class":"HMaster"} 2023-11-27 05:00:07,103 WARN [RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=41841] ipc.RpcServer(528): (responseTooSlow): {"call":"Get(org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$GetRequest)","starttimems":"1701076675928","responsesize":"4","method":"Get","param":"region= hbase:quota,,1701061178843.be5ef4f3dfb2c43b447798061e19f02f., row=u.jenkins","processingtimems":16700,"client":"172.31.14.131:53770","queuetimems":400,"class":"MiniHBaseClusterRegionServer"} 2023-11-27 05:00:07,121 WARN [RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=41841] ipc.RpcServer(528): (responseTooSlow): {"call":"Multi(org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$MultiRequest)","multi.gets":2,"starttimems":"1701076850828","responsesize":"20","method":"Multi","param":"region= hbase:quota,,1701061178843.be5ef4f3dfb2c43b447798061e19f02f., for 2 action(s) and 1st row key=t.TestQuotaAdmin0","processingtimems":11200,"client":"172.31.14.131:53770","multi.service_calls":0,"queuetimems":600,"class":"MiniHBaseClusterRegionServer","multi.mutations":0} 2023-11-27 05:00:07,122 WARN [RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=41841] ipc.RpcServer(528): (responseTooSlow): {"call":"Get(org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$GetRequest)","starttimems":"1701076893928","responsesize":"4","method":"Get","param":"region= hbase:quota,,1701061178843.be5ef4f3dfb2c43b447798061e19f02f., row=u.jenkins","processingtimems":12700,"client":"172.31.14.131:53770","queuetimems":600,"class":"MiniHBaseClusterRegionServer"} 2023-11-27 05:00:07,137 WARN 
[RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=41841] ipc.RpcServer(528): (responseTooSlow): {"call":"Get(org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$GetRequest)","starttimems":"1701077052928","responsesize":"4","method":"Get","param":"region= hbase:quota,,1701061178843.be5ef4f3dfb2c43b447798061e19f02f., row=u.jenkins","processingtimems":10700,"client":"172.31.14.131:53770","queuetimems":400,"class":"MiniHBaseClusterRegionServer"} 2023-11-27 05:00:07,147 WARN [RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=41841] ipc.RpcServer(528): (responseTooSlow): {"call":"Get(org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$GetRequest)","starttimems":"1701077184228","responsesize":"4","method":"Get","param":"region= hbase:quota,,1701061178843.be5ef4f3dfb2c43b447798061e19f02f., row=u.jenkins","processingtimems":11500,"client":"172.31.14.131:53770","queuetimems":400,"class":"MiniHBaseClusterRegionServer"} 2023-11-27 05:00:07,156 WARN [RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=41841] ipc.RpcServer(528): (responseTooSlow): {"call":"Get(org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$GetRequest)","starttimems":"1701077323028","responsesize":"4","method":"Get","param":"region= hbase:quota,,1701061178843.be5ef4f3dfb2c43b447798061e19f02f., row=u.jenkins","processingtimems":11000,"client":"172.31.14.131:53770","queuetimems":600,"class":"MiniHBaseClusterRegionServer"} 2023-11-27 05:00:07,178 WARN [RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=41841] ipc.RpcServer(528): (responseTooSlow): {"call":"Get(org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$GetRequest)","starttimems":"1701077641028","responsesize":"4","method":"Get","param":"region= hbase:quota,,1701061178843.be5ef4f3dfb2c43b447798061e19f02f., row=u.jenkins","processingtimems":11000,"client":"172.31.14.131:53770","queuetimems":400,"class":"MiniHBaseClusterRegionServer"} 2023-11-27 05:00:07,198 WARN [RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=41841] ipc.RpcServer(528): (responseTooSlow): {"call":"Get(org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$GetRequest)","starttimems":"1701077960928","responsesize":"4","method":"Get","param":"region= hbase:quota,,1701061178843.be5ef4f3dfb2c43b447798061e19f02f., row=u.jenkins","processingtimems":16000,"client":"172.31.14.131:53770","queuetimems":600,"class":"MiniHBaseClusterRegionServer"} 2023-11-27 05:00:07,211 WARN [RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=41841] ipc.RpcServer(528): (responseTooSlow): {"call":"Multi(org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$MultiRequest)","multi.gets":2,"starttimems":"1701078147028","responsesize":"20","method":"Multi","param":"region= hbase:quota,,1701061178843.be5ef4f3dfb2c43b447798061e19f02f., for 2 action(s) and 1st row key=t.TestQuotaAdmin0","processingtimems":12100,"client":"172.31.14.131:53770","multi.service_calls":0,"queuetimems":1000,"class":"MiniHBaseClusterRegionServer","multi.mutations":0} 2023-11-27 05:00:07,213 WARN [RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=41841] ipc.RpcServer(528): (responseTooSlow): {"call":"Get(org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$GetRequest)","starttimems":"1701078188028","responsesize":"4","method":"Get","param":"region= hbase:quota,,1701061178843.be5ef4f3dfb2c43b447798061e19f02f., row=u.jenkins","processingtimems":14400,"client":"172.31.14.131:53770","queuetimems":600,"class":"MiniHBaseClusterRegionServer"} 2023-11-27 05:00:07,224 WARN 
[RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=41841] ipc.RpcServer(528): (responseTooSlow): {"call":"Get(org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$GetRequest)","starttimems":"1701078364928","responsesize":"4","method":"Get","param":"region= hbase:quota,,1701061178843.be5ef4f3dfb2c43b447798061e19f02f., row=u.jenkins","processingtimems":10300,"client":"172.31.14.131:53770","queuetimems":500,"class":"MiniHBaseClusterRegionServer"} 2023-11-27 05:00:07,253 WARN [RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=41841] ipc.RpcServer(528): (responseTooSlow): {"call":"Multi(org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$MultiRequest)","multi.gets":2,"starttimems":"1701078778228","responsesize":"20","method":"Multi","param":"region= hbase:quota,,1701061178843.be5ef4f3dfb2c43b447798061e19f02f., for 2 action(s) and 1st row key=t.TestQuotaAdmin0","processingtimems":13600,"client":"172.31.14.131:53770","multi.service_calls":0,"queuetimems":500,"class":"MiniHBaseClusterRegionServer","multi.mutations":0} 2023-11-27 05:00:07,255 WARN [RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=41841] ipc.RpcServer(528): (responseTooSlow): {"call":"Get(org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$GetRequest)","starttimems":"1701078824828","responsesize":"4","method":"Get","param":"region= hbase:quota,,1701061178843.be5ef4f3dfb2c43b447798061e19f02f., row=u.jenkins","processingtimems":17900,"client":"172.31.14.131:53770","queuetimems":600,"class":"MiniHBaseClusterRegionServer"} 2023-11-27 05:00:07,267 WARN [RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=41841] ipc.RpcServer(528): (responseTooSlow): {"call":"Multi(org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$MultiRequest)","multi.gets":2,"starttimems":"1701078980428","responsesize":"20","method":"Multi","param":"region= hbase:quota,,1701061178843.be5ef4f3dfb2c43b447798061e19f02f., for 2 action(s) and 1st row key=t.TestQuotaAdmin0","processingtimems":11000,"client":"172.31.14.131:53770","multi.service_calls":0,"queuetimems":900,"class":"MiniHBaseClusterRegionServer","multi.mutations":0} 2023-11-27 05:00:07,269 WARN [RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=41841] ipc.RpcServer(528): (responseTooSlow): {"call":"Get(org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$GetRequest)","starttimems":"1701079016228","responsesize":"4","method":"Get","param":"region= hbase:quota,,1701061178843.be5ef4f3dfb2c43b447798061e19f02f., row=u.jenkins","processingtimems":10400,"client":"172.31.14.131:53770","queuetimems":600,"class":"MiniHBaseClusterRegionServer"} 2023-11-27 05:00:07,281 WARN [RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=41841] ipc.RpcServer(528): (responseTooSlow): {"call":"Multi(org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$MultiRequest)","multi.gets":2,"starttimems":"1701079159528","responsesize":"20","method":"Multi","param":"region= hbase:quota,,1701061178843.be5ef4f3dfb2c43b447798061e19f02f., for 2 action(s) and 1st row key=t.TestQuotaAdmin0","processingtimems":10400,"client":"172.31.14.131:53770","multi.service_calls":0,"queuetimems":1000,"class":"MiniHBaseClusterRegionServer","multi.mutations":0} 2023-11-27 05:00:07,282 WARN [RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=41841] ipc.RpcServer(528): (responseTooSlow): {"call":"Get(org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$GetRequest)","starttimems":"1701079200328","responsesize":"4","method":"Get","param":"region= 
hbase:quota,,1701061178843.be5ef4f3dfb2c43b447798061e19f02f., row=u.jenkins","processingtimems":16200,"client":"172.31.14.131:53770","queuetimems":900,"class":"MiniHBaseClusterRegionServer"} 2023-11-27 05:00:07,388 WARN [RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=41841] ipc.RpcServer(528): (responseTooSlow): {"call":"Get(org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$GetRequest)","starttimems":"1701080715928","responsesize":"4","method":"Get","param":"region= hbase:quota,,1701061178843.be5ef4f3dfb2c43b447798061e19f02f., row=u.jenkins","processingtimems":14800,"client":"172.31.14.131:53770","queuetimems":900,"class":"MiniHBaseClusterRegionServer"} 2023-11-27 05:00:07,406 WARN [RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=41841] ipc.RpcServer(528): (responseTooSlow): {"call":"Get(org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$GetRequest)","starttimems":"1701080881228","responsesize":"4","method":"Get","param":"region= hbase:quota,,1701061178843.be5ef4f3dfb2c43b447798061e19f02f., row=u.jenkins","processingtimems":10800,"client":"172.31.14.131:53770","queuetimems":700,"class":"MiniHBaseClusterRegionServer"} 2023-11-27 05:00:07,416 WARN [RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=41841] ipc.RpcServer(528): (responseTooSlow): {"call":"Get(org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$GetRequest)","starttimems":"1701081041228","responsesize":"4","method":"Get","param":"region= hbase:quota,,1701061178843.be5ef4f3dfb2c43b447798061e19f02f., row=u.jenkins","processingtimems":12900,"client":"172.31.14.131:53770","queuetimems":900,"class":"MiniHBaseClusterRegionServer"} 2023-11-27 05:00:07,426 WARN [RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=41841] ipc.RpcServer(528): (responseTooSlow): {"call":"Get(org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$GetRequest)","starttimems":"1701081169328","responsesize":"4","method":"Get","param":"region= hbase:quota,,1701061178843.be5ef4f3dfb2c43b447798061e19f02f., row=u.jenkins","processingtimems":16100,"client":"172.31.14.131:53770","queuetimems":600,"class":"MiniHBaseClusterRegionServer"} 2023-11-27 05:00:07,456 WARN [RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=41841] ipc.RpcServer(528): (responseTooSlow): {"call":"Get(org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$GetRequest)","starttimems":"1701081667628","responsesize":"4","method":"Get","param":"region= hbase:quota,,1701061178843.be5ef4f3dfb2c43b447798061e19f02f., row=u.jenkins","processingtimems":13300,"client":"172.31.14.131:53770","queuetimems":400,"class":"MiniHBaseClusterRegionServer"} 2023-11-27 05:00:07,495 WARN [RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=41841] ipc.CallRunner(105): Dropping timed out call: callId: 479 service: ClientService methodName: Get size: 130 connection: 172.31.14.131:53770 deadline: 1701082207428 param: region= hbase:quota,,1701061178843.be5ef4f3dfb2c43b447798061e19f02f., row=t.TestNs:TestTable connection: 172.31.14.131:53770 2023-11-27 05:00:29,581 INFO [regionserver/jenkins-hbase4:0.Chore.1] hbase.ScheduledChore(142): Chore: MemstoreFlusherChore missed its start time 2023-11-27 05:00:29,581 INFO [regionserver/jenkins-hbase4:0.Chore.2] hbase.ScheduledChore(142): Chore: CompactionChecker missed its start time 2023-11-27 05:00:29,585 INFO [JvmPauseMonitor] util.JvmPauseMonitor$Monitor(172): Detected pause in JVM or host machine (eg GC): pause of approximately 4007ms GC pool 'PS MarkSweep' had collection(s): count=1 time=3460ms 
GC pool 'PS Scavenge' had collection(s): count=1 time=694ms 2023-11-27 05:00:29,585 INFO [JvmPauseMonitor] util.JvmPauseMonitor$Monitor(172): Detected pause in JVM or host machine (eg GC): pause of approximately 4007ms GC pool 'PS MarkSweep' had collection(s): count=1 time=3460ms GC pool 'PS Scavenge' had collection(s): count=1 time=694ms 2023-11-27 05:00:52,971 INFO [regionserver/jenkins-hbase4:0.Chore.2] hbase.ScheduledChore(142): Chore: MemstoreFlusherChore missed its start time 2023-11-27 05:00:52,971 INFO [JvmPauseMonitor] util.JvmPauseMonitor$Monitor(172): Detected pause in JVM or host machine (eg GC): pause of approximately 4669ms GC pool 'PS MarkSweep' had collection(s): count=1 time=4686ms GC pool 'PS Scavenge' had collection(s): count=1 time=482ms 2023-11-27 05:00:52,971 INFO [JvmPauseMonitor] util.JvmPauseMonitor$Monitor(172): Detected pause in JVM or host machine (eg GC): pause of approximately 4669ms GC pool 'PS MarkSweep' had collection(s): count=1 time=4686ms GC pool 'PS Scavenge' had collection(s): count=1 time=482ms 2023-11-27 05:00:52,971 INFO [regionserver/jenkins-hbase4:0.Chore.1] hbase.ScheduledChore(142): Chore: CompactionChecker missed its start time 2023-11-27 05:00:53,870 WARN [RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=41841] ipc.RpcServer(528): (responseTooSlow): {"call":"Scan(org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ScanRequest)","starttimems":"1702429525428","responsesize":"93","method":"Scan","param":"region { type: REGION_NAME value: \"hbase:quota,,1701061178843.be5ef4f3dfb2c43b447798061e19f02f.\" } scan { column { family: \"q\" } time_range { from: 0 ","processingtimems":26000,"client":"172.31.14.131:50776","queuetimems":1100,"class":"MiniHBaseClusterRegionServer","scandetails":"table: hbase:quota region: hbase:quota,,1701061178843.be5ef4f3dfb2c43b447798061e19f02f."} 2023-11-27 05:01:04,924 INFO [regionserver/jenkins-hbase4:0.Chore.3] hbase.ScheduledChore(142): Chore: CompactionChecker missed its start time 2023-11-27 05:01:04,925 INFO [JvmPauseMonitor] util.JvmPauseMonitor$Monitor(172): Detected pause in JVM or host machine (eg GC): pause of approximately 9451ms GC pool 'PS MarkSweep' had collection(s): count=1 time=9694ms 2023-11-27 05:01:04,924 INFO [regionserver/jenkins-hbase4:0.Chore.1] hbase.ScheduledChore(142): Chore: MemstoreFlusherChore missed its start time 2023-11-27 05:01:04,925 INFO [JvmPauseMonitor] util.JvmPauseMonitor$Monitor(172): Detected pause in JVM or host machine (eg GC): pause of approximately 9451ms GC pool 'PS MarkSweep' had collection(s): count=1 time=9694ms 2023-11-27 05:01:04,929 WARN [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=33323] ipc.CallRunner(105): Dropping timed out call: callId: 655 service: RegionServerStatusService methodName: RegionServerReport size: 292 connection: 172.31.14.131:42035 deadline: 1702483522928 param: server host_name: "jenkins-hbase4.apache.org" port: 41853 start_code: 1701061176279 load { numberOfRequests: 0 } connection: 172.31.14.131:42035 2023-11-27 05:01:04,929 WARN [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=33323] ipc.CallRunner(105): Dropping timed out call: callId: 660 service: RegionServerStatusService methodName: RegionServerReport size: 1.2 K connection: 172.31.14.131:48919 deadline: 1702483527728 param: server host_name: "jenkins-hbase4.apache.org" port: 41841 start_code: 1701061176322 load { numberOfRequests: 0 } connection: 172.31.14.131:48919 2023-11-27 05:01:12,574 INFO [JvmPauseMonitor] util.JvmPauseMonitor$Monitor(172): 
Detected pause in JVM or host machine (eg GC): pause of approximately 5147ms GC pool 'PS MarkSweep' had collection(s): count=1 time=5380ms 2023-11-27 05:01:12,574 INFO [regionserver/jenkins-hbase4:0.Chore.1] hbase.ScheduledChore(142): Chore: MemstoreFlusherChore missed its start time 2023-11-27 05:01:12,574 INFO [JvmPauseMonitor] util.JvmPauseMonitor$Monitor(172): Detected pause in JVM or host machine (eg GC): pause of approximately 5147ms GC pool 'PS MarkSweep' had collection(s): count=1 time=5380ms 2023-11-27 05:01:12,574 INFO [regionserver/jenkins-hbase4:0.Chore.3] hbase.ScheduledChore(142): Chore: CompactionChecker missed its start time 2023-11-27 05:01:12,584 WARN [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33323] ipc.RpcServer(528): (responseTooSlow): {"call":"ReportRegionSpaceUse(org.apache.hadoop.hbase.shaded.protobuf.generated.RegionServerStatusProtos$RegionSpaceUseReportRequest)","starttimems":"1702573406828","responsesize":"0","method":"ReportRegionSpaceUse","param":"TODO: class org.apache.hadoop.hbase.shaded.protobuf.generated.RegionServerStatusProtos$RegionSpaceUseReportRequest","processingtimems":82100,"client":"172.31.14.131:48919","queuetimems":2400,"class":"HMaster"} 2023-11-27 05:01:12,584 WARN [regionserver/jenkins-hbase4:0.Chore.1] quotas.QuotaCache$QuotaRefresherChore(344): Unable to read table from quota table java.net.SocketTimeoutException: callTimeout=1200000, callDuration=1491352051: Call to address=jenkins-hbase4.apache.org:41841 failed on local exception: org.apache.hadoop.hbase.ipc.CallTimeoutException: Call[id=479,methodName=Get], waitTime=1491333200ms, rpcTimeout=60000ms row 't.TestNs:TestTable' on table 'hbase:quota' at region=hbase:quota,,1701061178843.be5ef4f3dfb2c43b447798061e19f02f., hostname=jenkins-hbase4.apache.org,41841,1701061176322, seqNum=2 at org.apache.hadoop.hbase.client.RpcRetryingCallerImpl.callWithRetries(RpcRetryingCallerImpl.java:156) at org.apache.hadoop.hbase.client.HTable.get(HTable.java:370) at org.apache.hadoop.hbase.client.HTable.get(HTable.java:343) at org.apache.hadoop.hbase.client.HTable.get(HTable.java:385) at org.apache.hadoop.hbase.quotas.QuotaTableUtil.doGet(QuotaTableUtil.java:910) at org.apache.hadoop.hbase.quotas.QuotaUtil.fetchGlobalQuotas(QuotaUtil.java:374) at org.apache.hadoop.hbase.quotas.QuotaUtil.fetchTableQuotas(QuotaUtil.java:325) at org.apache.hadoop.hbase.quotas.QuotaCache$QuotaRefresherChore$2.fetchEntries(QuotaCache.java:257) at org.apache.hadoop.hbase.quotas.QuotaCache$QuotaRefresherChore.fetch(QuotaCache.java:333) at org.apache.hadoop.hbase.quotas.QuotaCache$QuotaRefresherChore.fetchTableQuotaState(QuotaCache.java:249) at org.apache.hadoop.hbase.quotas.QuotaCache$QuotaRefresherChore.chore(QuotaCache.java:226) at org.apache.hadoop.hbase.ScheduledChore.run(ScheduledChore.java:158) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308) at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180) at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294) at org.apache.hadoop.hbase.JitterScheduledThreadPoolExecutorImpl$JitteredRunnableScheduledFuture.run(JitterScheduledThreadPoolExecutorImpl.java:107) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) at java.lang.Thread.run(Thread.java:750) 
Caused by: org.apache.hadoop.hbase.ipc.CallTimeoutException: Call to address=jenkins-hbase4.apache.org:41841 failed on local exception: org.apache.hadoop.hbase.ipc.CallTimeoutException: Call[id=479,methodName=Get], waitTime=1491333200ms, rpcTimeout=60000ms at org.apache.hadoop.hbase.ipc.IPCUtil.wrapException(IPCUtil.java:219) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:384) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:87) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:415) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:411) at org.apache.hadoop.hbase.ipc.Call.setTimeout(Call.java:108) at org.apache.hadoop.hbase.ipc.RpcConnection$1.run(RpcConnection.java:134) at org.apache.hbase.thirdparty.io.netty.util.HashedWheelTimer$HashedWheelTimeout.run(HashedWheelTimer.java:715) at org.apache.hbase.thirdparty.io.netty.util.concurrent.ImmediateExecutor.execute(ImmediateExecutor.java:34) at org.apache.hbase.thirdparty.io.netty.util.HashedWheelTimer$HashedWheelTimeout.expire(HashedWheelTimer.java:703) at org.apache.hbase.thirdparty.io.netty.util.HashedWheelTimer$HashedWheelBucket.expireTimeouts(HashedWheelTimer.java:790) at org.apache.hbase.thirdparty.io.netty.util.HashedWheelTimer$Worker.run(HashedWheelTimer.java:503) ... 1 more Caused by: org.apache.hadoop.hbase.ipc.CallTimeoutException: Call[id=479,methodName=Get], waitTime=1491333200ms, rpcTimeout=60000ms at org.apache.hadoop.hbase.ipc.RpcConnection$1.run(RpcConnection.java:135) ... 6 more 2023-11-27 05:01:12,591 WARN [RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=41841] ipc.RpcServer(528): (responseTooSlow): {"call":"Scan(org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ScanRequest)","starttimems":"1702573547228","responsesize":"25","method":"Scan","param":"region { type: REGION_NAME value: \"hbase:meta,,1\" } scan { column { family: \"info\" } start_row: \"hbase:quota,u.jenkins,99999999999999\" stop_row: \"hbas ","processingtimems":124400,"client":"172.31.14.131:53770","queuetimems":900,"class":"MiniHBaseClusterRegionServer","scandetails":"table: hbase:meta region: hbase:meta,,1.1588230740"} 2023-11-27 05:01:12,593 WARN [RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=41841] ipc.RpcServer(528): (responseTooSlow): {"call":"Scan(org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ScanRequest)","starttimems":"1702573725528","responsesize":"738","method":"Scan","param":"scanner_id: 12873809452058279968 number_of_rows: 1 close_scanner: false next_call_seq: 0 client_handles_partials: true client_handles_heartbeats: true ","processingtimems":41500,"client":"172.31.14.131:53770","queuetimems":600,"class":"MiniHBaseClusterRegionServer","scandetails":"table: hbase:meta region: hbase:meta,,1.1588230740"} 2023-11-27 05:01:12,718 INFO [regionserver/jenkins-hbase4:0.Chore.1] hbase.ScheduledChore(142): Chore: CompactionChecker missed its start time 2023-11-27 05:01:12,720 INFO [regionserver/jenkins-hbase4:0.Chore.1] hbase.ScheduledChore(142): Chore: MemstoreFlusherChore missed its start time 2023-11-27 05:02:20,223 INFO [regionserver/jenkins-hbase4:0.Chore.2] hbase.ScheduledChore(142): Chore: CompactionThroughputTuner missed its start time 2023-11-27 05:02:20,244 INFO [regionserver/jenkins-hbase4:0.Chore.3] hbase.ScheduledChore(142): Chore: jenkins-hbase4.apache.org,41853,1701061176279-HeapMemoryTunerChore missed its start time 2023-11-27 05:02:46,829 INFO 
[regionserver/jenkins-hbase4:0.Chore.1] hbase.ScheduledChore(142): Chore: SpaceQuotaRefresherChore missed its start time 2023-11-27 05:03:02,229 INFO [regionserver/jenkins-hbase4:0.Chore.3] hbase.ScheduledChore(142): Chore: RegionSizeReportingChore missed its start time 2023-11-27 05:03:02,271 INFO [regionserver/jenkins-hbase4:0.Chore.2] hbase.ScheduledChore(142): Chore: MemstoreFlusherChore missed its start time 2023-11-27 05:03:02,352 INFO [regionserver/jenkins-hbase4:0.Chore.4] hbase.ScheduledChore(142): Chore: CompactionChecker missed its start time 2023-11-27 05:03:02,960 INFO [regionserver/jenkins-hbase4:0.Chore.5] hbase.ScheduledChore(142): Chore: CompactedHFilesCleaner missed its start time 2023-11-27 05:03:05,180 INFO [RS-EventLoopGroup-1-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:51290, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.0 (auth:SIMPLE), service=MasterService 2023-11-27 05:03:05,200 INFO [regionserver/jenkins-hbase4:0.Chore.7] hbase.ScheduledChore(142): Chore: MemstoreFlusherChore missed its start time 2023-11-27 05:03:05,200 INFO [regionserver/jenkins-hbase4:0.Chore.5] hbase.ScheduledChore(142): Chore: CompactionChecker missed its start time 2023-11-27 05:03:05,204 WARN [RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=41841] ipc.RpcServer(528): (responseTooSlow): {"call":"Get(org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$GetRequest)","starttimems":"1702608312928","responsesize":"4","method":"Get","param":"region= hbase:quota,,1701061178843.be5ef4f3dfb2c43b447798061e19f02f., row=u.jenkins","processingtimems":10500,"client":"172.31.14.131:53770","queuetimems":300,"class":"MiniHBaseClusterRegionServer"} 2023-11-27 05:03:05,216 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33323] ipc.CallRunner(105): Dropping timed out call: callId: 528 service: MasterService methodName: GetClusterStatus size: 35 connection: 172.31.14.131:51290 deadline: 1702608492028 param: TODO: class org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$GetClusterStatusRequest connection: 172.31.14.131:51290 2023-11-27 05:03:05,287 WARN [RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=41841] ipc.RpcServer(528): (responseTooSlow): {"call":"Get(org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$GetRequest)","starttimems":"1702609463028","responsesize":"4","method":"Get","param":"region= hbase:quota,,1701061178843.be5ef4f3dfb2c43b447798061e19f02f., row=u.jenkins","processingtimems":11000,"client":"172.31.14.131:53770","queuetimems":400,"class":"MiniHBaseClusterRegionServer"} 2023-11-27 05:03:05,298 WARN [RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=41841] ipc.RpcServer(528): (responseTooSlow): {"call":"Get(org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$GetRequest)","starttimems":"1702609556428","responsesize":"4","method":"Get","param":"region= hbase:quota,,1701061178843.be5ef4f3dfb2c43b447798061e19f02f., row=exceedThrottleQuota","processingtimems":12200,"client":"172.31.14.131:53770","queuetimems":0,"class":"MiniHBaseClusterRegionServer"} 2023-11-27 05:03:05,346 WARN [RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=41841] ipc.RpcServer(528): (responseTooSlow): {"call":"Get(org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$GetRequest)","starttimems":"1702610187728","responsesize":"4","method":"Get","param":"region= hbase:quota,,1701061178843.be5ef4f3dfb2c43b447798061e19f02f., 
row=u.jenkins","processingtimems":10400,"client":"172.31.14.131:53770","queuetimems":6800,"class":"MiniHBaseClusterRegionServer"} 2023-11-27 05:03:05,346 WARN [RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=41841] ipc.RpcServer(528): (responseTooSlow): {"call":"Get(org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$GetRequest)","starttimems":"1702610187728","responsesize":"4","method":"Get","param":"region= hbase:quota,,1701061178843.be5ef4f3dfb2c43b447798061e19f02f., row=u.jenkins","processingtimems":10300,"client":"172.31.14.131:53770","queuetimems":6300,"class":"MiniHBaseClusterRegionServer"} 2023-11-27 05:03:05,408 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33323] ipc.CallRunner(105): Dropping timed out call: callId: 1190 service: MasterService methodName: IsMasterRunning size: 30 connection: 172.31.14.131:51290 deadline: 1702610959928 param: TODO: class org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$IsMasterRunningRequest connection: 172.31.14.131:51290 2023-11-27 05:03:05,462 WARN [RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=41841] ipc.RpcServer(528): (responseTooSlow): {"call":"Get(org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$GetRequest)","starttimems":"1702611983228","responsesize":"4","method":"Get","param":"region= hbase:quota,,1701061178843.be5ef4f3dfb2c43b447798061e19f02f., row=u.jenkins","processingtimems":10600,"client":"172.31.14.131:53770","queuetimems":700,"class":"MiniHBaseClusterRegionServer"} 2023-11-27 05:03:05,535 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33323] ipc.RpcServer(528): (responseTooSlow): {"call":"GetClusterStatus(org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$GetClusterStatusRequest)","starttimems":"1702612826928","responsesize":"339","method":"GetClusterStatus","param":"TODO: class org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$GetClusterStatusRequest","processingtimems":12700,"client":"172.31.14.131:51290","queuetimems":500,"class":"HMaster"} 2023-11-27 05:03:05,576 WARN [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33323] ipc.CallRunner(105): Dropping timed out call: callId: 1738 service: MasterService methodName: IsMasterRunning size: 30 connection: 172.31.14.131:51290 deadline: 1702613388928 param: TODO: class org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$IsMasterRunningRequest connection: 172.31.14.131:51290 2023-11-27 05:03:05,584 WARN [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33323] ipc.CallRunner(105): Dropping timed out call: callId: 1761 service: MasterService methodName: GetClusterStatus size: 35 connection: 172.31.14.131:51290 deadline: 1702613485628 param: TODO: class org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$GetClusterStatusRequest connection: 172.31.14.131:51290 2023-11-27 05:03:05,619 WARN [RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=41841] ipc.RpcServer(528): (responseTooSlow): {"call":"Get(org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$GetRequest)","starttimems":"1702613993928","responsesize":"4","method":"Get","param":"region= hbase:quota,,1701061178843.be5ef4f3dfb2c43b447798061e19f02f., row=exceedThrottleQuota","processingtimems":29200,"client":"172.31.14.131:53770","queuetimems":2200,"class":"MiniHBaseClusterRegionServer"} 2023-11-27 05:03:05,647 WARN [RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=41841] ipc.RpcServer(528): (responseTooSlow): 
{"call":"Get(org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$GetRequest)","starttimems":"1702614198228","responsesize":"4","method":"Get","param":"region= hbase:quota,,1701061178843.be5ef4f3dfb2c43b447798061e19f02f., row=u.jenkins","processingtimems":12800,"client":"172.31.14.131:53770","queuetimems":1600,"class":"MiniHBaseClusterRegionServer"} 2023-11-27 05:03:05,908 WARN [RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=41841] ipc.CallRunner(105): Dropping timed out call: callId: 2720 service: ClientService methodName: Get size: 120 connection: 172.31.14.131:53770 deadline: 1702617541728 param: region= hbase:quota,,1701061178843.be5ef4f3dfb2c43b447798061e19f02f., row=n.TestNs connection: 172.31.14.131:53770 2023-11-27 05:03:05,908 WARN [RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=41841] ipc.CallRunner(105): Dropping timed out call: callId: 2721 service: ClientService methodName: Get size: 120 connection: 172.31.14.131:53770 deadline: 1702617553128 param: region= hbase:quota,,1701061178843.be5ef4f3dfb2c43b447798061e19f02f., row=n.TestNs connection: 172.31.14.131:53770 2023-11-27 05:03:05,908 WARN [RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=41841] ipc.CallRunner(105): Dropping timed out call: callId: 2724 service: ClientService methodName: Get size: 120 connection: 172.31.14.131:53770 deadline: 1702617553428 param: region= hbase:quota,,1701061178843.be5ef4f3dfb2c43b447798061e19f02f., row=n.TestNs connection: 172.31.14.131:53770 2023-11-27 05:03:05,908 WARN [RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=41841] ipc.CallRunner(105): Dropping timed out call: callId: 2725 service: ClientService methodName: Get size: 130 connection: 172.31.14.131:53770 deadline: 1702617554028 param: region= hbase:quota,,1701061178843.be5ef4f3dfb2c43b447798061e19f02f., row=t.TestNs:TestTable connection: 172.31.14.131:53770 2023-11-27 05:03:18,539 INFO [regionserver/jenkins-hbase4:0.Chore.2] hbase.ScheduledChore(142): Chore: CompactionChecker missed its start time 2023-11-27 05:03:18,539 INFO [regionserver/jenkins-hbase4:0.Chore.1] hbase.ScheduledChore(142): Chore: MemstoreFlusherChore missed its start time 2023-11-27 05:03:46,364 INFO [regionserver/jenkins-hbase4:0.Chore.3] hbase.ScheduledChore(142): Chore: CompactionChecker missed its start time 2023-11-27 05:03:46,364 INFO [regionserver/jenkins-hbase4:0.Chore.1] hbase.ScheduledChore(142): Chore: MemstoreFlusherChore missed its start time 2023-11-27 05:03:54,054 INFO [regionserver/jenkins-hbase4:0.Chore.3] hbase.ScheduledChore(142): Chore: MemstoreFlusherChore missed its start time 2023-11-27 05:03:54,054 INFO [regionserver/jenkins-hbase4:0.Chore.1] hbase.ScheduledChore(142): Chore: CompactionChecker missed its start time 2023-11-27 05:03:54,054 INFO [JvmPauseMonitor] util.JvmPauseMonitor$Monitor(172): Detected pause in JVM or host machine (eg GC): pause of approximately 5689ms GC pool 'PS MarkSweep' had collection(s): count=1 time=5188ms GC pool 'PS Scavenge' had collection(s): count=1 time=666ms 2023-11-27 05:03:54,054 INFO [JvmPauseMonitor] util.JvmPauseMonitor$Monitor(172): Detected pause in JVM or host machine (eg GC): pause of approximately 5689ms GC pool 'PS MarkSweep' had collection(s): count=1 time=5188ms GC pool 'PS Scavenge' had collection(s): count=1 time=666ms 2023-11-27 05:03:54,465 WARN [RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=41841] ipc.RpcServer(528): (responseTooSlow): 
{"call":"Scan(org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ScanRequest)","starttimems":"1703865014428","responsesize":"93","method":"Scan","param":"region { type: REGION_NAME value: \"hbase:quota,,1701061178843.be5ef4f3dfb2c43b447798061e19f02f.\" } scan { column { family: \"q\" } time_range { from: 0 ","processingtimems":19000,"client":"172.31.14.131:50776","queuetimems":900,"class":"MiniHBaseClusterRegionServer","scandetails":"table: hbase:quota region: hbase:quota,,1701061178843.be5ef4f3dfb2c43b447798061e19f02f."} 2023-11-27 05:04:05,036 INFO [JvmPauseMonitor] util.JvmPauseMonitor$Monitor(172): Detected pause in JVM or host machine (eg GC): pause of approximately 8981ms GC pool 'PS MarkSweep' had collection(s): count=1 time=9175ms 2023-11-27 05:04:05,036 INFO [JvmPauseMonitor] util.JvmPauseMonitor$Monitor(172): Detected pause in JVM or host machine (eg GC): pause of approximately 8980ms GC pool 'PS MarkSweep' had collection(s): count=1 time=9175ms 2023-11-27 05:04:05,036 INFO [regionserver/jenkins-hbase4:0.Chore.2] hbase.ScheduledChore(142): Chore: MemstoreFlusherChore missed its start time 2023-11-27 05:04:05,036 INFO [regionserver/jenkins-hbase4:0.Chore.3] hbase.ScheduledChore(142): Chore: CompactionChecker missed its start time 2023-11-27 05:04:05,217 WARN [regionserver/jenkins-hbase4:0.Chore.8] quotas.QuotaCache$QuotaRefresherChore(361): Failed to get cluster metrics needed for updating quotas java.net.SocketTimeoutException: callTimeout=1200000, callDuration=1320415451: Call to address=jenkins-hbase4.apache.org:33323 failed on local exception: org.apache.hadoop.hbase.ipc.CallTimeoutException: Call[id=528,methodName=GetClusterStatus], waitTime=1320401200ms, rpcTimeout=60000ms at org.apache.hadoop.hbase.client.RpcRetryingCallerImpl.callWithRetries(RpcRetryingCallerImpl.java:156) at org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:2954) at org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:2946) at org.apache.hadoop.hbase.client.HBaseAdmin.getClusterMetrics(HBaseAdmin.java:2054) at org.apache.hadoop.hbase.quotas.QuotaCache$QuotaRefresherChore.updateQuotaFactors(QuotaCache.java:359) at org.apache.hadoop.hbase.quotas.QuotaCache$QuotaRefresherChore.chore(QuotaCache.java:224) at org.apache.hadoop.hbase.ScheduledChore.run(ScheduledChore.java:158) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308) at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180) at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294) at org.apache.hadoop.hbase.JitterScheduledThreadPoolExecutorImpl$JitteredRunnableScheduledFuture.run(JitterScheduledThreadPoolExecutorImpl.java:107) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.CallTimeoutException: Call to address=jenkins-hbase4.apache.org:33323 failed on local exception: org.apache.hadoop.hbase.ipc.CallTimeoutException: Call[id=528,methodName=GetClusterStatus], waitTime=1320401200ms, rpcTimeout=60000ms at org.apache.hadoop.hbase.ipc.IPCUtil.wrapException(IPCUtil.java:219) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:384) at 
org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:87) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:415) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:411) at org.apache.hadoop.hbase.ipc.Call.setTimeout(Call.java:108) at org.apache.hadoop.hbase.ipc.RpcConnection$1.run(RpcConnection.java:134) at org.apache.hbase.thirdparty.io.netty.util.HashedWheelTimer$HashedWheelTimeout.run(HashedWheelTimer.java:715) at org.apache.hbase.thirdparty.io.netty.util.concurrent.ImmediateExecutor.execute(ImmediateExecutor.java:34) at org.apache.hbase.thirdparty.io.netty.util.HashedWheelTimer$HashedWheelTimeout.expire(HashedWheelTimer.java:703) at org.apache.hbase.thirdparty.io.netty.util.HashedWheelTimer$HashedWheelBucket.expireTimeouts(HashedWheelTimer.java:790) at org.apache.hbase.thirdparty.io.netty.util.HashedWheelTimer$Worker.run(HashedWheelTimer.java:503) ... 1 more Caused by: org.apache.hadoop.hbase.ipc.CallTimeoutException: Call[id=528,methodName=GetClusterStatus], waitTime=1320401200ms, rpcTimeout=60000ms at org.apache.hadoop.hbase.ipc.RpcConnection$1.run(RpcConnection.java:135) ... 6 more 2023-11-27 05:04:05,406 WARN [regionserver/jenkins-hbase4:0.Chore.6] client.ConnectionImplementation(407): Checking master connection org.apache.hadoop.hbase.ipc.CallTimeoutException: Call to address=jenkins-hbase4.apache.org:33323 failed on local exception: org.apache.hadoop.hbase.ipc.CallTimeoutException: Call[id=1190,methodName=IsMasterRunning], waitTime=1318045700ms, rpcTimeout=60000ms at org.apache.hadoop.hbase.ipc.IPCUtil.wrapException(IPCUtil.java:219) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:384) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:87) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:415) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:411) at org.apache.hadoop.hbase.ipc.Call.setTimeout(Call.java:108) at org.apache.hadoop.hbase.ipc.RpcConnection$1.run(RpcConnection.java:134) at org.apache.hbase.thirdparty.io.netty.util.HashedWheelTimer$HashedWheelTimeout.run(HashedWheelTimer.java:715) at org.apache.hbase.thirdparty.io.netty.util.concurrent.ImmediateExecutor.execute(ImmediateExecutor.java:34) at org.apache.hbase.thirdparty.io.netty.util.HashedWheelTimer$HashedWheelTimeout.expire(HashedWheelTimer.java:703) at org.apache.hbase.thirdparty.io.netty.util.HashedWheelTimer$HashedWheelBucket.expireTimeouts(HashedWheelTimer.java:790) at org.apache.hbase.thirdparty.io.netty.util.HashedWheelTimer$Worker.run(HashedWheelTimer.java:503) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.CallTimeoutException: Call[id=1190,methodName=IsMasterRunning], waitTime=1318045700ms, rpcTimeout=60000ms at org.apache.hadoop.hbase.ipc.RpcConnection$1.run(RpcConnection.java:135) ... 
6 more 2023-11-27 05:04:05,576 WARN [regionserver/jenkins-hbase4:0.Chore.5] client.ConnectionImplementation(407): Checking master connection org.apache.hadoop.hbase.ipc.CallTimeoutException: Call to address=jenkins-hbase4.apache.org:33323 failed on local exception: org.apache.hadoop.hbase.ipc.CallTimeoutException: Call[id=1738,methodName=IsMasterRunning], waitTime=1315622900ms, rpcTimeout=60000ms at org.apache.hadoop.hbase.ipc.IPCUtil.wrapException(IPCUtil.java:219) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:384) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:87) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:415) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:411) at org.apache.hadoop.hbase.ipc.Call.setTimeout(Call.java:108) at org.apache.hadoop.hbase.ipc.RpcConnection$1.run(RpcConnection.java:134) at org.apache.hbase.thirdparty.io.netty.util.HashedWheelTimer$HashedWheelTimeout.run(HashedWheelTimer.java:715) at org.apache.hbase.thirdparty.io.netty.util.concurrent.ImmediateExecutor.execute(ImmediateExecutor.java:34) at org.apache.hbase.thirdparty.io.netty.util.HashedWheelTimer$HashedWheelTimeout.expire(HashedWheelTimer.java:703) at org.apache.hbase.thirdparty.io.netty.util.HashedWheelTimer$HashedWheelBucket.expireTimeouts(HashedWheelTimer.java:790) at org.apache.hbase.thirdparty.io.netty.util.HashedWheelTimer$Worker.run(HashedWheelTimer.java:503) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.CallTimeoutException: Call[id=1738,methodName=IsMasterRunning], waitTime=1315622900ms, rpcTimeout=60000ms at org.apache.hadoop.hbase.ipc.RpcConnection$1.run(RpcConnection.java:135) ... 6 more 2023-11-27 05:04:05,587 WARN [regionserver/jenkins-hbase4:0.Chore.7] quotas.QuotaCache$QuotaRefresherChore(361): Failed to get cluster metrics needed for updating quotas java.net.SocketTimeoutException: callTimeout=1200000, callDuration=1315521950: Call to address=jenkins-hbase4.apache.org:33323 failed on local exception: org.apache.hadoop.hbase.ipc.CallTimeoutException: Call[id=1761,methodName=GetClusterStatus], waitTime=1315517800ms, rpcTimeout=60000ms at org.apache.hadoop.hbase.client.RpcRetryingCallerImpl.callWithRetries(RpcRetryingCallerImpl.java:156) at org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:2954) at org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:2946) at org.apache.hadoop.hbase.client.HBaseAdmin.getClusterMetrics(HBaseAdmin.java:2054) at org.apache.hadoop.hbase.quotas.QuotaCache$QuotaRefresherChore.updateQuotaFactors(QuotaCache.java:359) at org.apache.hadoop.hbase.quotas.QuotaCache$QuotaRefresherChore.chore(QuotaCache.java:224) at org.apache.hadoop.hbase.ScheduledChore.run(ScheduledChore.java:158) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308) at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180) at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294) at org.apache.hadoop.hbase.JitterScheduledThreadPoolExecutorImpl$JitteredRunnableScheduledFuture.run(JitterScheduledThreadPoolExecutorImpl.java:107) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.CallTimeoutException: Call to address=jenkins-hbase4.apache.org:33323 failed on local exception: org.apache.hadoop.hbase.ipc.CallTimeoutException: Call[id=1761,methodName=GetClusterStatus], waitTime=1315517800ms, rpcTimeout=60000ms at org.apache.hadoop.hbase.ipc.IPCUtil.wrapException(IPCUtil.java:219) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:384) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:87) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:415) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:411) at org.apache.hadoop.hbase.ipc.Call.setTimeout(Call.java:108) at org.apache.hadoop.hbase.ipc.RpcConnection$1.run(RpcConnection.java:134) at org.apache.hbase.thirdparty.io.netty.util.HashedWheelTimer$HashedWheelTimeout.run(HashedWheelTimer.java:715) at org.apache.hbase.thirdparty.io.netty.util.concurrent.ImmediateExecutor.execute(ImmediateExecutor.java:34) at org.apache.hbase.thirdparty.io.netty.util.HashedWheelTimer$HashedWheelTimeout.expire(HashedWheelTimer.java:703) at org.apache.hbase.thirdparty.io.netty.util.HashedWheelTimer$HashedWheelBucket.expireTimeouts(HashedWheelTimer.java:790) at org.apache.hbase.thirdparty.io.netty.util.HashedWheelTimer$Worker.run(HashedWheelTimer.java:503) ... 1 more Caused by: org.apache.hadoop.hbase.ipc.CallTimeoutException: Call[id=1761,methodName=GetClusterStatus], waitTime=1315517800ms, rpcTimeout=60000ms at org.apache.hadoop.hbase.ipc.RpcConnection$1.run(RpcConnection.java:135) ... 6 more 2023-11-27 05:04:05,908 WARN [regionserver/jenkins-hbase4:0.Chore.1] quotas.QuotaCache$QuotaRefresherChore(344): Unable to read namespace from quota table java.net.SocketTimeoutException: callTimeout=1200000, callDuration=1311568552: Call to address=jenkins-hbase4.apache.org:41841 failed on local exception: org.apache.hadoop.hbase.ipc.CallTimeoutException: Call[id=2720,methodName=Get], waitTime=1311567600ms, rpcTimeout=60000ms row 'n.TestNs' on table 'hbase:quota' at region=hbase:quota,,1701061178843.be5ef4f3dfb2c43b447798061e19f02f., hostname=jenkins-hbase4.apache.org,41841,1701061176322, seqNum=2 at org.apache.hadoop.hbase.client.RpcRetryingCallerImpl.callWithRetries(RpcRetryingCallerImpl.java:156) at org.apache.hadoop.hbase.client.HTable.get(HTable.java:370) at org.apache.hadoop.hbase.client.HTable.get(HTable.java:343) at org.apache.hadoop.hbase.client.HTable.get(HTable.java:385) at org.apache.hadoop.hbase.quotas.QuotaTableUtil.doGet(QuotaTableUtil.java:910) at org.apache.hadoop.hbase.quotas.QuotaUtil.fetchGlobalQuotas(QuotaUtil.java:374) at org.apache.hadoop.hbase.quotas.QuotaUtil.fetchNamespaceQuotas(QuotaUtil.java:341) at org.apache.hadoop.hbase.quotas.QuotaCache$QuotaRefresherChore$1.fetchEntries(QuotaCache.java:242) at org.apache.hadoop.hbase.quotas.QuotaCache$QuotaRefresherChore.fetch(QuotaCache.java:333) at org.apache.hadoop.hbase.quotas.QuotaCache$QuotaRefresherChore.fetchNamespaceQuotaState(QuotaCache.java:234) at org.apache.hadoop.hbase.quotas.QuotaCache$QuotaRefresherChore.chore(QuotaCache.java:225) at org.apache.hadoop.hbase.ScheduledChore.run(ScheduledChore.java:158) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308) at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180) at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294) at org.apache.hadoop.hbase.JitterScheduledThreadPoolExecutorImpl$JitteredRunnableScheduledFuture.run(JitterScheduledThreadPoolExecutorImpl.java:107) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.CallTimeoutException: Call to address=jenkins-hbase4.apache.org:41841 failed on local exception: org.apache.hadoop.hbase.ipc.CallTimeoutException: Call[id=2720,methodName=Get], waitTime=1311567600ms, rpcTimeout=60000ms at org.apache.hadoop.hbase.ipc.IPCUtil.wrapException(IPCUtil.java:219) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:384) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:87) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:415) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:411) at org.apache.hadoop.hbase.ipc.Call.setTimeout(Call.java:108) at org.apache.hadoop.hbase.ipc.RpcConnection$1.run(RpcConnection.java:134) at org.apache.hbase.thirdparty.io.netty.util.HashedWheelTimer$HashedWheelTimeout.run(HashedWheelTimer.java:715) at org.apache.hbase.thirdparty.io.netty.util.concurrent.ImmediateExecutor.execute(ImmediateExecutor.java:34) at org.apache.hbase.thirdparty.io.netty.util.HashedWheelTimer$HashedWheelTimeout.expire(HashedWheelTimer.java:703) at org.apache.hbase.thirdparty.io.netty.util.HashedWheelTimer$HashedWheelBucket.expireTimeouts(HashedWheelTimer.java:790) at org.apache.hbase.thirdparty.io.netty.util.HashedWheelTimer$Worker.run(HashedWheelTimer.java:503) ... 1 more Caused by: org.apache.hadoop.hbase.ipc.CallTimeoutException: Call[id=2720,methodName=Get], waitTime=1311567600ms, rpcTimeout=60000ms at org.apache.hadoop.hbase.ipc.RpcConnection$1.run(RpcConnection.java:135) ... 
6 more 2023-11-27 05:04:05,908 WARN [regionserver/jenkins-hbase4:0.Chore.3] quotas.QuotaCache$QuotaRefresherChore(344): Unable to read table from quota table java.net.SocketTimeoutException: callTimeout=1200000, callDuration=1311456151: Call to address=jenkins-hbase4.apache.org:41841 failed on local exception: org.apache.hadoop.hbase.ipc.CallTimeoutException: Call[id=2725,methodName=Get], waitTime=1311455000ms, rpcTimeout=60000ms row 't.TestNs:TestTable' on table 'hbase:quota' at region=hbase:quota,,1701061178843.be5ef4f3dfb2c43b447798061e19f02f., hostname=jenkins-hbase4.apache.org,41841,1701061176322, seqNum=2 at org.apache.hadoop.hbase.client.RpcRetryingCallerImpl.callWithRetries(RpcRetryingCallerImpl.java:156) at org.apache.hadoop.hbase.client.HTable.get(HTable.java:370) at org.apache.hadoop.hbase.client.HTable.get(HTable.java:343) at org.apache.hadoop.hbase.client.HTable.get(HTable.java:385) at org.apache.hadoop.hbase.quotas.QuotaTableUtil.doGet(QuotaTableUtil.java:910) at org.apache.hadoop.hbase.quotas.QuotaUtil.fetchGlobalQuotas(QuotaUtil.java:374) at org.apache.hadoop.hbase.quotas.QuotaUtil.fetchTableQuotas(QuotaUtil.java:325) at org.apache.hadoop.hbase.quotas.QuotaCache$QuotaRefresherChore$2.fetchEntries(QuotaCache.java:257) at org.apache.hadoop.hbase.quotas.QuotaCache$QuotaRefresherChore.fetch(QuotaCache.java:333) at org.apache.hadoop.hbase.quotas.QuotaCache$QuotaRefresherChore.fetchTableQuotaState(QuotaCache.java:249) at org.apache.hadoop.hbase.quotas.QuotaCache$QuotaRefresherChore.chore(QuotaCache.java:226) at org.apache.hadoop.hbase.ScheduledChore.run(ScheduledChore.java:158) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308) at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180) at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294) at org.apache.hadoop.hbase.JitterScheduledThreadPoolExecutorImpl$JitteredRunnableScheduledFuture.run(JitterScheduledThreadPoolExecutorImpl.java:107) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.CallTimeoutException: Call to address=jenkins-hbase4.apache.org:41841 failed on local exception: org.apache.hadoop.hbase.ipc.CallTimeoutException: Call[id=2725,methodName=Get], waitTime=1311455000ms, rpcTimeout=60000ms at org.apache.hadoop.hbase.ipc.IPCUtil.wrapException(IPCUtil.java:219) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:384) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:87) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:415) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:411) at org.apache.hadoop.hbase.ipc.Call.setTimeout(Call.java:108) at org.apache.hadoop.hbase.ipc.RpcConnection$1.run(RpcConnection.java:134) at org.apache.hbase.thirdparty.io.netty.util.HashedWheelTimer$HashedWheelTimeout.run(HashedWheelTimer.java:715) at org.apache.hbase.thirdparty.io.netty.util.concurrent.ImmediateExecutor.execute(ImmediateExecutor.java:34) at org.apache.hbase.thirdparty.io.netty.util.HashedWheelTimer$HashedWheelTimeout.expire(HashedWheelTimer.java:703) at 
org.apache.hbase.thirdparty.io.netty.util.HashedWheelTimer$HashedWheelBucket.expireTimeouts(HashedWheelTimer.java:790) at org.apache.hbase.thirdparty.io.netty.util.HashedWheelTimer$Worker.run(HashedWheelTimer.java:503) ... 1 more Caused by: org.apache.hadoop.hbase.ipc.CallTimeoutException: Call[id=2725,methodName=Get], waitTime=1311455000ms, rpcTimeout=60000ms at org.apache.hadoop.hbase.ipc.RpcConnection$1.run(RpcConnection.java:135) ... 6 more 2023-11-27 05:04:05,908 WARN [regionserver/jenkins-hbase4:0.Chore.2] quotas.QuotaCache$QuotaRefresherChore(344): Unable to read namespace from quota table java.net.SocketTimeoutException: callTimeout=1200000, callDuration=1311566951: Call to address=jenkins-hbase4.apache.org:41841 failed on local exception: org.apache.hadoop.hbase.ipc.CallTimeoutException: Call[id=2721,methodName=Get], waitTime=1311566400ms, rpcTimeout=60000ms row 'n.TestNs' on table 'hbase:quota' at region=hbase:quota,,1701061178843.be5ef4f3dfb2c43b447798061e19f02f., hostname=jenkins-hbase4.apache.org,41841,1701061176322, seqNum=2 at org.apache.hadoop.hbase.client.RpcRetryingCallerImpl.callWithRetries(RpcRetryingCallerImpl.java:156) at org.apache.hadoop.hbase.client.HTable.get(HTable.java:370) at org.apache.hadoop.hbase.client.HTable.get(HTable.java:343) at org.apache.hadoop.hbase.client.HTable.get(HTable.java:385) at org.apache.hadoop.hbase.quotas.QuotaTableUtil.doGet(QuotaTableUtil.java:910) at org.apache.hadoop.hbase.quotas.QuotaUtil.fetchGlobalQuotas(QuotaUtil.java:374) at org.apache.hadoop.hbase.quotas.QuotaUtil.fetchNamespaceQuotas(QuotaUtil.java:341) at org.apache.hadoop.hbase.quotas.QuotaCache$QuotaRefresherChore$1.fetchEntries(QuotaCache.java:242) at org.apache.hadoop.hbase.quotas.QuotaCache$QuotaRefresherChore.fetch(QuotaCache.java:333) at org.apache.hadoop.hbase.quotas.QuotaCache$QuotaRefresherChore.fetchNamespaceQuotaState(QuotaCache.java:234) at org.apache.hadoop.hbase.quotas.QuotaCache$QuotaRefresherChore.chore(QuotaCache.java:225) at org.apache.hadoop.hbase.ScheduledChore.run(ScheduledChore.java:158) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308) at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180) at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294) at org.apache.hadoop.hbase.JitterScheduledThreadPoolExecutorImpl$JitteredRunnableScheduledFuture.run(JitterScheduledThreadPoolExecutorImpl.java:107) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.CallTimeoutException: Call to address=jenkins-hbase4.apache.org:41841 failed on local exception: org.apache.hadoop.hbase.ipc.CallTimeoutException: Call[id=2721,methodName=Get], waitTime=1311566400ms, rpcTimeout=60000ms at org.apache.hadoop.hbase.ipc.IPCUtil.wrapException(IPCUtil.java:219) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:384) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:87) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:415) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:411) at org.apache.hadoop.hbase.ipc.Call.setTimeout(Call.java:108) at 
org.apache.hadoop.hbase.ipc.RpcConnection$1.run(RpcConnection.java:134) at org.apache.hbase.thirdparty.io.netty.util.HashedWheelTimer$HashedWheelTimeout.run(HashedWheelTimer.java:715) at org.apache.hbase.thirdparty.io.netty.util.concurrent.ImmediateExecutor.execute(ImmediateExecutor.java:34) at org.apache.hbase.thirdparty.io.netty.util.HashedWheelTimer$HashedWheelTimeout.expire(HashedWheelTimer.java:703) at org.apache.hbase.thirdparty.io.netty.util.HashedWheelTimer$HashedWheelBucket.expireTimeouts(HashedWheelTimer.java:790) at org.apache.hbase.thirdparty.io.netty.util.HashedWheelTimer$Worker.run(HashedWheelTimer.java:503) ... 1 more Caused by: org.apache.hadoop.hbase.ipc.CallTimeoutException: Call[id=2721,methodName=Get], waitTime=1311566400ms, rpcTimeout=60000ms at org.apache.hadoop.hbase.ipc.RpcConnection$1.run(RpcConnection.java:135) ... 6 more 2023-11-27 05:04:05,908 WARN [regionserver/jenkins-hbase4:0.Chore.4] quotas.QuotaCache$QuotaRefresherChore(344): Unable to read namespace from quota table java.net.SocketTimeoutException: callTimeout=1200000, callDuration=1311508350: Call to address=jenkins-hbase4.apache.org:41841 failed on local exception: org.apache.hadoop.hbase.ipc.CallTimeoutException: Call[id=2724,methodName=Get], waitTime=1311508100ms, rpcTimeout=60000ms row 'n.TestNs' on table 'hbase:quota' at region=hbase:quota,,1701061178843.be5ef4f3dfb2c43b447798061e19f02f., hostname=jenkins-hbase4.apache.org,41841,1701061176322, seqNum=2 at org.apache.hadoop.hbase.client.RpcRetryingCallerImpl.callWithRetries(RpcRetryingCallerImpl.java:156) at org.apache.hadoop.hbase.client.HTable.get(HTable.java:370) at org.apache.hadoop.hbase.client.HTable.get(HTable.java:343) at org.apache.hadoop.hbase.client.HTable.get(HTable.java:385) at org.apache.hadoop.hbase.quotas.QuotaTableUtil.doGet(QuotaTableUtil.java:910) at org.apache.hadoop.hbase.quotas.QuotaUtil.fetchGlobalQuotas(QuotaUtil.java:374) at org.apache.hadoop.hbase.quotas.QuotaUtil.fetchNamespaceQuotas(QuotaUtil.java:341) at org.apache.hadoop.hbase.quotas.QuotaCache$QuotaRefresherChore$1.fetchEntries(QuotaCache.java:242) at org.apache.hadoop.hbase.quotas.QuotaCache$QuotaRefresherChore.fetch(QuotaCache.java:333) at org.apache.hadoop.hbase.quotas.QuotaCache$QuotaRefresherChore.fetchNamespaceQuotaState(QuotaCache.java:234) at org.apache.hadoop.hbase.quotas.QuotaCache$QuotaRefresherChore.chore(QuotaCache.java:225) at org.apache.hadoop.hbase.ScheduledChore.run(ScheduledChore.java:158) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308) at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180) at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294) at org.apache.hadoop.hbase.JitterScheduledThreadPoolExecutorImpl$JitteredRunnableScheduledFuture.run(JitterScheduledThreadPoolExecutorImpl.java:107) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.CallTimeoutException: Call to address=jenkins-hbase4.apache.org:41841 failed on local exception: org.apache.hadoop.hbase.ipc.CallTimeoutException: Call[id=2724,methodName=Get], waitTime=1311508100ms, rpcTimeout=60000ms at org.apache.hadoop.hbase.ipc.IPCUtil.wrapException(IPCUtil.java:219) at 
org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:384) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:87) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:415) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:411) at org.apache.hadoop.hbase.ipc.Call.setTimeout(Call.java:108) at org.apache.hadoop.hbase.ipc.RpcConnection$1.run(RpcConnection.java:134) at org.apache.hbase.thirdparty.io.netty.util.HashedWheelTimer$HashedWheelTimeout.run(HashedWheelTimer.java:715) at org.apache.hbase.thirdparty.io.netty.util.concurrent.ImmediateExecutor.execute(ImmediateExecutor.java:34) at org.apache.hbase.thirdparty.io.netty.util.HashedWheelTimer$HashedWheelTimeout.expire(HashedWheelTimer.java:703) at org.apache.hbase.thirdparty.io.netty.util.HashedWheelTimer$HashedWheelBucket.expireTimeouts(HashedWheelTimer.java:790) at org.apache.hbase.thirdparty.io.netty.util.HashedWheelTimer$Worker.run(HashedWheelTimer.java:503) ... 1 more Caused by: org.apache.hadoop.hbase.ipc.CallTimeoutException: Call[id=2724,methodName=Get], waitTime=1311508100ms, rpcTimeout=60000ms at org.apache.hadoop.hbase.ipc.RpcConnection$1.run(RpcConnection.java:135) ... 6 more 2023-11-27 05:04:06,327 INFO [regionserver/jenkins-hbase4:0.Chore.6] hbase.ScheduledChore(142): Chore: MemstoreFlusherChore missed its start time 2023-11-27 05:04:06,392 INFO [regionserver/jenkins-hbase4:0.Chore.5] hbase.ScheduledChore(142): Chore: CompactionChecker missed its start time 2023-11-27 05:04:36,294 INFO [LruBlockCacheStatsExecutor] hfile.LruBlockCache(965): totalSize=782.40 MB, usedSize=587.81 KB, freeSize=781.83 MB, max=782.40 MB, blockCount=0, accesses=0, hits=0, hitRatio=0, cachingAccesses=0, cachingHits=0, cachingHitsRatio=0,evictions=29, evicted=0, evictedPerRun=0.0 2023-11-27 05:04:36,327 INFO [LruBlockCacheStatsExecutor] hfile.LruBlockCache(965): totalSize=782.40 MB, usedSize=602.02 KB, freeSize=781.81 MB, max=782.40 MB, blockCount=8, accesses=1942, hits=1934, hitRatio=99.59%, , cachingAccesses=1942, cachingHits=1934, cachingHitsRatio=99.59%, evictions=29, evicted=0, evictedPerRun=0.0 2023-11-27 05:04:36,383 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-MemStoreChunkPool Statistics] regionserver.ChunkCreator$MemStoreChunkPool$StatisticsThread(426): data stats (chunk size=2097152): current pool size=5, created chunk count=6, reused chunk count=1, reuseRatio=14.29% 2023-11-27 05:04:36,384 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-MemStoreChunkPool Statistics] regionserver.ChunkCreator$MemStoreChunkPool$StatisticsThread(426): index stats (chunk size=209715): current pool size=0, created chunk count=0, reused chunk count=0, reuseRatio=0 2023-11-27 05:04:37,556 INFO [jenkins-hbase4:41841Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(246): Global stats: WAL Edits Buffer Used=0B, Limit=268435456B 2023-11-27 05:04:37,556 INFO [jenkins-hbase4:41853Replication Statistics #0] regionserver.Replication$ReplicationStatisticsTask(246): Global stats: WAL Edits Buffer Used=0B, Limit=268435456B 2023-11-27 05:04:42,380 DEBUG [master/jenkins-hbase4:0.Chore.1] janitor.CatalogJanitor(175): 2023-11-27 05:04:42,383 DEBUG [master/jenkins-hbase4:0.Chore.1] balancer.BaseLoadBalancer(1718): Start Generate Balance plan for cluster. 
2023-11-27 05:04:42,383 DEBUG [master/jenkins-hbase4:0.Chore.1] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-11-27 05:04:42,384 DEBUG [master/jenkins-hbase4:0.Chore.1] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-11-27 05:04:42,384 DEBUG [master/jenkins-hbase4:0.Chore.1] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-11-27 05:04:42,384 DEBUG [master/jenkins-hbase4:0.Chore.1] balancer.BaseLoadBalancer$Cluster(378): Number of tables=7, number of hosts=1, number of racks=1 2023-11-27 05:04:42,387 INFO [master/jenkins-hbase4:0.Chore.1] balancer.StochasticLoadBalancer(389): Cluster wide - Calculating plan. may take up to 30000ms to complete. 2023-11-27 05:04:42,387 INFO [master/jenkins-hbase4:0.Chore.1] balancer.StochasticLoadBalancer(505): Start StochasticLoadBalancer.balancer, initial weighted average imbalance=0.6478405315614617, functionCost=RegionCountSkewCostFunction : (multiplier=500.0, imbalance=0.7499999999999999, need balance); PrimaryRegionCountSkewCostFunction : (not needed); MoveCostFunction : (multiplier=7.0, imbalance=0.0); ServerLocalityCostFunction : (multiplier=25.0, imbalance=0.0); RackLocalityCostFunction : (multiplier=15.0, imbalance=0.0); TableSkewCostFunction : (multiplier=35.0, imbalance=0.0); RegionReplicaHostCostFunction : (not needed); RegionReplicaRackCostFunction : (not needed); ReadRequestCostFunction : (multiplier=5.0, imbalance=1.0, need balance); WriteRequestCostFunction : (multiplier=5.0, imbalance=1.0, need balance); MemStoreSizeCostFunction : (multiplier=5.0, imbalance=0.0); StoreFileCostFunction : (multiplier=5.0, imbalance=1.0, need balance); computedMaxSteps=12800 2023-11-27 05:04:42,565 INFO [master/jenkins-hbase4:0.Chore.1] balancer.StochasticLoadBalancer(553): Finished computing new moving plan. Computation took 4200 ms to try 12800 different iterations. Found a solution that moves 3 regions; Going from a computed imbalance of 0.6478405315614617 to a new imbalance of 0.02034169606593969. 
funtionCost=RegionCountSkewCostFunction : (multiplier=500.0, imbalance=0.0); PrimaryRegionCountSkewCostFunction : (not needed); MoveCostFunction : (multiplier=7.0, imbalance=0.375, need balance); ServerLocalityCostFunction : (multiplier=25.0, imbalance=0.0); RackLocalityCostFunction : (multiplier=15.0, imbalance=0.0); TableSkewCostFunction : (multiplier=35.0, imbalance=0.0); RegionReplicaHostCostFunction : (not needed); RegionReplicaRackCostFunction : (not needed); ReadRequestCostFunction : (multiplier=5.0, imbalance=0.9241402063391386, need balance); WriteRequestCostFunction : (multiplier=5.0, imbalance=1.0, need balance); MemStoreSizeCostFunction : (multiplier=5.0, imbalance=0.0); StoreFileCostFunction : (multiplier=5.0, imbalance=0.0); 2023-11-27 05:04:42,565 INFO [master/jenkins-hbase4:0.Chore.1] master.HMaster(1846): Balancer plans size is 3, the balance interval is 100000 ms, and the max number regions in transition is 8 2023-11-27 05:04:42,565 INFO [master/jenkins-hbase4:0.Chore.1] master.HMaster(1851): balance hri=1588230740, source=jenkins-hbase4.apache.org,41841,1701061176322, destination=jenkins-hbase4.apache.org,41853,1701061176279 2023-11-27 05:04:42,567 DEBUG [master/jenkins-hbase4:0.Chore.1] procedure2.ProcedureExecutor(1028): Stored pid=27, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, REOPEN/MOVE 2023-11-27 05:04:42,568 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=27, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, REOPEN/MOVE 2023-11-27 05:04:42,570 INFO [PEWorker-3] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,41841,1701061176322, state=CLOSING 2023-11-27 05:04:42,572 DEBUG [Listener at localhost/34689-EventThread] zookeeper.ZKWatcher(600): master:33323-0x1002d35880e0000, quorum=127.0.0.1:50029, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-11-27 05:04:42,573 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-11-27 05:04:42,573 INFO [PEWorker-3] procedure2.ProcedureExecutor(1680): Initialized subprocedures=[{pid=28, ppid=27, state=RUNNABLE; CloseRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,41841,1701061176322}] 2023-11-27 05:04:42,731 DEBUG [RSProcedureDispatcher-pool-3] master.ServerManager(702): New admin connection to jenkins-hbase4.apache.org,41841,1701061176322 2023-11-27 05:04:42,731 DEBUG [RSProcedureDispatcher-pool-3] ipc.RpcConnection(122): Using SIMPLE authentication for service=AdminService, sasl=false 2023-11-27 05:04:42,732 INFO [RS-EventLoopGroup-4-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:46004, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-11-27 05:04:42,734 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 1588230740 2023-11-27 05:04:42,734 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-11-27 05:04:42,734 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-11-27 05:04:42,734 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time 
limit for close lock on hbase:meta,,1.1588230740 2023-11-27 05:04:42,735 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-11-27 05:04:42,735 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-11-27 05:04:42,747 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41015/user/jenkins/test-data/88c861d8-82fa-b545-1769-68eb426e44d0/data/hbase/meta/1588230740/recovered.edits/39.seqid, newMaxSeqId=39, maxSeqId=1 2023-11-27 05:04:42,749 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-11-27 05:04:42,750 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-11-27 05:04:42,750 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-11-27 05:04:42,750 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3515): Adding 1588230740 move to jenkins-hbase4.apache.org,41853,1701061176279 record at close sequenceid=36 2023-11-27 05:04:42,752 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 1588230740 2023-11-27 05:04:42,752 WARN [PEWorker-1] zookeeper.MetaTableLocator(225): Tried to set null ServerName in hbase:meta; skipping -- ServerName required 2023-11-27 05:04:42,754 INFO [PEWorker-1] procedure2.ProcedureExecutor(1825): Finished subprocedure pid=28, resume processing ppid=27 2023-11-27 05:04:42,755 INFO [PEWorker-1] procedure2.ProcedureExecutor(1409): Finished pid=28, ppid=27, state=SUCCESS; CloseRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,41841,1701061176322 in 4.9000 sec 2023-11-27 05:04:42,755 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=27, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,41853,1701061176279; forceNewPlan=false, retain=false 2023-11-27 05:04:42,906 INFO [jenkins-hbase4:33323] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
2023-11-27 05:04:42,906 INFO [PEWorker-2] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,41853,1701061176279, state=OPENING 2023-11-27 05:04:42,908 DEBUG [Listener at localhost/34689-EventThread] zookeeper.ZKWatcher(600): master:33323-0x1002d35880e0000, quorum=127.0.0.1:50029, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-11-27 05:04:42,908 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-11-27 05:04:42,908 INFO [PEWorker-2] procedure2.ProcedureExecutor(1680): Initialized subprocedures=[{pid=29, ppid=27, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,41853,1701061176279}] 2023-11-27 05:04:43,060 DEBUG [RSProcedureDispatcher-pool-4] master.ServerManager(702): New admin connection to jenkins-hbase4.apache.org,41853,1701061176279 2023-11-27 05:04:43,061 DEBUG [RSProcedureDispatcher-pool-4] ipc.RpcConnection(122): Using SIMPLE authentication for service=AdminService, sasl=false 2023-11-27 05:04:43,062 INFO [RS-EventLoopGroup-3-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:51078, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-11-27 05:04:43,071 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740 2023-11-27 05:04:43,072 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-11-27 05:04:43,074 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C41853%2C1701061176279.meta, suffix=.meta, logDir=hdfs://localhost:41015/user/jenkins/test-data/88c861d8-82fa-b545-1769-68eb426e44d0/WALs/jenkins-hbase4.apache.org,41853,1701061176279, archiveDir=hdfs://localhost:41015/user/jenkins/test-data/88c861d8-82fa-b545-1769-68eb426e44d0/oldWALs, maxLogs=32 2023-11-27 05:04:43,092 DEBUG [RS-EventLoopGroup-4-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:35723,DS-8e8c4241-d25c-4419-8408-7d31707c4cd1,DISK] 2023-11-27 05:04:43,093 DEBUG [RS-EventLoopGroup-4-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:40543,DS-ee024098-c42c-48f3-a34a-47e38fee1b14,DISK] 2023-11-27 05:04:43,095 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/88c861d8-82fa-b545-1769-68eb426e44d0/WALs/jenkins-hbase4.apache.org,41853,1701061176279/jenkins-hbase4.apache.org%2C41853%2C1701061176279.meta.1701061483075.meta 2023-11-27 05:04:43,096 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:35723,DS-8e8c4241-d25c-4419-8408-7d31707c4cd1,DISK], DatanodeInfoWithStorage[127.0.0.1:40543,DS-ee024098-c42c-48f3-a34a-47e38fee1b14,DISK]] 2023-11-27 05:04:43,096 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', 
STARTKEY => '', ENDKEY => ''} 2023-11-27 05:04:43,097 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-11-27 05:04:43,097 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2023-11-27 05:04:43,097 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 2023-11-27 05:04:43,097 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-11-27 05:04:43,097 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-11-27 05:04:43,098 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 1588230740 2023-11-27 05:04:43,098 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 1588230740 2023-11-27 05:04:43,100 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-11-27 05:04:43,102 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41015/user/jenkins/test-data/88c861d8-82fa-b545-1769-68eb426e44d0/data/hbase/meta/1588230740/info 2023-11-27 05:04:43,102 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41015/user/jenkins/test-data/88c861d8-82fa-b545-1769-68eb426e44d0/data/hbase/meta/1588230740/info 2023-11-27 05:04:43,103 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:10, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-11-27 05:04:43,113 DEBUG [StoreOpener-1588230740-1] regionserver.HStore(539): loaded hdfs://localhost:41015/user/jenkins/test-data/88c861d8-82fa-b545-1769-68eb426e44d0/data/hbase/meta/1588230740/info/eaf2f890b1614a5e81692d645766ff67 2023-11-27 05:04:43,114 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-11-27 05:04:43,114 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, 
cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-11-27 05:04:43,115 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41015/user/jenkins/test-data/88c861d8-82fa-b545-1769-68eb426e44d0/data/hbase/meta/1588230740/rep_barrier 2023-11-27 05:04:43,115 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41015/user/jenkins/test-data/88c861d8-82fa-b545-1769-68eb426e44d0/data/hbase/meta/1588230740/rep_barrier 2023-11-27 05:04:43,116 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:10, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-11-27 05:04:43,116 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-11-27 05:04:43,116 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-11-27 05:04:43,117 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41015/user/jenkins/test-data/88c861d8-82fa-b545-1769-68eb426e44d0/data/hbase/meta/1588230740/table 2023-11-27 05:04:43,117 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41015/user/jenkins/test-data/88c861d8-82fa-b545-1769-68eb426e44d0/data/hbase/meta/1588230740/table 2023-11-27 05:04:43,118 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:10, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-11-27 05:04:43,126 DEBUG [StoreOpener-1588230740-1] regionserver.HStore(539): loaded hdfs://localhost:41015/user/jenkins/test-data/88c861d8-82fa-b545-1769-68eb426e44d0/data/hbase/meta/1588230740/table/73e1abe8057642f2a264ca210b93498e 2023-11-27 05:04:43,126 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, 
memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-11-27 05:04:43,127 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41015/user/jenkins/test-data/88c861d8-82fa-b545-1769-68eb426e44d0/data/hbase/meta/1588230740 2023-11-27 05:04:43,129 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41015/user/jenkins/test-data/88c861d8-82fa-b545-1769-68eb426e44d0/data/hbase/meta/1588230740 2023-11-27 05:04:43,133 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 2023-11-27 05:04:43,134 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-11-27 05:04:43,135 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=40; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=74102551, jitterRate=0.10421405732631683}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-11-27 05:04:43,136 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-11-27 05:04:43,137 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2339): Post open deploy tasks for hbase:meta,,1.1588230740, pid=29, masterSystemTime=1703929618228 2023-11-27 05:04:43,138 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2366): Finished post open deploy task for hbase:meta,,1.1588230740 2023-11-27 05:04:43,139 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 2023-11-27 05:04:43,139 INFO [PEWorker-4] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,41853,1701061176279, state=OPEN 2023-11-27 05:04:43,141 DEBUG [Listener at localhost/34689-EventThread] zookeeper.ZKWatcher(600): master:33323-0x1002d35880e0000, quorum=127.0.0.1:50029, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-11-27 05:04:43,141 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-11-27 05:04:43,143 INFO [PEWorker-4] procedure2.ProcedureExecutor(1825): Finished subprocedure pid=29, resume processing ppid=27 2023-11-27 05:04:43,143 INFO [PEWorker-4] procedure2.ProcedureExecutor(1409): Finished pid=29, ppid=27, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,41853,1701061176279 in 5.7000 sec 2023-11-27 05:04:43,144 INFO [PEWorker-1] procedure2.ProcedureExecutor(1409): Finished pid=27, state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, REOPEN/MOVE in 14.7000 sec 2023-11-27 05:04:43,170 INFO [master/jenkins-hbase4:0.Chore.1] master.HMaster(1851): balance hri=708094d2c6013f8353947ca009f33ef1, source=jenkins-hbase4.apache.org,41841,1701061176322, destination=jenkins-hbase4.apache.org,41853,1701061176279 2023-11-27 05:04:43,171 DEBUG 
[master/jenkins-hbase4:0.Chore.1] procedure2.ProcedureExecutor(1028): Stored pid=30, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:namespace, region=708094d2c6013f8353947ca009f33ef1, REOPEN/MOVE 2023-11-27 05:04:43,172 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=30, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:namespace, region=708094d2c6013f8353947ca009f33ef1, REOPEN/MOVE 2023-11-27 05:04:43,173 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=30 updating hbase:meta row=708094d2c6013f8353947ca009f33ef1, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,41841,1701061176322 2023-11-27 05:04:43,173 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:namespace,,1701061178129.708094d2c6013f8353947ca009f33ef1.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1703929619928"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1703929619928"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1703929619928"}]},"ts":"1703929619928"} 2023-11-27 05:04:43,174 DEBUG [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=41841] ipc.CallRunner(144): callId: 77 service: ClientService methodName: Mutate size: 279 connection: 172.31.14.131:50776 deadline: 1703929679928, exception=org.apache.hadoop.hbase.exceptions.RegionMovedException: Region moved to: hostname=jenkins-hbase4.apache.org port=41853 startCode=1701061176279. As of locationSeqNum=36. 2023-11-27 05:04:43,211 WARN [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=33323] assignment.AssignmentManager(1312): Unable to acquire lock for regionNode state=CLOSING, location=jenkins-hbase4.apache.org,41841,1701061176322, table=hbase:namespace, region=708094d2c6013f8353947ca009f33ef1. It is likely that another thread is currently holding the lock. To avoid deadlock, skip execution for now. 2023-11-27 05:04:43,312 WARN [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=33323] assignment.AssignmentManager(1312): Unable to acquire lock for regionNode state=CLOSING, location=jenkins-hbase4.apache.org,41841,1701061176322, table=hbase:namespace, region=708094d2c6013f8353947ca009f33ef1. It is likely that another thread is currently holding the lock. To avoid deadlock, skip execution for now. 2023-11-27 05:04:43,413 WARN [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=33323] assignment.AssignmentManager(1312): Unable to acquire lock for regionNode state=CLOSING, location=jenkins-hbase4.apache.org,41841,1701061176322, table=hbase:namespace, region=708094d2c6013f8353947ca009f33ef1. It is likely that another thread is currently holding the lock. To avoid deadlock, skip execution for now. 
2023-11-27 05:04:43,427 DEBUG [PEWorker-5] ipc.RpcConnection(122): Using SIMPLE authentication for service=ClientService, sasl=false 2023-11-27 05:04:43,430 INFO [RS-EventLoopGroup-3-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:51088, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-11-27 05:04:43,433 INFO [PEWorker-5] procedure2.ProcedureExecutor(1680): Initialized subprocedures=[{pid=31, ppid=30, state=RUNNABLE; CloseRegionProcedure 708094d2c6013f8353947ca009f33ef1, server=jenkins-hbase4.apache.org,41841,1701061176322}] 2023-11-27 05:04:43,484 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-hbase.properties,hadoop-metrics2.properties 2023-11-27 05:04:43,586 DEBUG [RSProcedureDispatcher-pool-5] master.ServerManager(702): New admin connection to jenkins-hbase4.apache.org,41841,1701061176322 2023-11-27 05:04:43,587 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 708094d2c6013f8353947ca009f33ef1 2023-11-27 05:04:43,587 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 708094d2c6013f8353947ca009f33ef1, disabling compactions & flushes 2023-11-27 05:04:43,587 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1701061178129.708094d2c6013f8353947ca009f33ef1. 2023-11-27 05:04:43,587 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1701061178129.708094d2c6013f8353947ca009f33ef1. 2023-11-27 05:04:43,587 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1701061178129.708094d2c6013f8353947ca009f33ef1. after waiting 0 ms 2023-11-27 05:04:43,587 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1701061178129.708094d2c6013f8353947ca009f33ef1. 2023-11-27 05:04:43,593 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41015/user/jenkins/test-data/88c861d8-82fa-b545-1769-68eb426e44d0/data/hbase/namespace/708094d2c6013f8353947ca009f33ef1/recovered.edits/10.seqid, newMaxSeqId=10, maxSeqId=1 2023-11-27 05:04:43,594 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:namespace,,1701061178129.708094d2c6013f8353947ca009f33ef1. 
2023-11-27 05:04:43,594 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 708094d2c6013f8353947ca009f33ef1: 2023-11-27 05:04:43,594 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3515): Adding 708094d2c6013f8353947ca009f33ef1 move to jenkins-hbase4.apache.org,41853,1701061176279 record at close sequenceid=7 2023-11-27 05:04:43,596 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 708094d2c6013f8353947ca009f33ef1 2023-11-27 05:04:43,597 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=30 updating hbase:meta row=708094d2c6013f8353947ca009f33ef1, regionState=CLOSED 2023-11-27 05:04:43,597 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"hbase:namespace,,1701061178129.708094d2c6013f8353947ca009f33ef1.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1703929630028"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1703929630028"}]},"ts":"1703929630028"} 2023-11-27 05:04:43,600 INFO [PEWorker-3] procedure2.ProcedureExecutor(1825): Finished subprocedure pid=31, resume processing ppid=30 2023-11-27 05:04:43,600 INFO [PEWorker-3] procedure2.ProcedureExecutor(1409): Finished pid=31, ppid=30, state=SUCCESS; CloseRegionProcedure 708094d2c6013f8353947ca009f33ef1, server=jenkins-hbase4.apache.org,41841,1701061176322 in 3.8000 sec 2023-11-27 05:04:43,601 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=30, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:namespace, region=708094d2c6013f8353947ca009f33ef1, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,41853,1701061176279; forceNewPlan=false, retain=false 2023-11-27 05:04:43,751 INFO [jenkins-hbase4:33323] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 2023-11-27 05:04:43,752 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=30 updating hbase:meta row=708094d2c6013f8353947ca009f33ef1, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,41853,1701061176279 2023-11-27 05:04:43,752 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:namespace,,1701061178129.708094d2c6013f8353947ca009f33ef1.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1703929634728"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1703929634728"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1703929634728"}]},"ts":"1703929634728"} 2023-11-27 05:04:43,754 INFO [PEWorker-1] procedure2.ProcedureExecutor(1680): Initialized subprocedures=[{pid=32, ppid=30, state=RUNNABLE; OpenRegionProcedure 708094d2c6013f8353947ca009f33ef1, server=jenkins-hbase4.apache.org,41853,1701061176279}] 2023-11-27 05:04:43,906 DEBUG [RSProcedureDispatcher-pool-3] master.ServerManager(702): New admin connection to jenkins-hbase4.apache.org,41853,1701061176279 2023-11-27 05:04:43,911 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1701061178129.708094d2c6013f8353947ca009f33ef1. 
2023-11-27 05:04:43,911 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 708094d2c6013f8353947ca009f33ef1, NAME => 'hbase:namespace,,1701061178129.708094d2c6013f8353947ca009f33ef1.', STARTKEY => '', ENDKEY => ''} 2023-11-27 05:04:43,912 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace 708094d2c6013f8353947ca009f33ef1 2023-11-27 05:04:43,912 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1701061178129.708094d2c6013f8353947ca009f33ef1.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-11-27 05:04:43,912 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 708094d2c6013f8353947ca009f33ef1 2023-11-27 05:04:43,912 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 708094d2c6013f8353947ca009f33ef1 2023-11-27 05:04:43,914 INFO [StoreOpener-708094d2c6013f8353947ca009f33ef1-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 708094d2c6013f8353947ca009f33ef1 2023-11-27 05:04:43,915 DEBUG [StoreOpener-708094d2c6013f8353947ca009f33ef1-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41015/user/jenkins/test-data/88c861d8-82fa-b545-1769-68eb426e44d0/data/hbase/namespace/708094d2c6013f8353947ca009f33ef1/info 2023-11-27 05:04:43,915 DEBUG [StoreOpener-708094d2c6013f8353947ca009f33ef1-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41015/user/jenkins/test-data/88c861d8-82fa-b545-1769-68eb426e44d0/data/hbase/namespace/708094d2c6013f8353947ca009f33ef1/info 2023-11-27 05:04:43,916 INFO [StoreOpener-708094d2c6013f8353947ca009f33ef1-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:10, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 708094d2c6013f8353947ca009f33ef1 columnFamilyName info 2023-11-27 05:04:43,923 DEBUG [StoreOpener-708094d2c6013f8353947ca009f33ef1-1] regionserver.HStore(539): loaded hdfs://localhost:41015/user/jenkins/test-data/88c861d8-82fa-b545-1769-68eb426e44d0/data/hbase/namespace/708094d2c6013f8353947ca009f33ef1/info/0cbf51f8ba3a48c3abfb51cb1c9c0a1c 2023-11-27 05:04:43,923 INFO [StoreOpener-708094d2c6013f8353947ca009f33ef1-1] regionserver.HStore(310): Store=708094d2c6013f8353947ca009f33ef1/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-11-27 05:04:43,925 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] 
regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41015/user/jenkins/test-data/88c861d8-82fa-b545-1769-68eb426e44d0/data/hbase/namespace/708094d2c6013f8353947ca009f33ef1 2023-11-27 05:04:43,927 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41015/user/jenkins/test-data/88c861d8-82fa-b545-1769-68eb426e44d0/data/hbase/namespace/708094d2c6013f8353947ca009f33ef1 2023-11-27 05:04:43,931 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 708094d2c6013f8353947ca009f33ef1 2023-11-27 05:04:43,932 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 708094d2c6013f8353947ca009f33ef1; next sequenceid=11; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=61082008, jitterRate=-0.08980715274810791}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-11-27 05:04:43,932 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 708094d2c6013f8353947ca009f33ef1: 2023-11-27 05:04:43,933 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2339): Post open deploy tasks for hbase:namespace,,1701061178129.708094d2c6013f8353947ca009f33ef1., pid=32, masterSystemTime=1703929639328 2023-11-27 05:04:43,935 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2366): Finished post open deploy task for hbase:namespace,,1701061178129.708094d2c6013f8353947ca009f33ef1. 2023-11-27 05:04:43,936 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1701061178129.708094d2c6013f8353947ca009f33ef1. 
2023-11-27 05:04:43,936 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=30 updating hbase:meta row=708094d2c6013f8353947ca009f33ef1, regionState=OPEN, openSeqNum=11, regionLocation=jenkins-hbase4.apache.org,41853,1701061176279 2023-11-27 05:04:43,936 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1701061178129.708094d2c6013f8353947ca009f33ef1.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1703929639428"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1703929639428"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1703929639428"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1703929639428"}]},"ts":"1703929639428"} 2023-11-27 05:04:43,941 INFO [PEWorker-2] procedure2.ProcedureExecutor(1825): Finished subprocedure pid=32, resume processing ppid=30 2023-11-27 05:04:43,941 INFO [PEWorker-2] procedure2.ProcedureExecutor(1409): Finished pid=32, ppid=30, state=SUCCESS; OpenRegionProcedure 708094d2c6013f8353947ca009f33ef1, server=jenkins-hbase4.apache.org,41853,1701061176279 in 4.7000 sec 2023-11-27 05:04:43,943 INFO [PEWorker-2] procedure2.ProcedureExecutor(1409): Finished pid=30, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=708094d2c6013f8353947ca009f33ef1, REOPEN/MOVE in 19.6000 sec 2023-11-27 05:04:43,972 INFO [master/jenkins-hbase4:0.Chore.1] master.HMaster(1851): balance hri=2d48041eaba6bc404a22a735fb3000dd, source=jenkins-hbase4.apache.org,41841,1701061176322, destination=jenkins-hbase4.apache.org,41853,1701061176279 2023-11-27 05:04:43,974 DEBUG [master/jenkins-hbase4:0.Chore.1] procedure2.ProcedureExecutor(1028): Stored pid=33, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=TestQuotaAdmin1, region=2d48041eaba6bc404a22a735fb3000dd, REOPEN/MOVE 2023-11-27 05:04:43,974 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=33, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=TestQuotaAdmin1, region=2d48041eaba6bc404a22a735fb3000dd, REOPEN/MOVE 2023-11-27 05:04:43,975 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=33 updating hbase:meta row=2d48041eaba6bc404a22a735fb3000dd, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,41841,1701061176322 2023-11-27 05:04:43,975 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"TestQuotaAdmin1,,1701061180563.2d48041eaba6bc404a22a735fb3000dd.","families":{"info":[{"qualifier":"regioninfo","vlen":49,"tag":[],"timestamp":"1703929639628"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1703929639628"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1703929639628"}]},"ts":"1703929639628"} 2023-11-27 05:04:43,977 INFO [PEWorker-4] procedure2.ProcedureExecutor(1680): Initialized subprocedures=[{pid=34, ppid=33, state=RUNNABLE; CloseRegionProcedure 2d48041eaba6bc404a22a735fb3000dd, server=jenkins-hbase4.apache.org,41841,1701061176322}] 2023-11-27 05:04:44,130 DEBUG [RSProcedureDispatcher-pool-4] master.ServerManager(702): New admin connection to jenkins-hbase4.apache.org,41841,1701061176322 2023-11-27 05:04:44,130 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 2d48041eaba6bc404a22a735fb3000dd 2023-11-27 05:04:44,131 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 2d48041eaba6bc404a22a735fb3000dd, disabling compactions & flushes 2023-11-27 05:04:44,131 INFO 
[RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region TestQuotaAdmin1,,1701061180563.2d48041eaba6bc404a22a735fb3000dd. 2023-11-27 05:04:44,131 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on TestQuotaAdmin1,,1701061180563.2d48041eaba6bc404a22a735fb3000dd. 2023-11-27 05:04:44,131 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on TestQuotaAdmin1,,1701061180563.2d48041eaba6bc404a22a735fb3000dd. after waiting 0 ms 2023-11-27 05:04:44,131 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region TestQuotaAdmin1,,1701061180563.2d48041eaba6bc404a22a735fb3000dd. 2023-11-27 05:04:44,136 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41015/user/jenkins/test-data/88c861d8-82fa-b545-1769-68eb426e44d0/data/default/TestQuotaAdmin1/2d48041eaba6bc404a22a735fb3000dd/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-11-27 05:04:44,138 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed TestQuotaAdmin1,,1701061180563.2d48041eaba6bc404a22a735fb3000dd. 2023-11-27 05:04:44,138 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 2d48041eaba6bc404a22a735fb3000dd: 2023-11-27 05:04:44,138 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3515): Adding 2d48041eaba6bc404a22a735fb3000dd move to jenkins-hbase4.apache.org,41853,1701061176279 record at close sequenceid=2 2023-11-27 05:04:44,140 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 2d48041eaba6bc404a22a735fb3000dd 2023-11-27 05:04:44,140 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=33 updating hbase:meta row=2d48041eaba6bc404a22a735fb3000dd, regionState=CLOSED 2023-11-27 05:04:44,141 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"TestQuotaAdmin1,,1701061180563.2d48041eaba6bc404a22a735fb3000dd.","families":{"info":[{"qualifier":"regioninfo","vlen":49,"tag":[],"timestamp":"1703929643328"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1703929643328"}]},"ts":"1703929643328"} 2023-11-27 05:04:44,144 INFO [PEWorker-5] procedure2.ProcedureExecutor(1825): Finished subprocedure pid=34, resume processing ppid=33 2023-11-27 05:04:44,144 INFO [PEWorker-5] procedure2.ProcedureExecutor(1409): Finished pid=34, ppid=33, state=SUCCESS; CloseRegionProcedure 2d48041eaba6bc404a22a735fb3000dd, server=jenkins-hbase4.apache.org,41841,1701061176322 in 3.5000 sec 2023-11-27 05:04:44,145 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=33, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=TestQuotaAdmin1, region=2d48041eaba6bc404a22a735fb3000dd, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,41853,1701061176279; forceNewPlan=false, retain=false 2023-11-27 05:04:44,295 INFO [jenkins-hbase4:33323] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
2023-11-27 05:04:44,295 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=33 updating hbase:meta row=2d48041eaba6bc404a22a735fb3000dd, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,41853,1701061176279 2023-11-27 05:04:44,296 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"TestQuotaAdmin1,,1701061180563.2d48041eaba6bc404a22a735fb3000dd.","families":{"info":[{"qualifier":"regioninfo","vlen":49,"tag":[],"timestamp":"1703929648228"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1703929648228"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1703929648228"}]},"ts":"1703929648228"} 2023-11-27 05:04:44,297 INFO [PEWorker-2] procedure2.ProcedureExecutor(1680): Initialized subprocedures=[{pid=35, ppid=33, state=RUNNABLE; OpenRegionProcedure 2d48041eaba6bc404a22a735fb3000dd, server=jenkins-hbase4.apache.org,41853,1701061176279}] 2023-11-27 05:04:44,449 DEBUG [RSProcedureDispatcher-pool-5] master.ServerManager(702): New admin connection to jenkins-hbase4.apache.org,41853,1701061176279 2023-11-27 05:04:44,453 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open TestQuotaAdmin1,,1701061180563.2d48041eaba6bc404a22a735fb3000dd. 2023-11-27 05:04:44,454 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 2d48041eaba6bc404a22a735fb3000dd, NAME => 'TestQuotaAdmin1,,1701061180563.2d48041eaba6bc404a22a735fb3000dd.', STARTKEY => '', ENDKEY => ''} 2023-11-27 05:04:44,454 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table TestQuotaAdmin1 2d48041eaba6bc404a22a735fb3000dd 2023-11-27 05:04:44,454 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated TestQuotaAdmin1,,1701061180563.2d48041eaba6bc404a22a735fb3000dd.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-11-27 05:04:44,454 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 2d48041eaba6bc404a22a735fb3000dd 2023-11-27 05:04:44,454 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 2d48041eaba6bc404a22a735fb3000dd 2023-11-27 05:04:44,455 INFO [StoreOpener-2d48041eaba6bc404a22a735fb3000dd-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family cf of region 2d48041eaba6bc404a22a735fb3000dd 2023-11-27 05:04:44,457 DEBUG [StoreOpener-2d48041eaba6bc404a22a735fb3000dd-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41015/user/jenkins/test-data/88c861d8-82fa-b545-1769-68eb426e44d0/data/default/TestQuotaAdmin1/2d48041eaba6bc404a22a735fb3000dd/cf 2023-11-27 05:04:44,457 DEBUG [StoreOpener-2d48041eaba6bc404a22a735fb3000dd-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41015/user/jenkins/test-data/88c861d8-82fa-b545-1769-68eb426e44d0/data/default/TestQuotaAdmin1/2d48041eaba6bc404a22a735fb3000dd/cf 2023-11-27 05:04:44,457 INFO [StoreOpener-2d48041eaba6bc404a22a735fb3000dd-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:10, maxFilesToCompact:10); 
ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 2d48041eaba6bc404a22a735fb3000dd columnFamilyName cf 2023-11-27 05:04:44,458 INFO [StoreOpener-2d48041eaba6bc404a22a735fb3000dd-1] regionserver.HStore(310): Store=2d48041eaba6bc404a22a735fb3000dd/cf, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-11-27 05:04:44,458 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41015/user/jenkins/test-data/88c861d8-82fa-b545-1769-68eb426e44d0/data/default/TestQuotaAdmin1/2d48041eaba6bc404a22a735fb3000dd 2023-11-27 05:04:44,460 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41015/user/jenkins/test-data/88c861d8-82fa-b545-1769-68eb426e44d0/data/default/TestQuotaAdmin1/2d48041eaba6bc404a22a735fb3000dd 2023-11-27 05:04:44,464 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 2d48041eaba6bc404a22a735fb3000dd 2023-11-27 05:04:44,465 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 2d48041eaba6bc404a22a735fb3000dd; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=61268871, jitterRate=-0.08702267706394196}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-11-27 05:04:44,466 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 2d48041eaba6bc404a22a735fb3000dd: 2023-11-27 05:04:44,466 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2339): Post open deploy tasks for TestQuotaAdmin1,,1701061180563.2d48041eaba6bc404a22a735fb3000dd., pid=35, masterSystemTime=1703929652928 2023-11-27 05:04:44,468 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2366): Finished post open deploy task for TestQuotaAdmin1,,1701061180563.2d48041eaba6bc404a22a735fb3000dd. 2023-11-27 05:04:44,468 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened TestQuotaAdmin1,,1701061180563.2d48041eaba6bc404a22a735fb3000dd. 
2023-11-27 05:04:44,468 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=33 updating hbase:meta row=2d48041eaba6bc404a22a735fb3000dd, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,41853,1701061176279 2023-11-27 05:04:44,469 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"TestQuotaAdmin1,,1701061180563.2d48041eaba6bc404a22a735fb3000dd.","families":{"info":[{"qualifier":"regioninfo","vlen":49,"tag":[],"timestamp":"1703929653128"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1703929653128"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1703929653128"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1703929653128"}]},"ts":"1703929653128"} 2023-11-27 05:04:44,472 INFO [PEWorker-1] procedure2.ProcedureExecutor(1825): Finished subprocedure pid=35, resume processing ppid=33 2023-11-27 05:04:44,472 INFO [PEWorker-1] procedure2.ProcedureExecutor(1409): Finished pid=35, ppid=33, state=SUCCESS; OpenRegionProcedure 2d48041eaba6bc404a22a735fb3000dd, server=jenkins-hbase4.apache.org,41853,1701061176279 in 4.3000 sec 2023-11-27 05:04:44,474 INFO [PEWorker-5] procedure2.ProcedureExecutor(1409): Finished pid=33, state=SUCCESS; TransitRegionStateProcedure table=TestQuotaAdmin1, region=2d48041eaba6bc404a22a735fb3000dd, REOPEN/MOVE in 13.5000 sec 2023-11-27 05:04:44,474 DEBUG [master/jenkins-hbase4:0.Chore.1] master.HMaster(1882): Balancer is going into sleep until next period in 300000ms 2023-11-27 05:04:44,484 DEBUG [normalizer-worker-0] normalizer.RegionNormalizerWorker(200): Skipping table TestQuotaAdmin1 because normalization is disabled in its table properties and normalization is also disabled at table level by default 2023-11-27 05:04:44,484 DEBUG [normalizer-worker-0] normalizer.RegionNormalizerWorker(200): Skipping table TestQuotaAdmin0 because normalization is disabled in its table properties and normalization is also disabled at table level by default 2023-11-27 05:04:44,484 DEBUG [normalizer-worker-0] normalizer.RegionNormalizerWorker(200): Skipping table TestNs:TestTable because normalization is disabled in its table properties and normalization is also disabled at table level by default 2023-11-27 05:04:44,484 DEBUG [normalizer-worker-0] normalizer.RegionNormalizerWorker(200): Skipping table TestQuotaAdmin2 because normalization is disabled in its table properties and normalization is also disabled at table level by default 2023-11-27 05:04:48,156 INFO [regionserver/jenkins-hbase4:0.Chore.5] hbase.ScheduledChore(142): Chore: RegionSizeReportingChore missed its start time 2023-11-27 05:04:48,600 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-hbase.properties,hadoop-metrics2.properties 2023-11-27 05:05:22,237 INFO [regionserver/jenkins-hbase4:0.Chore.5] hbase.ScheduledChore(142): Chore: SpaceQuotaRefresherChore missed its start time 2023-11-27 05:05:22,592 INFO [regionserver/jenkins-hbase4:0.Chore.7] hbase.ScheduledChore(142): Chore: CompactionChecker missed its start time 2023-11-27 05:05:22,598 INFO [regionserver/jenkins-hbase4:0.Chore.4] hbase.ScheduledChore(142): Chore: MemstoreFlusherChore missed its start time 2023-11-27 05:05:22,646 INFO [regionserver/jenkins-hbase4:0.Chore.6] regionserver.HRegionServer$PeriodicMemStoreFlusher(1921): MemstoreFlusherChore requesting flush of hbase:meta,,1.1588230740 because 1588230740/info has an old edit so flush to free WALs after random delay 43748 ms 2023-11-27 05:05:22,647 WARN 
[RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33323] ipc.RpcServer(528): (responseTooSlow): {"call":"GetClusterStatus(org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$GetClusterStatusRequest)","starttimems":"1703930694228","responsesize":"339","method":"GetClusterStatus","param":"TODO: class org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$GetClusterStatusRequest","processingtimems":13200,"client":"172.31.14.131:51290","queuetimems":0,"class":"HMaster"} 2023-11-27 05:05:22,653 WARN [RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=41841] ipc.RpcServer(528): (responseTooSlow): {"call":"Multi(org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$MultiRequest)","multi.gets":2,"starttimems":"1703930824028","responsesize":"20","method":"Multi","param":"region= hbase:quota,,1701061178843.be5ef4f3dfb2c43b447798061e19f02f., for 2 action(s) and 1st row key=n.default","processingtimems":12300,"client":"172.31.14.131:53770","multi.service_calls":0,"queuetimems":2100,"class":"MiniHBaseClusterRegionServer","multi.mutations":0} 2023-11-27 05:05:22,656 WARN [RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=41841] ipc.RpcServer(528): (responseTooSlow): {"call":"Get(org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$GetRequest)","starttimems":"1703930901828","responsesize":"72","method":"Get","param":"region= hbase:quota,,1701061178843.be5ef4f3dfb2c43b447798061e19f02f., row=u.jenkins","processingtimems":16500,"client":"172.31.14.131:53770","queuetimems":1500,"class":"MiniHBaseClusterRegionServer"} 2023-11-27 05:05:22,656 WARN [RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=41841] ipc.RpcServer(528): (responseTooSlow): {"call":"Multi(org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$MultiRequest)","multi.gets":2,"starttimems":"1703930908428","responsesize":"20","method":"Multi","param":"region= hbase:quota,,1701061178843.be5ef4f3dfb2c43b447798061e19f02f., for 2 action(s) and 1st row key=t.TestNs:TestTable","processingtimems":11300,"client":"172.31.14.131:53770","multi.service_calls":0,"queuetimems":700,"class":"MiniHBaseClusterRegionServer","multi.mutations":0} 2023-11-27 05:05:22,657 DEBUG [Listener at localhost/34689] quotas.ThrottleQuotaTestUtil(184): QuotaCache 2023-11-27 05:05:22,658 DEBUG [Listener at localhost/34689] quotas.ThrottleQuotaTestUtil(185): {default=QuotaState(ts=1703930715928 bypass), TestNs=QuotaState(ts=1703930715928 bypass)} 2023-11-27 05:05:22,658 DEBUG [Listener at localhost/34689] quotas.ThrottleQuotaTestUtil(186): {TestNs:TestTable=QuotaState(ts=1703930869328 bypass), TestQuotaAdmin1=QuotaState(ts=1703930869328 bypass)} 2023-11-27 05:05:22,659 DEBUG [Listener at localhost/34689] quotas.ThrottleQuotaTestUtil(187): {jenkins=UserQuotaState(ts=1703930901228 [ default ])} 2023-11-27 05:05:22,661 DEBUG [Listener at localhost/34689] quotas.ThrottleQuotaTestUtil(188): {all=QuotaState(ts=1703930946228 bypass)} 2023-11-27 05:05:22,663 DEBUG [regionserver/jenkins-hbase4:0.Chore.2] ipc.RpcConnection(122): Using SIMPLE authentication for service=MasterService, sasl=false 2023-11-27 05:05:22,666 INFO [RS-EventLoopGroup-1-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:56176, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.1 (auth:SIMPLE), service=MasterService 2023-11-27 05:05:22,738 DEBUG [MemStoreFlusher.0] regionserver.FlushAllLargeStoresPolicy(69): Since none of the CFs were above the size, flushing all. 
2023-11-27 05:05:22,738 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=3.10 KB heapSize=5.77 KB 2023-11-27 05:05:22,762 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=3.10 KB at sequenceid=50 (bloomFilter=false), to=hdfs://localhost:41015/user/jenkins/test-data/88c861d8-82fa-b545-1769-68eb426e44d0/data/hbase/meta/1588230740/.tmp/info/fbf81f29025a47cda7a65b4284afe0ad 2023-11-27 05:05:22,771 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:41015/user/jenkins/test-data/88c861d8-82fa-b545-1769-68eb426e44d0/data/hbase/meta/1588230740/.tmp/info/fbf81f29025a47cda7a65b4284afe0ad as hdfs://localhost:41015/user/jenkins/test-data/88c861d8-82fa-b545-1769-68eb426e44d0/data/hbase/meta/1588230740/info/fbf81f29025a47cda7a65b4284afe0ad 2023-11-27 05:05:22,777 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost:41015/user/jenkins/test-data/88c861d8-82fa-b545-1769-68eb426e44d0/data/hbase/meta/1588230740/info/fbf81f29025a47cda7a65b4284afe0ad, entries=22, sequenceid=50, filesize=7.3 K 2023-11-27 05:05:22,779 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~3.10 KB/3172, heapSize ~5.25 KB/5376, currentSize=0 B/0 for 1588230740 in 0ms, sequenceid=50, compaction requested=false 2023-11-27 05:05:22,779 DEBUG [MemStoreFlusher.0] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:meta' 2023-11-27 05:05:22,780 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 1588230740: 2023-11-27 05:05:22,918 DEBUG [Listener at localhost/34689] quotas.ThrottleQuotaTestUtil(184): QuotaCache 2023-11-27 05:05:22,919 DEBUG [Listener at localhost/34689] quotas.ThrottleQuotaTestUtil(185): {default=QuotaState(ts=1703930970328 bypass), TestNs=QuotaState(ts=1703930970328 bypass)} 2023-11-27 05:05:22,920 DEBUG [Listener at localhost/34689] quotas.ThrottleQuotaTestUtil(186): {TestQuotaAdmin0=QuotaState(ts=1703930986428 bypass), TestNs:TestTable=QuotaState(ts=1703930986428 bypass), TestQuotaAdmin2=QuotaState(ts=1703930986428 bypass)} 2023-11-27 05:05:22,921 DEBUG [Listener at localhost/34689] quotas.ThrottleQuotaTestUtil(187): {jenkins=UserQuotaState(ts=1703930999028 [ default ])} 2023-11-27 05:05:22,921 DEBUG [Listener at localhost/34689] quotas.ThrottleQuotaTestUtil(188): {all=QuotaState(ts=1703931012428 bypass)} 2023-11-27 05:05:22,921 DEBUG [Listener at localhost/34689] ipc.RpcConnection(122): Using SIMPLE authentication for service=ClientService, sasl=false 2023-11-27 05:05:22,924 INFO [RS-EventLoopGroup-4-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:47240, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-11-27 05:05:22,928 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41841] quotas.RegionServerRpcQuotaManager(222): Throttling exception for user=jenkins table=TestQuotaAdmin0 numWrites=0 numReads=1 numScans=0: number of read requests exceeded - wait 12sec, 0ms 2023-11-27 05:05:22,928 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41841] ipc.CallRunner(144): callId: 135 service: ClientService methodName: Get size: 114 connection: 172.31.14.131:47240 deadline: 1703931022428, exception=org.apache.hadoop.hbase.quotas.RpcThrottlingException: number of read requests exceeded - wait 12sec, 0ms 2023-11-27 05:05:23,179 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=41841] 
ipc.CallRunner(144): callId: 136 service: ClientService methodName: Get size: 91 connection: 172.31.14.131:47240 deadline: 1703931072428, exception=org.apache.hadoop.hbase.exceptions.RegionMovedException: Region moved to: hostname=jenkins-hbase4.apache.org port=41853 startCode=1701061176279. As of locationSeqNum=36. 2023-11-27 05:05:23,430 DEBUG [Listener at localhost/34689] ipc.RpcConnection(122): Using SIMPLE authentication for service=ClientService, sasl=false 2023-11-27 05:05:23,434 INFO [RS-EventLoopGroup-3-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:36452, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-11-27 05:05:23,435 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41841] quotas.RegionServerRpcQuotaManager(222): Throttling exception for user=jenkins table=TestQuotaAdmin0 numWrites=0 numReads=1 numScans=0: number of read requests exceeded - wait 12sec, 0ms 2023-11-27 05:05:23,435 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41841] ipc.CallRunner(144): callId: 138 service: ClientService methodName: Get size: 114 connection: 172.31.14.131:47240 deadline: 1703931022428, exception=org.apache.hadoop.hbase.quotas.RpcThrottlingException: number of read requests exceeded - wait 12sec, 0ms 2023-11-27 05:05:23,941 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41841] quotas.RegionServerRpcQuotaManager(222): Throttling exception for user=jenkins table=TestQuotaAdmin0 numWrites=0 numReads=1 numScans=0: number of read requests exceeded - wait 12sec, 0ms 2023-11-27 05:05:23,942 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41841] ipc.CallRunner(144): callId: 140 service: ClientService methodName: Get size: 114 connection: 172.31.14.131:47240 deadline: 1703931022428, exception=org.apache.hadoop.hbase.quotas.RpcThrottlingException: number of read requests exceeded - wait 12sec, 0ms 2023-11-27 05:05:24,694 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41841] quotas.RegionServerRpcQuotaManager(222): Throttling exception for user=jenkins table=TestQuotaAdmin0 numWrites=0 numReads=1 numScans=0: number of read requests exceeded - wait 12sec, 0ms 2023-11-27 05:05:24,695 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41841] ipc.CallRunner(144): callId: 142 service: ClientService methodName: Get size: 114 connection: 172.31.14.131:47240 deadline: 1703931022428, exception=org.apache.hadoop.hbase.quotas.RpcThrottlingException: number of read requests exceeded - wait 12sec, 0ms 2023-11-27 05:05:25,949 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41841] quotas.RegionServerRpcQuotaManager(222): Throttling exception for user=jenkins table=TestQuotaAdmin0 numWrites=0 numReads=1 numScans=0: number of read requests exceeded - wait 12sec, 0ms 2023-11-27 05:05:25,949 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41841] ipc.CallRunner(144): callId: 144 service: ClientService methodName: Get size: 114 connection: 172.31.14.131:47240 deadline: 1703931022428, exception=org.apache.hadoop.hbase.quotas.RpcThrottlingException: number of read requests exceeded - wait 12sec, 0ms 2023-11-27 05:05:28,470 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41841] quotas.RegionServerRpcQuotaManager(222): Throttling exception for user=jenkins table=TestQuotaAdmin0 numWrites=0 numReads=1 numScans=0: number of read requests exceeded - wait 12sec, 0ms 2023-11-27 05:05:28,470 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41841] ipc.CallRunner(144): callId: 146 
service: ClientService methodName: Get size: 114 connection: 172.31.14.131:47240 deadline: 1703931022428, exception=org.apache.hadoop.hbase.quotas.RpcThrottlingException: number of read requests exceeded - wait 12sec, 0ms 2023-11-27 05:05:33,477 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41841] quotas.RegionServerRpcQuotaManager(222): Throttling exception for user=jenkins table=TestQuotaAdmin0 numWrites=0 numReads=1 numScans=0: number of read requests exceeded - wait 12sec, 0ms 2023-11-27 05:05:33,477 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41841] ipc.CallRunner(144): callId: 148 service: ClientService methodName: Get size: 114 connection: 172.31.14.131:47240 deadline: 1703931022428, exception=org.apache.hadoop.hbase.quotas.RpcThrottlingException: number of read requests exceeded - wait 12sec, 0ms 2023-11-27 05:05:33,479 DEBUG [Listener at localhost/34689] client.RpcRetryingCallerImpl(129): Call exception, tries=6, retries=16, started=0 ms ago, cancelled=false, msg=org.apache.hadoop.hbase.quotas.RpcThrottlingException: number of read requests exceeded - wait 12sec, 0ms at org.apache.hadoop.hbase.quotas.RpcThrottlingException.throwThrottlingException(RpcThrottlingException.java:133) at org.apache.hadoop.hbase.quotas.RpcThrottlingException.throwNumReadRequestsExceeded(RpcThrottlingException.java:99) at org.apache.hadoop.hbase.quotas.TimeBasedLimiter.checkQuota(TimeBasedLimiter.java:172) at org.apache.hadoop.hbase.quotas.DefaultOperationQuota.checkQuota(DefaultOperationQuota.java:82) at org.apache.hadoop.hbase.quotas.RegionServerRpcQuotaManager.checkQuota(RegionServerRpcQuotaManager.java:220) at org.apache.hadoop.hbase.quotas.RegionServerRpcQuotaManager.checkQuota(RegionServerRpcQuotaManager.java:168) at org.apache.hadoop.hbase.regionserver.RSRpcServices.get(RSRpcServices.java:2532) at org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:44992) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) , details=row 'row-5' on table 'TestQuotaAdmin0' at region=TestQuotaAdmin0,,1701061179762.44ac23936652c71f70e8746cf757ab6d., hostname=jenkins-hbase4.apache.org,41841,1701061176322, seqNum=2, see https://s.apache.org/timeout, exception=org.apache.hadoop.hbase.quotas.RpcThrottlingException: org.apache.hadoop.hbase.quotas.RpcThrottlingException: number of read requests exceeded - wait 12sec, 0ms at org.apache.hadoop.hbase.quotas.RpcThrottlingException.throwThrottlingException(RpcThrottlingException.java:133) at org.apache.hadoop.hbase.quotas.RpcThrottlingException.throwNumReadRequestsExceeded(RpcThrottlingException.java:99) at org.apache.hadoop.hbase.quotas.TimeBasedLimiter.checkQuota(TimeBasedLimiter.java:172) at org.apache.hadoop.hbase.quotas.DefaultOperationQuota.checkQuota(DefaultOperationQuota.java:82) at org.apache.hadoop.hbase.quotas.RegionServerRpcQuotaManager.checkQuota(RegionServerRpcQuotaManager.java:220) at org.apache.hadoop.hbase.quotas.RegionServerRpcQuotaManager.checkQuota(RegionServerRpcQuotaManager.java:168) at org.apache.hadoop.hbase.regionserver.RSRpcServices.get(RSRpcServices.java:2532) at org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:44992) at 
org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor43.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:276) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:261) at org.apache.hadoop.hbase.client.RegionServerCallable.call(RegionServerCallable.java:126) at org.apache.hadoop.hbase.client.RpcRetryingCallerImpl.callWithRetries(RpcRetryingCallerImpl.java:104) at org.apache.hadoop.hbase.client.HTable.get(HTable.java:370) at org.apache.hadoop.hbase.client.HTable.get(HTable.java:343) at org.apache.hadoop.hbase.quotas.ThrottleQuotaTestUtil.doGets(ThrottleQuotaTestUtil.java:95) at org.apache.hadoop.hbase.quotas.TestClusterScopeQuotaThrottle.testUserNamespaceClusterScopeQuota(TestClusterScopeQuotaThrottle.java:199) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.quotas.RpcThrottlingException): 
org.apache.hadoop.hbase.quotas.RpcThrottlingException: number of read requests exceeded - wait 12sec, 0ms at org.apache.hadoop.hbase.quotas.RpcThrottlingException.throwThrottlingException(RpcThrottlingException.java:133) at org.apache.hadoop.hbase.quotas.RpcThrottlingException.throwNumReadRequestsExceeded(RpcThrottlingException.java:99) at org.apache.hadoop.hbase.quotas.TimeBasedLimiter.checkQuota(TimeBasedLimiter.java:172) at org.apache.hadoop.hbase.quotas.DefaultOperationQuota.checkQuota(DefaultOperationQuota.java:82) at org.apache.hadoop.hbase.quotas.RegionServerRpcQuotaManager.checkQuota(RegionServerRpcQuotaManager.java:220) at org.apache.hadoop.hbase.quotas.RegionServerRpcQuotaManager.checkQuota(RegionServerRpcQuotaManager.java:168) at org.apache.hadoop.hbase.regionserver.RSRpcServices.get(RSRpcServices.java:2532) at org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:44992) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:381) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:87) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:415) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:411) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:193) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:214) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-11-27 05:05:33,479 ERROR [Listener at localhost/34689] quotas.ThrottleQuotaTestUtil(100): get failed after nRetries=5 java.net.SocketTimeoutException: callTimeout=10000, callDuration=10001: org.apache.hadoop.hbase.quotas.RpcThrottlingException: number of read requests exceeded - wait 12sec, 0ms at org.apache.hadoop.hbase.quotas.RpcThrottlingException.throwThrottlingException(RpcThrottlingException.java:133) at org.apache.hadoop.hbase.quotas.RpcThrottlingException.throwNumReadRequestsExceeded(RpcThrottlingException.java:99) at org.apache.hadoop.hbase.quotas.TimeBasedLimiter.checkQuota(TimeBasedLimiter.java:172) at org.apache.hadoop.hbase.quotas.DefaultOperationQuota.checkQuota(DefaultOperationQuota.java:82) at org.apache.hadoop.hbase.quotas.RegionServerRpcQuotaManager.checkQuota(RegionServerRpcQuotaManager.java:220) at org.apache.hadoop.hbase.quotas.RegionServerRpcQuotaManager.checkQuota(RegionServerRpcQuotaManager.java:168) at org.apache.hadoop.hbase.regionserver.RSRpcServices.get(RSRpcServices.java:2532) at org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:44992) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) row 'row-5' on table 'TestQuotaAdmin0' at region=TestQuotaAdmin0,,1701061179762.44ac23936652c71f70e8746cf757ab6d., hostname=jenkins-hbase4.apache.org,41841,1701061176322, seqNum=2 at org.apache.hadoop.hbase.client.RpcRetryingCallerImpl.callWithRetries(RpcRetryingCallerImpl.java:156) at org.apache.hadoop.hbase.client.HTable.get(HTable.java:370) at org.apache.hadoop.hbase.client.HTable.get(HTable.java:343) at org.apache.hadoop.hbase.quotas.ThrottleQuotaTestUtil.doGets(ThrottleQuotaTestUtil.java:95) at org.apache.hadoop.hbase.quotas.TestClusterScopeQuotaThrottle.testUserNamespaceClusterScopeQuota(TestClusterScopeQuotaThrottle.java:199) at 
sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.quotas.RpcThrottlingException: org.apache.hadoop.hbase.quotas.RpcThrottlingException: number of read requests exceeded - wait 12sec, 0ms at org.apache.hadoop.hbase.quotas.RpcThrottlingException.throwThrottlingException(RpcThrottlingException.java:133) at org.apache.hadoop.hbase.quotas.RpcThrottlingException.throwNumReadRequestsExceeded(RpcThrottlingException.java:99) at org.apache.hadoop.hbase.quotas.TimeBasedLimiter.checkQuota(TimeBasedLimiter.java:172) at org.apache.hadoop.hbase.quotas.DefaultOperationQuota.checkQuota(DefaultOperationQuota.java:82) at org.apache.hadoop.hbase.quotas.RegionServerRpcQuotaManager.checkQuota(RegionServerRpcQuotaManager.java:220) at org.apache.hadoop.hbase.quotas.RegionServerRpcQuotaManager.checkQuota(RegionServerRpcQuotaManager.java:168) at org.apache.hadoop.hbase.regionserver.RSRpcServices.get(RSRpcServices.java:2532) at org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:44992) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor43.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at 
org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:276) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:261) at org.apache.hadoop.hbase.client.RegionServerCallable.call(RegionServerCallable.java:126) at org.apache.hadoop.hbase.client.RpcRetryingCallerImpl.callWithRetries(RpcRetryingCallerImpl.java:104) ... 29 more Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.quotas.RpcThrottlingException): org.apache.hadoop.hbase.quotas.RpcThrottlingException: number of read requests exceeded - wait 12sec, 0ms at org.apache.hadoop.hbase.quotas.RpcThrottlingException.throwThrottlingException(RpcThrottlingException.java:133) at org.apache.hadoop.hbase.quotas.RpcThrottlingException.throwNumReadRequestsExceeded(RpcThrottlingException.java:99) at org.apache.hadoop.hbase.quotas.TimeBasedLimiter.checkQuota(TimeBasedLimiter.java:172) at org.apache.hadoop.hbase.quotas.DefaultOperationQuota.checkQuota(DefaultOperationQuota.java:82) at org.apache.hadoop.hbase.quotas.RegionServerRpcQuotaManager.checkQuota(RegionServerRpcQuotaManager.java:220) at org.apache.hadoop.hbase.quotas.RegionServerRpcQuotaManager.checkQuota(RegionServerRpcQuotaManager.java:168) at org.apache.hadoop.hbase.regionserver.RSRpcServices.get(RSRpcServices.java:2532) at org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:44992) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:381) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:87) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:415) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:411) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:193) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:214) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-11-27 05:05:33,487 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41841] quotas.RegionServerRpcQuotaManager(222): Throttling exception for user=jenkins table=TestQuotaAdmin0 numWrites=1 numReads=0 numScans=0: number of write requests exceeded - wait 10sec, 0ms 2023-11-27 05:05:33,488 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41841] ipc.CallRunner(144): callId: 155 service: ClientService methodName: Mutate size: 142 connection: 172.31.14.131:47240 deadline: 1703931022428, exception=org.apache.hadoop.hbase.quotas.RpcThrottlingException: number of write requests exceeded - wait 10sec, 0ms 2023-11-27 05:05:33,741 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41841] quotas.RegionServerRpcQuotaManager(222): Throttling exception for user=jenkins table=TestQuotaAdmin0 numWrites=1 numReads=0 numScans=0: number of write requests exceeded - wait 10sec, 0ms 2023-11-27 05:05:33,742 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41841] ipc.CallRunner(144): callId: 157 service: ClientService methodName: Mutate size: 142 connection: 172.31.14.131:47240 deadline: 1703931022428, exception=org.apache.hadoop.hbase.quotas.RpcThrottlingException: number of write requests exceeded - wait 10sec, 0ms 2023-11-27 05:05:34,246 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41841] quotas.RegionServerRpcQuotaManager(222): Throttling exception for user=jenkins table=TestQuotaAdmin0 numWrites=1 numReads=0 numScans=0: number of write requests exceeded - wait 10sec, 0ms 2023-11-27 05:05:34,247 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41841] ipc.CallRunner(144): callId: 159 service: ClientService methodName: Mutate size: 142 connection: 172.31.14.131:47240 deadline: 1703931022428, exception=org.apache.hadoop.hbase.quotas.RpcThrottlingException: number of write requests exceeded - wait 10sec, 0ms 2023-11-27 05:05:35,006 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41841] quotas.RegionServerRpcQuotaManager(222): Throttling exception for user=jenkins table=TestQuotaAdmin0 numWrites=1 numReads=0 numScans=0: number of write requests exceeded - wait 10sec, 0ms 2023-11-27 05:05:35,006 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41841] ipc.CallRunner(144): callId: 161 service: ClientService methodName: Mutate size: 142 connection: 172.31.14.131:47240 deadline: 1703931022428, exception=org.apache.hadoop.hbase.quotas.RpcThrottlingException: number of write requests exceeded - wait 10sec, 0ms 2023-11-27 05:05:36,259 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41841] quotas.RegionServerRpcQuotaManager(222): Throttling exception for user=jenkins table=TestQuotaAdmin0 numWrites=1 numReads=0 numScans=0: number of write requests exceeded - wait 10sec, 0ms 2023-11-27 05:05:36,259 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41841] ipc.CallRunner(144): callId: 163 service: ClientService methodName: Mutate size: 142 connection: 172.31.14.131:47240 deadline: 1703931022428, exception=org.apache.hadoop.hbase.quotas.RpcThrottlingException: number of write requests exceeded - wait 10sec, 0ms 2023-11-27 05:05:38,766 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41841] quotas.RegionServerRpcQuotaManager(222): Throttling exception for user=jenkins table=TestQuotaAdmin0 numWrites=1 numReads=0 numScans=0: number of write requests exceeded - wait 10sec, 0ms 2023-11-27 05:05:38,766 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41841] ipc.CallRunner(144): callId: 165 service: ClientService 
methodName: Mutate size: 142 connection: 172.31.14.131:47240 deadline: 1703931022428, exception=org.apache.hadoop.hbase.quotas.RpcThrottlingException: number of write requests exceeded - wait 10sec, 0ms 2023-11-27 05:05:38,863 DEBUG [master/jenkins-hbase4:0.Chore.1] zookeeper.ReadOnlyZKClient(139): Connect 0x25947bba to 127.0.0.1:50029 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-11-27 05:05:38,869 DEBUG [master/jenkins-hbase4:0.Chore.1] ipc.AbstractRpcClient(189): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@51a58eca, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=null 2023-11-27 05:05:38,871 DEBUG [hconnection-0x75791de3-metaLookup-shared--pool-0] ipc.RpcConnection(122): Using SIMPLE authentication for service=ClientService, sasl=false 2023-11-27 05:05:38,872 INFO [RS-EventLoopGroup-3-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:45624, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-11-27 05:05:38,884 DEBUG [hconnection-0x75791de3-shared-pool-0] ipc.RpcConnection(122): Using SIMPLE authentication for service=ClientService, sasl=false 2023-11-27 05:05:38,885 INFO [RS-EventLoopGroup-4-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:38924, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-11-27 05:05:43,807 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41841] quotas.RegionServerRpcQuotaManager(222): Throttling exception for user=jenkins table=TestQuotaAdmin0 numWrites=1 numReads=0 numScans=0: number of write requests exceeded - wait 10sec, 0ms 2023-11-27 05:05:43,807 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41841] ipc.CallRunner(144): callId: 167 service: ClientService methodName: Mutate size: 142 connection: 172.31.14.131:47240 deadline: 1703931022428, exception=org.apache.hadoop.hbase.quotas.RpcThrottlingException: number of write requests exceeded - wait 10sec, 0ms 2023-11-27 05:05:43,808 DEBUG [Listener at localhost/34689] client.RpcRetryingCallerImpl(129): Call exception, tries=6, retries=16, started=0 ms ago, cancelled=false, msg=org.apache.hadoop.hbase.quotas.RpcThrottlingException: number of write requests exceeded - wait 10sec, 0ms at org.apache.hadoop.hbase.quotas.RpcThrottlingException.throwThrottlingException(RpcThrottlingException.java:133) at org.apache.hadoop.hbase.quotas.RpcThrottlingException.throwNumWriteRequestsExceeded(RpcThrottlingException.java:104) at org.apache.hadoop.hbase.quotas.TimeBasedLimiter.checkQuota(TimeBasedLimiter.java:158) at org.apache.hadoop.hbase.quotas.DefaultOperationQuota.checkQuota(DefaultOperationQuota.java:82) at org.apache.hadoop.hbase.quotas.RegionServerRpcQuotaManager.checkQuota(RegionServerRpcQuotaManager.java:220) at org.apache.hadoop.hbase.quotas.RegionServerRpcQuotaManager.checkQuota(RegionServerRpcQuotaManager.java:170) at org.apache.hadoop.hbase.regionserver.RSRpcServices.mutate(RSRpcServices.java:2942) at org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:44994) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) , 
details=row 'row-6' on table 'TestQuotaAdmin0' at region=TestQuotaAdmin0,,1701061179762.44ac23936652c71f70e8746cf757ab6d., hostname=jenkins-hbase4.apache.org,41841,1701061176322, seqNum=2, see https://s.apache.org/timeout, exception=org.apache.hadoop.hbase.quotas.RpcThrottlingException: org.apache.hadoop.hbase.quotas.RpcThrottlingException: number of write requests exceeded - wait 10sec, 0ms at org.apache.hadoop.hbase.quotas.RpcThrottlingException.throwThrottlingException(RpcThrottlingException.java:133) at org.apache.hadoop.hbase.quotas.RpcThrottlingException.throwNumWriteRequestsExceeded(RpcThrottlingException.java:104) at org.apache.hadoop.hbase.quotas.TimeBasedLimiter.checkQuota(TimeBasedLimiter.java:158) at org.apache.hadoop.hbase.quotas.DefaultOperationQuota.checkQuota(DefaultOperationQuota.java:82) at org.apache.hadoop.hbase.quotas.RegionServerRpcQuotaManager.checkQuota(RegionServerRpcQuotaManager.java:220) at org.apache.hadoop.hbase.quotas.RegionServerRpcQuotaManager.checkQuota(RegionServerRpcQuotaManager.java:170) at org.apache.hadoop.hbase.regionserver.RSRpcServices.mutate(RSRpcServices.java:2942) at org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:44994) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor43.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:276) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:261) at org.apache.hadoop.hbase.client.RegionServerCallable.call(RegionServerCallable.java:126) at org.apache.hadoop.hbase.client.RpcRetryingCallerImpl.callWithRetries(RpcRetryingCallerImpl.java:104) at org.apache.hadoop.hbase.client.HTable.put(HTable.java:512) at org.apache.hadoop.hbase.quotas.ThrottleQuotaTestUtil.doPuts(ThrottleQuotaTestUtil.java:71) at org.apache.hadoop.hbase.quotas.ThrottleQuotaTestUtil.doPuts(ThrottleQuotaTestUtil.java:54) at org.apache.hadoop.hbase.quotas.TestClusterScopeQuotaThrottle.testUserNamespaceClusterScopeQuota(TestClusterScopeQuotaThrottle.java:200) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at 
org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.quotas.RpcThrottlingException): org.apache.hadoop.hbase.quotas.RpcThrottlingException: number of write requests exceeded - wait 10sec, 0ms at org.apache.hadoop.hbase.quotas.RpcThrottlingException.throwThrottlingException(RpcThrottlingException.java:133) at org.apache.hadoop.hbase.quotas.RpcThrottlingException.throwNumWriteRequestsExceeded(RpcThrottlingException.java:104) at org.apache.hadoop.hbase.quotas.TimeBasedLimiter.checkQuota(TimeBasedLimiter.java:158) at org.apache.hadoop.hbase.quotas.DefaultOperationQuota.checkQuota(DefaultOperationQuota.java:82) at org.apache.hadoop.hbase.quotas.RegionServerRpcQuotaManager.checkQuota(RegionServerRpcQuotaManager.java:220) at org.apache.hadoop.hbase.quotas.RegionServerRpcQuotaManager.checkQuota(RegionServerRpcQuotaManager.java:170) at org.apache.hadoop.hbase.regionserver.RSRpcServices.mutate(RSRpcServices.java:2942) at org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:44994) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:381) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:87) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:415) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:411) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:193) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:214) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-11-27 05:05:43,809 ERROR [Listener at localhost/34689] quotas.ThrottleQuotaTestUtil(76): put failed after nRetries=6 java.net.SocketTimeoutException: callTimeout=10000, callDuration=10095: org.apache.hadoop.hbase.quotas.RpcThrottlingException: number of write requests exceeded - wait 10sec, 0ms at org.apache.hadoop.hbase.quotas.RpcThrottlingException.throwThrottlingException(RpcThrottlingException.java:133) at org.apache.hadoop.hbase.quotas.RpcThrottlingException.throwNumWriteRequestsExceeded(RpcThrottlingException.java:104) at org.apache.hadoop.hbase.quotas.TimeBasedLimiter.checkQuota(TimeBasedLimiter.java:158) at org.apache.hadoop.hbase.quotas.DefaultOperationQuota.checkQuota(DefaultOperationQuota.java:82) at org.apache.hadoop.hbase.quotas.RegionServerRpcQuotaManager.checkQuota(RegionServerRpcQuotaManager.java:220) at org.apache.hadoop.hbase.quotas.RegionServerRpcQuotaManager.checkQuota(RegionServerRpcQuotaManager.java:170) at org.apache.hadoop.hbase.regionserver.RSRpcServices.mutate(RSRpcServices.java:2942) at org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:44994) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) row 'row-6' on table 'TestQuotaAdmin0' at region=TestQuotaAdmin0,,1701061179762.44ac23936652c71f70e8746cf757ab6d., hostname=jenkins-hbase4.apache.org,41841,1701061176322, seqNum=2 at org.apache.hadoop.hbase.client.RpcRetryingCallerImpl.callWithRetries(RpcRetryingCallerImpl.java:156) at org.apache.hadoop.hbase.client.HTable.put(HTable.java:512) at org.apache.hadoop.hbase.quotas.ThrottleQuotaTestUtil.doPuts(ThrottleQuotaTestUtil.java:71) at org.apache.hadoop.hbase.quotas.ThrottleQuotaTestUtil.doPuts(ThrottleQuotaTestUtil.java:54) at org.apache.hadoop.hbase.quotas.TestClusterScopeQuotaThrottle.testUserNamespaceClusterScopeQuota(TestClusterScopeQuotaThrottle.java:200) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at 
org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.quotas.RpcThrottlingException: org.apache.hadoop.hbase.quotas.RpcThrottlingException: number of write requests exceeded - wait 10sec, 0ms at org.apache.hadoop.hbase.quotas.RpcThrottlingException.throwThrottlingException(RpcThrottlingException.java:133) at org.apache.hadoop.hbase.quotas.RpcThrottlingException.throwNumWriteRequestsExceeded(RpcThrottlingException.java:104) at org.apache.hadoop.hbase.quotas.TimeBasedLimiter.checkQuota(TimeBasedLimiter.java:158) at org.apache.hadoop.hbase.quotas.DefaultOperationQuota.checkQuota(DefaultOperationQuota.java:82) at org.apache.hadoop.hbase.quotas.RegionServerRpcQuotaManager.checkQuota(RegionServerRpcQuotaManager.java:220) at org.apache.hadoop.hbase.quotas.RegionServerRpcQuotaManager.checkQuota(RegionServerRpcQuotaManager.java:170) at org.apache.hadoop.hbase.regionserver.RSRpcServices.mutate(RSRpcServices.java:2942) at org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:44994) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor43.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:276) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:261) at org.apache.hadoop.hbase.client.RegionServerCallable.call(RegionServerCallable.java:126) at org.apache.hadoop.hbase.client.RpcRetryingCallerImpl.callWithRetries(RpcRetryingCallerImpl.java:104) ... 
29 more Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.quotas.RpcThrottlingException): org.apache.hadoop.hbase.quotas.RpcThrottlingException: number of write requests exceeded - wait 10sec, 0ms at org.apache.hadoop.hbase.quotas.RpcThrottlingException.throwThrottlingException(RpcThrottlingException.java:133) at org.apache.hadoop.hbase.quotas.RpcThrottlingException.throwNumWriteRequestsExceeded(RpcThrottlingException.java:104) at org.apache.hadoop.hbase.quotas.TimeBasedLimiter.checkQuota(TimeBasedLimiter.java:158) at org.apache.hadoop.hbase.quotas.DefaultOperationQuota.checkQuota(DefaultOperationQuota.java:82) at org.apache.hadoop.hbase.quotas.RegionServerRpcQuotaManager.checkQuota(RegionServerRpcQuotaManager.java:220) at org.apache.hadoop.hbase.quotas.RegionServerRpcQuotaManager.checkQuota(RegionServerRpcQuotaManager.java:170) at org.apache.hadoop.hbase.regionserver.RSRpcServices.mutate(RSRpcServices.java:2942) at org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:44994) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:381) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:87) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:415) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:411) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:193) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:214) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-11-27 05:05:43,809 DEBUG [Listener at localhost/34689] ipc.RpcConnection(122): Using SIMPLE authentication for service=MasterService, sasl=false 2023-11-27 05:05:43,812 INFO [RS-EventLoopGroup-1-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:58902, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=MasterService 2023-11-27 05:05:44,036 INFO [regionserver/jenkins-hbase4:0.Chore.3] regionserver.HRegionServer$PeriodicMemStoreFlusher(1921): MemstoreFlusherChore requesting flush of hbase:quota,,1701061178843.be5ef4f3dfb2c43b447798061e19f02f. 
because be5ef4f3dfb2c43b447798061e19f02f/q has an old edit so flush to free WALs after random delay 288955 ms 2023-11-27 05:05:44,066 DEBUG [Listener at localhost/34689] quotas.ThrottleQuotaTestUtil(184): QuotaCache 2023-11-27 05:05:44,066 DEBUG [Listener at localhost/34689] quotas.ThrottleQuotaTestUtil(185): {default=QuotaState(ts=1703934612428 bypass), TestNs=QuotaState(ts=1703934612428 bypass)} 2023-11-27 05:05:44,066 DEBUG [Listener at localhost/34689] quotas.ThrottleQuotaTestUtil(186): {TestQuotaAdmin0=QuotaState(ts=0 bypass), TestNs:TestTable=QuotaState(ts=1703934612428 bypass), TestQuotaAdmin1=QuotaState(ts=1703934612428 bypass)} 2023-11-27 05:05:44,066 DEBUG [Listener at localhost/34689] quotas.ThrottleQuotaTestUtil(187): {jenkins=UserQuotaState(ts=1703934612428 bypass)} 2023-11-27 05:05:44,066 DEBUG [Listener at localhost/34689] quotas.ThrottleQuotaTestUtil(188): {all=QuotaState(ts=1703934612428 bypass)} 2023-11-27 05:05:44,317 DEBUG [Listener at localhost/34689] quotas.ThrottleQuotaTestUtil(184): QuotaCache 2023-11-27 05:05:44,317 DEBUG [Listener at localhost/34689] quotas.ThrottleQuotaTestUtil(185): {default=QuotaState(ts=1703934612428 bypass), TestNs=QuotaState(ts=1703934612428 bypass)} 2023-11-27 05:05:44,317 DEBUG [Listener at localhost/34689] quotas.ThrottleQuotaTestUtil(186): {TestQuotaAdmin0=QuotaState(ts=1703934612428 bypass), TestNs:TestTable=QuotaState(ts=1703934612428 bypass), TestQuotaAdmin2=QuotaState(ts=1703934612428 bypass)} 2023-11-27 05:05:44,317 DEBUG [Listener at localhost/34689] quotas.ThrottleQuotaTestUtil(187): {jenkins=UserQuotaState(ts=1703934612428 bypass)} 2023-11-27 05:05:44,317 DEBUG [Listener at localhost/34689] quotas.ThrottleQuotaTestUtil(188): {all=QuotaState(ts=1703934612428 bypass)} 2023-11-27 05:05:44,330 INFO [Listener at localhost/34689] hbase.ResourceChecker(175): after: quotas.TestClusterScopeQuotaThrottle#testUserNamespaceClusterScopeQuota Thread=294 (was 293) Potentially hanging thread: hconnection-0x63a58a6e-shared-pool-17 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: regionserver/jenkins-hbase4:0.Chore.7 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1088) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Parameter Sending Thread #2 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:460) java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:362) java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:941) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x5a76e899-shared-pool-21 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: region-location-4 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1081) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: region-location-3 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1081) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: regionserver/jenkins-hbase4:0.Chore.8 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1088) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 
ReadOnlyZKClient-127.0.0.1:50029@0x25947bba-SendThread(127.0.0.1:50029) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:332) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1289) Potentially hanging thread: RPCClient-NioEventLoopGroup-5-4 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: region-location-2 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1081) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_2137806402_17 at /127.0.0.1:59338 [Receiving block BP-1577092985-172.31.14.131-1701061172104:blk_1073741854_1030] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) 
org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost:41015/user/jenkins/test-data/88c861d8-82fa-b545-1769-68eb426e44d0-prefix:jenkins-hbase4.apache.org,41853,1701061176279.meta sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1577092985-172.31.14.131-1701061172104:blk_1073741854_1030, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x63a58a6e-shared-pool-16 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) 
java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1577092985-172.31.14.131-1701061172104:blk_1073741854_1030, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RSProcedureDispatcher-pool-5 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: regionserver/jenkins-hbase4:0.Chore.4 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1088) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x63a58a6e-shared-pool-14 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-5-6 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) 
org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_2137806402_17 at /127.0.0.1:34610 [Receiving block BP-1577092985-172.31.14.131-1701061172104:blk_1073741854_1030] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: regionserver/jenkins-hbase4:0.Chore.3 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x63a58a6e-shared-pool-15 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: regionserver/jenkins-hbase4:0.Chore.6 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1088) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x5a76e899-shared-pool-20 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:50029@0x25947bba-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:549) Potentially hanging thread: RS_CLOSE_META-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-5-5 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) 
sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: regionserver/jenkins-hbase4:0.Chore.2 java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hbase.ipc.BlockingRpcCallback.get(BlockingRpcCallback.java:60) org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:328) org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$200(AbstractRpcClient.java:87) org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:575) org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$BlockingStub.getClusterStatus(MasterProtos.java) org.apache.hadoop.hbase.client.ConnectionImplementation$3.getClusterStatus(ConnectionImplementation.java:1766) org.apache.hadoop.hbase.client.HBaseAdmin$47.rpcCall(HBaseAdmin.java:2060) org.apache.hadoop.hbase.client.HBaseAdmin$47.rpcCall(HBaseAdmin.java:2055) org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:99) org.apache.hadoop.hbase.client.RpcRetryingCallerImpl.callWithRetries(RpcRetryingCallerImpl.java:104) org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:2954) org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:2946) org.apache.hadoop.hbase.client.HBaseAdmin.getClusterMetrics(HBaseAdmin.java:2054) org.apache.hadoop.hbase.quotas.QuotaCache$QuotaRefresherChore.updateQuotaFactors(QuotaCache.java:359) org.apache.hadoop.hbase.quotas.QuotaCache$QuotaRefresherChore.chore(QuotaCache.java:224) org.apache.hadoop.hbase.ScheduledChore.run(ScheduledChore.java:158) java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308) java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180) java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294) org.apache.hadoop.hbase.JitterScheduledThreadPoolExecutorImpl$JitteredRunnableScheduledFuture.run(JitterScheduledThreadPoolExecutorImpl.java:107) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: regionserver/jenkins-hbase4:0.Chore.5 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1088) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: regionserver/jenkins-hbase4:0.Chore.2 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: regionserver/jenkins-hbase4:0.Chore.3 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1088) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:50029@0x25947bba sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$88/2038390170.run(Unknown Source) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=614 (was 623), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=158 (was 122) - SystemLoadAverage LEAK? 
-, ProcessCount=167 (was 171), AvailableMemoryMB=7294 (was 7899) 2023-11-27 05:05:44,342 INFO [Listener at localhost/34689] hbase.ResourceChecker(147): before: quotas.TestClusterScopeQuotaThrottle#testUserClusterScopeQuota Thread=295, OpenFileDescriptor=614, MaxFileDescriptor=60000, SystemLoadAverage=158, ProcessCount=167, AvailableMemoryMB=7293 2023-11-27 05:05:44,352 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2489): Flush column family: q of be5ef4f3dfb2c43b447798061e19f02f because time of oldest edit=1703931012428 is > 3600000 from now =1703938212428 2023-11-27 05:05:44,353 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing be5ef4f3dfb2c43b447798061e19f02f 1/2 column families, dataSize=138 B heapSize=872 B; q={dataSize=138 B, heapSize=616 B, offHeapSize=0 B} 2023-11-27 05:05:44,369 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=138 B at sequenceid=17 (bloomFilter=true), to=hdfs://localhost:41015/user/jenkins/test-data/88c861d8-82fa-b545-1769-68eb426e44d0/data/hbase/quota/be5ef4f3dfb2c43b447798061e19f02f/.tmp/q/4f36dab3a5fa4a1ab903980c17dab4ab 2023-11-27 05:05:44,376 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:41015/user/jenkins/test-data/88c861d8-82fa-b545-1769-68eb426e44d0/data/hbase/quota/be5ef4f3dfb2c43b447798061e19f02f/.tmp/q/4f36dab3a5fa4a1ab903980c17dab4ab as hdfs://localhost:41015/user/jenkins/test-data/88c861d8-82fa-b545-1769-68eb426e44d0/data/hbase/quota/be5ef4f3dfb2c43b447798061e19f02f/q/4f36dab3a5fa4a1ab903980c17dab4ab 2023-11-27 05:05:44,382 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost:41015/user/jenkins/test-data/88c861d8-82fa-b545-1769-68eb426e44d0/data/hbase/quota/be5ef4f3dfb2c43b447798061e19f02f/q/4f36dab3a5fa4a1ab903980c17dab4ab, entries=2, sequenceid=17, filesize=4.8 K 2023-11-27 05:05:44,383 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~138 B/138, heapSize ~600 B/600, currentSize=0 B/0 for be5ef4f3dfb2c43b447798061e19f02f in 0ms, sequenceid=17, compaction requested=false 2023-11-27 05:05:44,383 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for be5ef4f3dfb2c43b447798061e19f02f: 2023-11-27 05:05:44,627 WARN [RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=41841] ipc.RpcServer(528): (responseTooSlow): {"call":"Get(org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$GetRequest)","starttimems":"1703938581228","responsesize":"63","method":"Get","param":"region= hbase:quota,,1701061178843.be5ef4f3dfb2c43b447798061e19f02f., row=u.jenkins","processingtimems":136600,"client":"172.31.14.131:53770","queuetimems":1900,"class":"MiniHBaseClusterRegionServer"} 2023-11-27 05:05:44,627 WARN [RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=41841] ipc.RpcServer(528): (responseTooSlow): {"call":"Get(org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$GetRequest)","starttimems":"1703938587928","responsesize":"63","method":"Get","param":"region= hbase:quota,,1701061178843.be5ef4f3dfb2c43b447798061e19f02f., row=u.jenkins","processingtimems":133700,"client":"172.31.14.131:53770","queuetimems":7300,"class":"MiniHBaseClusterRegionServer"} 2023-11-27 05:05:44,627 WARN [RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=41841] ipc.CallRunner(105): Dropping timed out call: callId: 2885 service: ClientService methodName: Get size: 226 connection: 172.31.14.131:53770 deadline: 1703938661828 param: region= 
hbase:quota,,1701061178843.be5ef4f3dfb2c43b447798061e19f02f., row=u.jenkins connection: 172.31.14.131:53770 2023-11-27 05:05:44,627 WARN [RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=41841] ipc.CallRunner(105): Dropping timed out call: callId: 2884 service: ClientService methodName: Get size: 226 connection: 172.31.14.131:53770 deadline: 1703938640828 param: region= hbase:quota,,1701061178843.be5ef4f3dfb2c43b447798061e19f02f., row=u.jenkins connection: 172.31.14.131:53770 2023-11-27 05:05:44,627 WARN [RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=41841] ipc.CallRunner(105): Dropping timed out call: callId: 2886 service: ClientService methodName: Get size: 226 connection: 172.31.14.131:53770 deadline: 1703938662228 param: region= hbase:quota,,1701061178843.be5ef4f3dfb2c43b447798061e19f02f., row=u.jenkins connection: 172.31.14.131:53770 2023-11-27 05:05:44,627 WARN [RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=41841] ipc.CallRunner(105): Dropping timed out call: callId: 2887 service: ClientService methodName: Get size: 226 connection: 172.31.14.131:53770 deadline: 1703938662428 param: region= hbase:quota,,1701061178843.be5ef4f3dfb2c43b447798061e19f02f., row=u.jenkins connection: 172.31.14.131:53770 2023-11-27 05:05:44,627 WARN [RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=41841] ipc.CallRunner(105): Dropping timed out call: callId: 2888 service: ClientService methodName: Get size: 226 connection: 172.31.14.131:53770 deadline: 1703938676828 param: region= hbase:quota,,1701061178843.be5ef4f3dfb2c43b447798061e19f02f., row=u.jenkins connection: 172.31.14.131:53770 2023-11-27 05:05:44,628 WARN [RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=41841] ipc.CallRunner(105): Dropping timed out call: callId: 2889 service: ClientService methodName: Get size: 226 connection: 172.31.14.131:53770 deadline: 1703938677028 param: region= hbase:quota,,1701061178843.be5ef4f3dfb2c43b447798061e19f02f., row=u.jenkins connection: 172.31.14.131:53770 2023-11-27 05:05:44,628 DEBUG [Listener at localhost/34689] quotas.ThrottleQuotaTestUtil(184): QuotaCache 2023-11-27 05:05:44,629 DEBUG [Listener at localhost/34689] quotas.ThrottleQuotaTestUtil(185): {default=QuotaState(ts=1703938507328 bypass), TestNs=QuotaState(ts=1703938507328 bypass)} 2023-11-27 05:05:44,629 DEBUG [Listener at localhost/34689] quotas.ThrottleQuotaTestUtil(186): {TestQuotaAdmin0=QuotaState(ts=1703938557528 bypass), TestNs:TestTable=QuotaState(ts=1703938557528 bypass), TestQuotaAdmin2=QuotaState(ts=1703938557528 bypass), TestQuotaAdmin1=QuotaState(ts=1703938557528 bypass)} 2023-11-27 05:05:44,629 DEBUG [Listener at localhost/34689] quotas.ThrottleQuotaTestUtil(187): {jenkins=UserQuotaState(ts=1703938569928 global-limiter)} 2023-11-27 05:05:44,629 DEBUG [Listener at localhost/34689] quotas.ThrottleQuotaTestUtil(188): {all=QuotaState(ts=1703938212428 bypass)} 2023-11-27 05:05:44,888 DEBUG [Listener at localhost/34689] quotas.ThrottleQuotaTestUtil(184): QuotaCache 2023-11-27 05:05:44,888 DEBUG [Listener at localhost/34689] quotas.ThrottleQuotaTestUtil(185): {default=QuotaState(ts=1703938765728 bypass), TestNs=QuotaState(ts=1703938765728 bypass)} 2023-11-27 05:05:44,888 DEBUG [Listener at localhost/34689] quotas.ThrottleQuotaTestUtil(186): {TestQuotaAdmin0=QuotaState(ts=1703938785128 bypass), TestNs:TestTable=QuotaState(ts=1703938785128 bypass), TestQuotaAdmin2=QuotaState(ts=1703938785128 bypass), TestQuotaAdmin1=QuotaState(ts=1703938785128 bypass)} 2023-11-27 05:05:44,888 DEBUG [Listener at 
localhost/34689] quotas.ThrottleQuotaTestUtil(187): {jenkins=UserQuotaState(ts=1703938794628 global-limiter)} 2023-11-27 05:05:44,888 DEBUG [Listener at localhost/34689] quotas.ThrottleQuotaTestUtil(188): {all=QuotaState(ts=1703938802828 bypass)} 2023-11-27 05:05:44,896 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41841] quotas.RegionServerRpcQuotaManager(222): Throttling exception for user=jenkins table=TestQuotaAdmin0 numWrites=1 numReads=0 numScans=0: number of write requests exceeded - wait 10sec, 0ms 2023-11-27 05:05:44,896 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41841] ipc.CallRunner(144): callId: 180 service: ClientService methodName: Mutate size: 142 connection: 172.31.14.131:47240 deadline: 1703938812828, exception=org.apache.hadoop.hbase.quotas.RpcThrottlingException: number of write requests exceeded - wait 10sec, 0ms 2023-11-27 05:05:45,037 INFO [regionserver/jenkins-hbase4:0.Chore.1] regionserver.HRegionServer$PeriodicMemStoreFlusher(1921): MemstoreFlusherChore requesting flush of TestQuotaAdmin0,,1701061179762.44ac23936652c71f70e8746cf757ab6d. because 44ac23936652c71f70e8746cf757ab6d/cf has an old edit so flush to free WALs after random delay 174395 ms 2023-11-27 05:05:45,148 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41841] quotas.RegionServerRpcQuotaManager(222): Throttling exception for user=jenkins table=TestQuotaAdmin0 numWrites=1 numReads=0 numScans=0: number of write requests exceeded - wait 10sec, 0ms 2023-11-27 05:05:45,149 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41841] ipc.CallRunner(144): callId: 182 service: ClientService methodName: Mutate size: 142 connection: 172.31.14.131:47240 deadline: 1703938812828, exception=org.apache.hadoop.hbase.quotas.RpcThrottlingException: number of write requests exceeded - wait 10sec, 0ms 2023-11-27 05:05:45,651 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41841] quotas.RegionServerRpcQuotaManager(222): Throttling exception for user=jenkins table=TestQuotaAdmin0 numWrites=1 numReads=0 numScans=0: number of write requests exceeded - wait 10sec, 0ms 2023-11-27 05:05:45,651 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41841] ipc.CallRunner(144): callId: 184 service: ClientService methodName: Mutate size: 142 connection: 172.31.14.131:47240 deadline: 1703938812828, exception=org.apache.hadoop.hbase.quotas.RpcThrottlingException: number of write requests exceeded - wait 10sec, 0ms 2023-11-27 05:05:46,406 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41841] quotas.RegionServerRpcQuotaManager(222): Throttling exception for user=jenkins table=TestQuotaAdmin0 numWrites=1 numReads=0 numScans=0: number of write requests exceeded - wait 10sec, 0ms 2023-11-27 05:05:46,406 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41841] ipc.CallRunner(144): callId: 186 service: ClientService methodName: Mutate size: 142 connection: 172.31.14.131:47240 deadline: 1703938812828, exception=org.apache.hadoop.hbase.quotas.RpcThrottlingException: number of write requests exceeded - wait 10sec, 0ms 2023-11-27 05:05:47,662 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41841] quotas.RegionServerRpcQuotaManager(222): Throttling exception for user=jenkins table=TestQuotaAdmin0 numWrites=1 numReads=0 numScans=0: number of write requests exceeded - wait 10sec, 0ms 2023-11-27 05:05:47,662 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41841] ipc.CallRunner(144): callId: 188 service: ClientService methodName: Mutate size: 142 connection: 
172.31.14.131:47240 deadline: 1703938812828, exception=org.apache.hadoop.hbase.quotas.RpcThrottlingException: number of write requests exceeded - wait 10sec, 0ms 2023-11-27 05:05:50,177 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41841] quotas.RegionServerRpcQuotaManager(222): Throttling exception for user=jenkins table=TestQuotaAdmin0 numWrites=1 numReads=0 numScans=0: number of write requests exceeded - wait 10sec, 0ms 2023-11-27 05:05:50,177 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41841] ipc.CallRunner(144): callId: 190 service: ClientService methodName: Mutate size: 142 connection: 172.31.14.131:47240 deadline: 1703938812828, exception=org.apache.hadoop.hbase.quotas.RpcThrottlingException: number of write requests exceeded - wait 10sec, 0ms 2023-11-27 05:05:55,191 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41841] quotas.RegionServerRpcQuotaManager(222): Throttling exception for user=jenkins table=TestQuotaAdmin0 numWrites=1 numReads=0 numScans=0: number of write requests exceeded - wait 10sec, 0ms 2023-11-27 05:05:55,191 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41841] ipc.CallRunner(144): callId: 192 service: ClientService methodName: Mutate size: 142 connection: 172.31.14.131:47240 deadline: 1703938812828, exception=org.apache.hadoop.hbase.quotas.RpcThrottlingException: number of write requests exceeded - wait 10sec, 0ms 2023-11-27 05:05:55,193 DEBUG [Listener at localhost/34689] client.RpcRetryingCallerImpl(129): Call exception, tries=6, retries=16, started=0 ms ago, cancelled=false, msg=org.apache.hadoop.hbase.quotas.RpcThrottlingException: number of write requests exceeded - wait 10sec, 0ms at org.apache.hadoop.hbase.quotas.RpcThrottlingException.throwThrottlingException(RpcThrottlingException.java:133) at org.apache.hadoop.hbase.quotas.RpcThrottlingException.throwNumWriteRequestsExceeded(RpcThrottlingException.java:104) at org.apache.hadoop.hbase.quotas.TimeBasedLimiter.checkQuota(TimeBasedLimiter.java:158) at org.apache.hadoop.hbase.quotas.DefaultOperationQuota.checkQuota(DefaultOperationQuota.java:82) at org.apache.hadoop.hbase.quotas.RegionServerRpcQuotaManager.checkQuota(RegionServerRpcQuotaManager.java:220) at org.apache.hadoop.hbase.quotas.RegionServerRpcQuotaManager.checkQuota(RegionServerRpcQuotaManager.java:170) at org.apache.hadoop.hbase.regionserver.RSRpcServices.mutate(RSRpcServices.java:2942) at org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:44994) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) , details=row 'row-6' on table 'TestQuotaAdmin0' at region=TestQuotaAdmin0,,1701061179762.44ac23936652c71f70e8746cf757ab6d., hostname=jenkins-hbase4.apache.org,41841,1701061176322, seqNum=2, see https://s.apache.org/timeout, exception=org.apache.hadoop.hbase.quotas.RpcThrottlingException: org.apache.hadoop.hbase.quotas.RpcThrottlingException: number of write requests exceeded - wait 10sec, 0ms at org.apache.hadoop.hbase.quotas.RpcThrottlingException.throwThrottlingException(RpcThrottlingException.java:133) at org.apache.hadoop.hbase.quotas.RpcThrottlingException.throwNumWriteRequestsExceeded(RpcThrottlingException.java:104) at org.apache.hadoop.hbase.quotas.TimeBasedLimiter.checkQuota(TimeBasedLimiter.java:158) 
at org.apache.hadoop.hbase.quotas.DefaultOperationQuota.checkQuota(DefaultOperationQuota.java:82) at org.apache.hadoop.hbase.quotas.RegionServerRpcQuotaManager.checkQuota(RegionServerRpcQuotaManager.java:220) at org.apache.hadoop.hbase.quotas.RegionServerRpcQuotaManager.checkQuota(RegionServerRpcQuotaManager.java:170) at org.apache.hadoop.hbase.regionserver.RSRpcServices.mutate(RSRpcServices.java:2942) at org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:44994) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor43.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:276) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:261) at org.apache.hadoop.hbase.client.RegionServerCallable.call(RegionServerCallable.java:126) at org.apache.hadoop.hbase.client.RpcRetryingCallerImpl.callWithRetries(RpcRetryingCallerImpl.java:104) at org.apache.hadoop.hbase.client.HTable.put(HTable.java:512) at org.apache.hadoop.hbase.quotas.ThrottleQuotaTestUtil.doPuts(ThrottleQuotaTestUtil.java:71) at org.apache.hadoop.hbase.quotas.ThrottleQuotaTestUtil.doPuts(ThrottleQuotaTestUtil.java:54) at org.apache.hadoop.hbase.quotas.TestClusterScopeQuotaThrottle.testUserClusterScopeQuota(TestClusterScopeQuotaThrottle.java:178) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.quotas.RpcThrottlingException): org.apache.hadoop.hbase.quotas.RpcThrottlingException: number of write requests exceeded - wait 10sec, 0ms at org.apache.hadoop.hbase.quotas.RpcThrottlingException.throwThrottlingException(RpcThrottlingException.java:133) at org.apache.hadoop.hbase.quotas.RpcThrottlingException.throwNumWriteRequestsExceeded(RpcThrottlingException.java:104) at org.apache.hadoop.hbase.quotas.TimeBasedLimiter.checkQuota(TimeBasedLimiter.java:158) at org.apache.hadoop.hbase.quotas.DefaultOperationQuota.checkQuota(DefaultOperationQuota.java:82) at org.apache.hadoop.hbase.quotas.RegionServerRpcQuotaManager.checkQuota(RegionServerRpcQuotaManager.java:220) at org.apache.hadoop.hbase.quotas.RegionServerRpcQuotaManager.checkQuota(RegionServerRpcQuotaManager.java:170) at org.apache.hadoop.hbase.regionserver.RSRpcServices.mutate(RSRpcServices.java:2942) at org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:44994) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:381) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:87) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:415) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:411) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:193) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:214) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) 
at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-11-27 05:05:55,193 ERROR [Listener at localhost/34689] quotas.ThrottleQuotaTestUtil(76): put failed after nRetries=6 java.net.SocketTimeoutException: callTimeout=10000, callDuration=10094: org.apache.hadoop.hbase.quotas.RpcThrottlingException: number of write requests exceeded - wait 10sec, 0ms at org.apache.hadoop.hbase.quotas.RpcThrottlingException.throwThrottlingException(RpcThrottlingException.java:133) at org.apache.hadoop.hbase.quotas.RpcThrottlingException.throwNumWriteRequestsExceeded(RpcThrottlingException.java:104) at org.apache.hadoop.hbase.quotas.TimeBasedLimiter.checkQuota(TimeBasedLimiter.java:158) at org.apache.hadoop.hbase.quotas.DefaultOperationQuota.checkQuota(DefaultOperationQuota.java:82) at org.apache.hadoop.hbase.quotas.RegionServerRpcQuotaManager.checkQuota(RegionServerRpcQuotaManager.java:220) at org.apache.hadoop.hbase.quotas.RegionServerRpcQuotaManager.checkQuota(RegionServerRpcQuotaManager.java:170) at org.apache.hadoop.hbase.regionserver.RSRpcServices.mutate(RSRpcServices.java:2942) at org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:44994) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) row 'row-6' on table 'TestQuotaAdmin0' at region=TestQuotaAdmin0,,1701061179762.44ac23936652c71f70e8746cf757ab6d., hostname=jenkins-hbase4.apache.org,41841,1701061176322, seqNum=2 at org.apache.hadoop.hbase.client.RpcRetryingCallerImpl.callWithRetries(RpcRetryingCallerImpl.java:156) at org.apache.hadoop.hbase.client.HTable.put(HTable.java:512) at org.apache.hadoop.hbase.quotas.ThrottleQuotaTestUtil.doPuts(ThrottleQuotaTestUtil.java:71) at org.apache.hadoop.hbase.quotas.ThrottleQuotaTestUtil.doPuts(ThrottleQuotaTestUtil.java:54) at org.apache.hadoop.hbase.quotas.TestClusterScopeQuotaThrottle.testUserClusterScopeQuota(TestClusterScopeQuotaThrottle.java:178) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.quotas.RpcThrottlingException: org.apache.hadoop.hbase.quotas.RpcThrottlingException: number of write requests exceeded - wait 10sec, 0ms at org.apache.hadoop.hbase.quotas.RpcThrottlingException.throwThrottlingException(RpcThrottlingException.java:133) at org.apache.hadoop.hbase.quotas.RpcThrottlingException.throwNumWriteRequestsExceeded(RpcThrottlingException.java:104) at org.apache.hadoop.hbase.quotas.TimeBasedLimiter.checkQuota(TimeBasedLimiter.java:158) at org.apache.hadoop.hbase.quotas.DefaultOperationQuota.checkQuota(DefaultOperationQuota.java:82) at org.apache.hadoop.hbase.quotas.RegionServerRpcQuotaManager.checkQuota(RegionServerRpcQuotaManager.java:220) at org.apache.hadoop.hbase.quotas.RegionServerRpcQuotaManager.checkQuota(RegionServerRpcQuotaManager.java:170) at org.apache.hadoop.hbase.regionserver.RSRpcServices.mutate(RSRpcServices.java:2942) at org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:44994) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor43.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:276) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:261) at org.apache.hadoop.hbase.client.RegionServerCallable.call(RegionServerCallable.java:126) at org.apache.hadoop.hbase.client.RpcRetryingCallerImpl.callWithRetries(RpcRetryingCallerImpl.java:104) ... 
29 more Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.quotas.RpcThrottlingException): org.apache.hadoop.hbase.quotas.RpcThrottlingException: number of write requests exceeded - wait 10sec, 0ms at org.apache.hadoop.hbase.quotas.RpcThrottlingException.throwThrottlingException(RpcThrottlingException.java:133) at org.apache.hadoop.hbase.quotas.RpcThrottlingException.throwNumWriteRequestsExceeded(RpcThrottlingException.java:104) at org.apache.hadoop.hbase.quotas.TimeBasedLimiter.checkQuota(TimeBasedLimiter.java:158) at org.apache.hadoop.hbase.quotas.DefaultOperationQuota.checkQuota(DefaultOperationQuota.java:82) at org.apache.hadoop.hbase.quotas.RegionServerRpcQuotaManager.checkQuota(RegionServerRpcQuotaManager.java:220) at org.apache.hadoop.hbase.quotas.RegionServerRpcQuotaManager.checkQuota(RegionServerRpcQuotaManager.java:170) at org.apache.hadoop.hbase.regionserver.RSRpcServices.mutate(RSRpcServices.java:2942) at org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:44994) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:381) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:87) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:415) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:411) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:193) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:214) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-11-27 05:05:55,195 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41841] quotas.RegionServerRpcQuotaManager(222): Throttling exception for user=jenkins table=TestQuotaAdmin0 numWrites=0 numReads=1 numScans=0: number of read requests exceeded - wait 20sec, 0ms 2023-11-27 05:05:55,196 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41841] ipc.CallRunner(144): callId: 196 service: ClientService methodName: Get size: 114 connection: 172.31.14.131:47240 deadline: 1703938812828, exception=org.apache.hadoop.hbase.quotas.RpcThrottlingException: number of read requests exceeded - wait 20sec, 0ms 2023-11-27 05:05:55,450 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41841] quotas.RegionServerRpcQuotaManager(222): Throttling exception for user=jenkins table=TestQuotaAdmin0 numWrites=0 numReads=1 numScans=0: number of read requests exceeded - wait 20sec, 0ms 2023-11-27 05:05:55,450 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41841] ipc.CallRunner(144): callId: 198 service: ClientService methodName: Get size: 114 connection: 172.31.14.131:47240 deadline: 1703938812828, exception=org.apache.hadoop.hbase.quotas.RpcThrottlingException: number of read requests exceeded - wait 20sec, 0ms 2023-11-27 05:05:55,955 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41841] quotas.RegionServerRpcQuotaManager(222): Throttling exception for user=jenkins table=TestQuotaAdmin0 numWrites=0 numReads=1 numScans=0: number of read requests exceeded - wait 20sec, 0ms 2023-11-27 05:05:55,955 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41841] ipc.CallRunner(144): callId: 200 service: ClientService methodName: Get size: 114 connection: 172.31.14.131:47240 deadline: 1703938812828, exception=org.apache.hadoop.hbase.quotas.RpcThrottlingException: number of read requests exceeded - wait 20sec, 0ms 2023-11-27 05:05:56,710 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41841] quotas.RegionServerRpcQuotaManager(222): 
Throttling exception for user=jenkins table=TestQuotaAdmin0 numWrites=0 numReads=1 numScans=0: number of read requests exceeded - wait 20sec, 0ms 2023-11-27 05:05:56,710 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41841] ipc.CallRunner(144): callId: 202 service: ClientService methodName: Get size: 114 connection: 172.31.14.131:47240 deadline: 1703938812828, exception=org.apache.hadoop.hbase.quotas.RpcThrottlingException: number of read requests exceeded - wait 20sec, 0ms 2023-11-27 05:05:57,974 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41841] quotas.RegionServerRpcQuotaManager(222): Throttling exception for user=jenkins table=TestQuotaAdmin0 numWrites=0 numReads=1 numScans=0: number of read requests exceeded - wait 20sec, 0ms 2023-11-27 05:05:57,974 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41841] ipc.CallRunner(144): callId: 204 service: ClientService methodName: Get size: 114 connection: 172.31.14.131:47240 deadline: 1703938812828, exception=org.apache.hadoop.hbase.quotas.RpcThrottlingException: number of read requests exceeded - wait 20sec, 0ms 2023-11-27 05:06:00,497 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41841] quotas.RegionServerRpcQuotaManager(222): Throttling exception for user=jenkins table=TestQuotaAdmin0 numWrites=0 numReads=1 numScans=0: number of read requests exceeded - wait 20sec, 0ms 2023-11-27 05:06:00,497 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41841] ipc.CallRunner(144): callId: 206 service: ClientService methodName: Get size: 114 connection: 172.31.14.131:47240 deadline: 1703938812828, exception=org.apache.hadoop.hbase.quotas.RpcThrottlingException: number of read requests exceeded - wait 20sec, 0ms 2023-11-27 05:06:05,523 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41841] quotas.RegionServerRpcQuotaManager(222): Throttling exception for user=jenkins table=TestQuotaAdmin0 numWrites=0 numReads=1 numScans=0: number of read requests exceeded - wait 20sec, 0ms 2023-11-27 05:06:05,523 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41841] ipc.CallRunner(144): callId: 208 service: ClientService methodName: Get size: 114 connection: 172.31.14.131:47240 deadline: 1703938812828, exception=org.apache.hadoop.hbase.quotas.RpcThrottlingException: number of read requests exceeded - wait 20sec, 0ms 2023-11-27 05:06:05,524 DEBUG [Listener at localhost/34689] client.RpcRetryingCallerImpl(129): Call exception, tries=6, retries=16, started=0 ms ago, cancelled=false, msg=org.apache.hadoop.hbase.quotas.RpcThrottlingException: number of read requests exceeded - wait 20sec, 0ms at org.apache.hadoop.hbase.quotas.RpcThrottlingException.throwThrottlingException(RpcThrottlingException.java:133) at org.apache.hadoop.hbase.quotas.RpcThrottlingException.throwNumReadRequestsExceeded(RpcThrottlingException.java:99) at org.apache.hadoop.hbase.quotas.TimeBasedLimiter.checkQuota(TimeBasedLimiter.java:172) at org.apache.hadoop.hbase.quotas.DefaultOperationQuota.checkQuota(DefaultOperationQuota.java:82) at org.apache.hadoop.hbase.quotas.RegionServerRpcQuotaManager.checkQuota(RegionServerRpcQuotaManager.java:220) at org.apache.hadoop.hbase.quotas.RegionServerRpcQuotaManager.checkQuota(RegionServerRpcQuotaManager.java:168) at org.apache.hadoop.hbase.regionserver.RSRpcServices.get(RSRpcServices.java:2532) at org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:44992) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at 
org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) , details=row 'row-3' on table 'TestQuotaAdmin0' at region=TestQuotaAdmin0,,1701061179762.44ac23936652c71f70e8746cf757ab6d., hostname=jenkins-hbase4.apache.org,41841,1701061176322, seqNum=2, see https://s.apache.org/timeout, exception=org.apache.hadoop.hbase.quotas.RpcThrottlingException: org.apache.hadoop.hbase.quotas.RpcThrottlingException: number of read requests exceeded - wait 20sec, 0ms at org.apache.hadoop.hbase.quotas.RpcThrottlingException.throwThrottlingException(RpcThrottlingException.java:133) at org.apache.hadoop.hbase.quotas.RpcThrottlingException.throwNumReadRequestsExceeded(RpcThrottlingException.java:99) at org.apache.hadoop.hbase.quotas.TimeBasedLimiter.checkQuota(TimeBasedLimiter.java:172) at org.apache.hadoop.hbase.quotas.DefaultOperationQuota.checkQuota(DefaultOperationQuota.java:82) at org.apache.hadoop.hbase.quotas.RegionServerRpcQuotaManager.checkQuota(RegionServerRpcQuotaManager.java:220) at org.apache.hadoop.hbase.quotas.RegionServerRpcQuotaManager.checkQuota(RegionServerRpcQuotaManager.java:168) at org.apache.hadoop.hbase.regionserver.RSRpcServices.get(RSRpcServices.java:2532) at org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:44992) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor43.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:276) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:261) at org.apache.hadoop.hbase.client.RegionServerCallable.call(RegionServerCallable.java:126) at org.apache.hadoop.hbase.client.RpcRetryingCallerImpl.callWithRetries(RpcRetryingCallerImpl.java:104) at org.apache.hadoop.hbase.client.HTable.get(HTable.java:370) at org.apache.hadoop.hbase.client.HTable.get(HTable.java:343) at org.apache.hadoop.hbase.quotas.ThrottleQuotaTestUtil.doGets(ThrottleQuotaTestUtil.java:95) at org.apache.hadoop.hbase.quotas.TestClusterScopeQuotaThrottle.testUserClusterScopeQuota(TestClusterScopeQuotaThrottle.java:179) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.quotas.RpcThrottlingException): org.apache.hadoop.hbase.quotas.RpcThrottlingException: number of read requests exceeded - wait 20sec, 0ms at org.apache.hadoop.hbase.quotas.RpcThrottlingException.throwThrottlingException(RpcThrottlingException.java:133) at org.apache.hadoop.hbase.quotas.RpcThrottlingException.throwNumReadRequestsExceeded(RpcThrottlingException.java:99) at org.apache.hadoop.hbase.quotas.TimeBasedLimiter.checkQuota(TimeBasedLimiter.java:172) at org.apache.hadoop.hbase.quotas.DefaultOperationQuota.checkQuota(DefaultOperationQuota.java:82) at org.apache.hadoop.hbase.quotas.RegionServerRpcQuotaManager.checkQuota(RegionServerRpcQuotaManager.java:220) at org.apache.hadoop.hbase.quotas.RegionServerRpcQuotaManager.checkQuota(RegionServerRpcQuotaManager.java:168) at org.apache.hadoop.hbase.regionserver.RSRpcServices.get(RSRpcServices.java:2532) at org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:44992) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:381) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:87) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:415) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:411) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:193) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:214) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-11-27 05:06:05,524 ERROR [Listener at localhost/34689] quotas.ThrottleQuotaTestUtil(100): get failed after nRetries=3 java.net.SocketTimeoutException: callTimeout=10000, callDuration=10075: org.apache.hadoop.hbase.quotas.RpcThrottlingException: number of read requests exceeded - wait 20sec, 0ms at org.apache.hadoop.hbase.quotas.RpcThrottlingException.throwThrottlingException(RpcThrottlingException.java:133) at org.apache.hadoop.hbase.quotas.RpcThrottlingException.throwNumReadRequestsExceeded(RpcThrottlingException.java:99) at org.apache.hadoop.hbase.quotas.TimeBasedLimiter.checkQuota(TimeBasedLimiter.java:172) at org.apache.hadoop.hbase.quotas.DefaultOperationQuota.checkQuota(DefaultOperationQuota.java:82) at org.apache.hadoop.hbase.quotas.RegionServerRpcQuotaManager.checkQuota(RegionServerRpcQuotaManager.java:220) at org.apache.hadoop.hbase.quotas.RegionServerRpcQuotaManager.checkQuota(RegionServerRpcQuotaManager.java:168) at org.apache.hadoop.hbase.regionserver.RSRpcServices.get(RSRpcServices.java:2532) at org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:44992) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) row 'row-3' on table 'TestQuotaAdmin0' at region=TestQuotaAdmin0,,1701061179762.44ac23936652c71f70e8746cf757ab6d., hostname=jenkins-hbase4.apache.org,41841,1701061176322, seqNum=2 at org.apache.hadoop.hbase.client.RpcRetryingCallerImpl.callWithRetries(RpcRetryingCallerImpl.java:156) at org.apache.hadoop.hbase.client.HTable.get(HTable.java:370) at org.apache.hadoop.hbase.client.HTable.get(HTable.java:343) at org.apache.hadoop.hbase.quotas.ThrottleQuotaTestUtil.doGets(ThrottleQuotaTestUtil.java:95) at org.apache.hadoop.hbase.quotas.TestClusterScopeQuotaThrottle.testUserClusterScopeQuota(TestClusterScopeQuotaThrottle.java:179) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.quotas.RpcThrottlingException: org.apache.hadoop.hbase.quotas.RpcThrottlingException: number of read requests exceeded - wait 20sec, 0ms at org.apache.hadoop.hbase.quotas.RpcThrottlingException.throwThrottlingException(RpcThrottlingException.java:133) at org.apache.hadoop.hbase.quotas.RpcThrottlingException.throwNumReadRequestsExceeded(RpcThrottlingException.java:99) at org.apache.hadoop.hbase.quotas.TimeBasedLimiter.checkQuota(TimeBasedLimiter.java:172) at org.apache.hadoop.hbase.quotas.DefaultOperationQuota.checkQuota(DefaultOperationQuota.java:82) at org.apache.hadoop.hbase.quotas.RegionServerRpcQuotaManager.checkQuota(RegionServerRpcQuotaManager.java:220) at org.apache.hadoop.hbase.quotas.RegionServerRpcQuotaManager.checkQuota(RegionServerRpcQuotaManager.java:168) at org.apache.hadoop.hbase.regionserver.RSRpcServices.get(RSRpcServices.java:2532) at org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:44992) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor43.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:276) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:261) at org.apache.hadoop.hbase.client.RegionServerCallable.call(RegionServerCallable.java:126) at org.apache.hadoop.hbase.client.RpcRetryingCallerImpl.callWithRetries(RpcRetryingCallerImpl.java:104) ... 
29 more Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.quotas.RpcThrottlingException): org.apache.hadoop.hbase.quotas.RpcThrottlingException: number of read requests exceeded - wait 20sec, 0ms at org.apache.hadoop.hbase.quotas.RpcThrottlingException.throwThrottlingException(RpcThrottlingException.java:133) at org.apache.hadoop.hbase.quotas.RpcThrottlingException.throwNumReadRequestsExceeded(RpcThrottlingException.java:99) at org.apache.hadoop.hbase.quotas.TimeBasedLimiter.checkQuota(TimeBasedLimiter.java:172) at org.apache.hadoop.hbase.quotas.DefaultOperationQuota.checkQuota(DefaultOperationQuota.java:82) at org.apache.hadoop.hbase.quotas.RegionServerRpcQuotaManager.checkQuota(RegionServerRpcQuotaManager.java:220) at org.apache.hadoop.hbase.quotas.RegionServerRpcQuotaManager.checkQuota(RegionServerRpcQuotaManager.java:168) at org.apache.hadoop.hbase.regionserver.RSRpcServices.get(RSRpcServices.java:2532) at org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:44992) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:381) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:87) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:415) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:411) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:193) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:214) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-11-27 05:06:05,600 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 44ac23936652c71f70e8746cf757ab6d 1/1 column families, dataSize=408 B heapSize=1.56 KB 2023-11-27 05:06:05,618 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=408 B at sequenceid=16 (bloomFilter=false), to=hdfs://localhost:41015/user/jenkins/test-data/88c861d8-82fa-b545-1769-68eb426e44d0/data/default/TestQuotaAdmin0/44ac23936652c71f70e8746cf757ab6d/.tmp/cf/0dfcfdc2ec834268aae732a3ba7f025a 2023-11-27 05:06:05,627 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:41015/user/jenkins/test-data/88c861d8-82fa-b545-1769-68eb426e44d0/data/default/TestQuotaAdmin0/44ac23936652c71f70e8746cf757ab6d/.tmp/cf/0dfcfdc2ec834268aae732a3ba7f025a as hdfs://localhost:41015/user/jenkins/test-data/88c861d8-82fa-b545-1769-68eb426e44d0/data/default/TestQuotaAdmin0/44ac23936652c71f70e8746cf757ab6d/cf/0dfcfdc2ec834268aae732a3ba7f025a 2023-11-27 05:06:05,633 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost:41015/user/jenkins/test-data/88c861d8-82fa-b545-1769-68eb426e44d0/data/default/TestQuotaAdmin0/44ac23936652c71f70e8746cf757ab6d/cf/0dfcfdc2ec834268aae732a3ba7f025a, entries=6, sequenceid=16, filesize=4.7 K 2023-11-27 05:06:05,634 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~408 B/408, heapSize ~1.55 KB/1584, currentSize=0 B/0 for 44ac23936652c71f70e8746cf757ab6d in 0ms, sequenceid=16, compaction requested=false 2023-11-27 05:06:05,634 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 44ac23936652c71f70e8746cf757ab6d: 2023-11-27 05:06:05,778 DEBUG [Listener at localhost/34689] quotas.ThrottleQuotaTestUtil(184): QuotaCache 2023-11-27 05:06:05,778 DEBUG [Listener at localhost/34689] quotas.ThrottleQuotaTestUtil(185): {default=QuotaState(ts=1703942402828 bypass), TestNs=QuotaState(ts=1703942402828 bypass)} 2023-11-27 05:06:05,778 DEBUG 
[Listener at localhost/34689] quotas.ThrottleQuotaTestUtil(186): {TestQuotaAdmin0=QuotaState(ts=1703942402828 bypass), TestNs:TestTable=QuotaState(ts=1703942402828 bypass), TestQuotaAdmin2=QuotaState(ts=1703942402828 bypass), TestQuotaAdmin1=QuotaState(ts=1703942402828 bypass)} 2023-11-27 05:06:05,778 DEBUG [Listener at localhost/34689] quotas.ThrottleQuotaTestUtil(187): {jenkins=UserQuotaState(ts=1703942402828 bypass)} 2023-11-27 05:06:05,778 DEBUG [Listener at localhost/34689] quotas.ThrottleQuotaTestUtil(188): {all=QuotaState(ts=1703942402828 bypass)} 2023-11-27 05:06:06,028 DEBUG [Listener at localhost/34689] quotas.ThrottleQuotaTestUtil(184): QuotaCache 2023-11-27 05:06:06,028 DEBUG [Listener at localhost/34689] quotas.ThrottleQuotaTestUtil(185): {default=QuotaState(ts=1703942402828 bypass), TestNs=QuotaState(ts=1703942402828 bypass)} 2023-11-27 05:06:06,028 DEBUG [Listener at localhost/34689] quotas.ThrottleQuotaTestUtil(186): {TestQuotaAdmin0=QuotaState(ts=1703942402828 bypass), TestNs:TestTable=QuotaState(ts=1703942402828 bypass), TestQuotaAdmin2=QuotaState(ts=1703942402828 bypass), TestQuotaAdmin1=QuotaState(ts=1703942402828 bypass)} 2023-11-27 05:06:06,028 DEBUG [Listener at localhost/34689] quotas.ThrottleQuotaTestUtil(187): {jenkins=UserQuotaState(ts=1703942402828 bypass)} 2023-11-27 05:06:06,028 DEBUG [Listener at localhost/34689] quotas.ThrottleQuotaTestUtil(188): {all=QuotaState(ts=1703942402828 bypass)} 2023-11-27 05:06:06,037 INFO [regionserver/jenkins-hbase4:0.Chore.1] regionserver.HRegionServer$PeriodicMemStoreFlusher(1921): MemstoreFlusherChore requesting flush of hbase:quota,,1701061178843.be5ef4f3dfb2c43b447798061e19f02f. because be5ef4f3dfb2c43b447798061e19f02f/q has an old edit so flush to free WALs after random delay 190838 ms 2023-11-27 05:06:06,040 INFO [Listener at localhost/34689] hbase.ResourceChecker(175): after: quotas.TestClusterScopeQuotaThrottle#testUserClusterScopeQuota Thread=294 (was 295), OpenFileDescriptor=621 (was 614) - OpenFileDescriptor LEAK? 
-, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=112 (was 158), ProcessCount=166 (was 167), AvailableMemoryMB=7293 (was 7293) 2023-11-27 05:06:06,051 INFO [Listener at localhost/34689] hbase.ResourceChecker(147): before: quotas.TestClusterScopeQuotaThrottle#testTableClusterScopeQuota Thread=294, OpenFileDescriptor=621, MaxFileDescriptor=60000, SystemLoadAverage=112, ProcessCount=166, AvailableMemoryMB=7293 2023-11-27 05:06:06,134 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2489): Flush column family: q of be5ef4f3dfb2c43b447798061e19f02f because time of oldest edit=1703938802828 is > 3600000 from now =1703946002828 2023-11-27 05:06:06,135 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2489): Flush column family: u of be5ef4f3dfb2c43b447798061e19f02f because time of oldest edit=1703938802828 is > 3600000 from now =1703946002828 2023-11-27 05:06:06,135 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing be5ef4f3dfb2c43b447798061e19f02f 2/2 column families, dataSize=114 B heapSize=848 B 2023-11-27 05:06:06,152 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=84 B at sequenceid=22 (bloomFilter=true), to=hdfs://localhost:41015/user/jenkins/test-data/88c861d8-82fa-b545-1769-68eb426e44d0/data/hbase/quota/be5ef4f3dfb2c43b447798061e19f02f/.tmp/q/9fdd0030e85d4346a4f4cd419940ba16 2023-11-27 05:06:06,158 INFO [MemStoreFlusher.0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 9fdd0030e85d4346a4f4cd419940ba16 2023-11-27 05:06:06,172 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=30 B at sequenceid=22 (bloomFilter=true), to=hdfs://localhost:41015/user/jenkins/test-data/88c861d8-82fa-b545-1769-68eb426e44d0/data/hbase/quota/be5ef4f3dfb2c43b447798061e19f02f/.tmp/u/e8a0817df518412687eb6645218de343 2023-11-27 05:06:06,178 INFO [MemStoreFlusher.0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for e8a0817df518412687eb6645218de343 2023-11-27 05:06:06,179 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:41015/user/jenkins/test-data/88c861d8-82fa-b545-1769-68eb426e44d0/data/hbase/quota/be5ef4f3dfb2c43b447798061e19f02f/.tmp/q/9fdd0030e85d4346a4f4cd419940ba16 as hdfs://localhost:41015/user/jenkins/test-data/88c861d8-82fa-b545-1769-68eb426e44d0/data/hbase/quota/be5ef4f3dfb2c43b447798061e19f02f/q/9fdd0030e85d4346a4f4cd419940ba16 2023-11-27 05:06:06,184 INFO [MemStoreFlusher.0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 9fdd0030e85d4346a4f4cd419940ba16 2023-11-27 05:06:06,184 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost:41015/user/jenkins/test-data/88c861d8-82fa-b545-1769-68eb426e44d0/data/hbase/quota/be5ef4f3dfb2c43b447798061e19f02f/q/9fdd0030e85d4346a4f4cd419940ba16, entries=2, sequenceid=22, filesize=5.0 K 2023-11-27 05:06:06,185 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:41015/user/jenkins/test-data/88c861d8-82fa-b545-1769-68eb426e44d0/data/hbase/quota/be5ef4f3dfb2c43b447798061e19f02f/.tmp/u/e8a0817df518412687eb6645218de343 as hdfs://localhost:41015/user/jenkins/test-data/88c861d8-82fa-b545-1769-68eb426e44d0/data/hbase/quota/be5ef4f3dfb2c43b447798061e19f02f/u/e8a0817df518412687eb6645218de343 2023-11-27 05:06:06,190 INFO [MemStoreFlusher.0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 
e8a0817df518412687eb6645218de343 2023-11-27 05:06:06,191 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost:41015/user/jenkins/test-data/88c861d8-82fa-b545-1769-68eb426e44d0/data/hbase/quota/be5ef4f3dfb2c43b447798061e19f02f/u/e8a0817df518412687eb6645218de343, entries=1, sequenceid=22, filesize=4.9 K 2023-11-27 05:06:06,191 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~114 B/114, heapSize ~816 B/816, currentSize=0 B/0 for be5ef4f3dfb2c43b447798061e19f02f in 0ms, sequenceid=22, compaction requested=false 2023-11-27 05:06:06,191 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for be5ef4f3dfb2c43b447798061e19f02f: 2023-11-27 05:06:06,306 DEBUG [Listener at localhost/34689] quotas.ThrottleQuotaTestUtil(184): QuotaCache 2023-11-27 05:06:06,307 DEBUG [Listener at localhost/34689] quotas.ThrottleQuotaTestUtil(185): {default=QuotaState(ts=1703946002828 bypass), TestNs=QuotaState(ts=1703946002828 bypass)} 2023-11-27 05:06:06,308 DEBUG [Listener at localhost/34689] quotas.ThrottleQuotaTestUtil(186): {TestNs:TestTable=QuotaState(ts=1703946002828 TimeBasedLimiter( readReqs=AverageIntervalRateLimiter(avail=10 limit=10 tunit=3600000))), TestQuotaAdmin1=QuotaState(ts=1703946002828 bypass)} 2023-11-27 05:06:06,308 DEBUG [Listener at localhost/34689] quotas.ThrottleQuotaTestUtil(187): {} 2023-11-27 05:06:06,308 DEBUG [Listener at localhost/34689] quotas.ThrottleQuotaTestUtil(188): {all=QuotaState(ts=1703946002828 bypass)} 2023-11-27 05:06:06,559 DEBUG [Listener at localhost/34689] quotas.ThrottleQuotaTestUtil(184): QuotaCache 2023-11-27 05:06:06,560 DEBUG [Listener at localhost/34689] quotas.ThrottleQuotaTestUtil(185): {default=QuotaState(ts=1703946002828 bypass), TestNs=QuotaState(ts=1703946002828 bypass)} 2023-11-27 05:06:06,561 DEBUG [Listener at localhost/34689] quotas.ThrottleQuotaTestUtil(186): {TestQuotaAdmin0=QuotaState(ts=1703946002828 bypass), TestNs:TestTable=QuotaState(ts=1703946002828 TimeBasedLimiter( readReqs=AverageIntervalRateLimiter(avail=10 limit=10 tunit=3600000))), TestQuotaAdmin2=QuotaState(ts=1703946002828 bypass)} 2023-11-27 05:06:06,561 DEBUG [Listener at localhost/34689] quotas.ThrottleQuotaTestUtil(187): {} 2023-11-27 05:06:06,562 DEBUG [Listener at localhost/34689] quotas.ThrottleQuotaTestUtil(188): {all=QuotaState(ts=1703946002828 bypass)} 2023-11-27 05:06:06,570 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41853] quotas.RegionServerRpcQuotaManager(222): Throttling exception for user=jenkins table=TestNs:TestTable numWrites=0 numReads=1 numScans=0: number of read requests exceeded - wait 6mins, 0ms 2023-11-27 05:06:06,570 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41853] ipc.CallRunner(144): callId: 223 service: ClientService methodName: Get size: 117 connection: 172.31.14.131:36452 deadline: 1703946012828, exception=org.apache.hadoop.hbase.quotas.RpcThrottlingException: number of read requests exceeded - wait 6mins, 0ms 2023-11-27 05:06:06,823 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41853] quotas.RegionServerRpcQuotaManager(222): Throttling exception for user=jenkins table=TestNs:TestTable numWrites=0 numReads=1 numScans=0: number of read requests exceeded - wait 6mins, 0ms 2023-11-27 05:06:06,823 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41853] ipc.CallRunner(144): callId: 225 service: ClientService methodName: Get size: 117 connection: 172.31.14.131:36452 deadline: 1703946012828, 
exception=org.apache.hadoop.hbase.quotas.RpcThrottlingException: number of read requests exceeded - wait 6mins, 0ms 2023-11-27 05:06:07,326 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41853] quotas.RegionServerRpcQuotaManager(222): Throttling exception for user=jenkins table=TestNs:TestTable numWrites=0 numReads=1 numScans=0: number of read requests exceeded - wait 6mins, 0ms 2023-11-27 05:06:07,326 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41853] ipc.CallRunner(144): callId: 227 service: ClientService methodName: Get size: 117 connection: 172.31.14.131:36452 deadline: 1703946012828, exception=org.apache.hadoop.hbase.quotas.RpcThrottlingException: number of read requests exceeded - wait 6mins, 0ms 2023-11-27 05:06:08,080 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41853] quotas.RegionServerRpcQuotaManager(222): Throttling exception for user=jenkins table=TestNs:TestTable numWrites=0 numReads=1 numScans=0: number of read requests exceeded - wait 6mins, 0ms 2023-11-27 05:06:08,080 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41853] ipc.CallRunner(144): callId: 229 service: ClientService methodName: Get size: 117 connection: 172.31.14.131:36452 deadline: 1703946012828, exception=org.apache.hadoop.hbase.quotas.RpcThrottlingException: number of read requests exceeded - wait 6mins, 0ms 2023-11-27 05:06:09,337 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41853] quotas.RegionServerRpcQuotaManager(222): Throttling exception for user=jenkins table=TestNs:TestTable numWrites=0 numReads=1 numScans=0: number of read requests exceeded - wait 6mins, 0ms 2023-11-27 05:06:09,337 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41853] ipc.CallRunner(144): callId: 231 service: ClientService methodName: Get size: 117 connection: 172.31.14.131:36452 deadline: 1703946012828, exception=org.apache.hadoop.hbase.quotas.RpcThrottlingException: number of read requests exceeded - wait 6mins, 0ms 2023-11-27 05:06:11,856 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41853] quotas.RegionServerRpcQuotaManager(222): Throttling exception for user=jenkins table=TestNs:TestTable numWrites=0 numReads=1 numScans=0: number of read requests exceeded - wait 6mins, 0ms 2023-11-27 05:06:11,856 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41853] ipc.CallRunner(144): callId: 233 service: ClientService methodName: Get size: 117 connection: 172.31.14.131:36452 deadline: 1703946012828, exception=org.apache.hadoop.hbase.quotas.RpcThrottlingException: number of read requests exceeded - wait 6mins, 0ms 2023-11-27 05:06:16,905 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41853] quotas.RegionServerRpcQuotaManager(222): Throttling exception for user=jenkins table=TestNs:TestTable numWrites=0 numReads=1 numScans=0: number of read requests exceeded - wait 6mins, 0ms 2023-11-27 05:06:16,905 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41853] ipc.CallRunner(144): callId: 235 service: ClientService methodName: Get size: 117 connection: 172.31.14.131:36452 deadline: 1703946012828, exception=org.apache.hadoop.hbase.quotas.RpcThrottlingException: number of read requests exceeded - wait 6mins, 0ms 2023-11-27 05:06:16,907 DEBUG [Listener at localhost/34689] client.RpcRetryingCallerImpl(129): Call exception, tries=6, retries=16, started=0 ms ago, cancelled=false, msg=org.apache.hadoop.hbase.quotas.RpcThrottlingException: number of read requests exceeded - wait 6mins, 0ms at 
org.apache.hadoop.hbase.quotas.RpcThrottlingException.throwThrottlingException(RpcThrottlingException.java:133) at org.apache.hadoop.hbase.quotas.RpcThrottlingException.throwNumReadRequestsExceeded(RpcThrottlingException.java:99) at org.apache.hadoop.hbase.quotas.TimeBasedLimiter.checkQuota(TimeBasedLimiter.java:172) at org.apache.hadoop.hbase.quotas.DefaultOperationQuota.checkQuota(DefaultOperationQuota.java:82) at org.apache.hadoop.hbase.quotas.RegionServerRpcQuotaManager.checkQuota(RegionServerRpcQuotaManager.java:220) at org.apache.hadoop.hbase.quotas.RegionServerRpcQuotaManager.checkQuota(RegionServerRpcQuotaManager.java:168) at org.apache.hadoop.hbase.regionserver.RSRpcServices.get(RSRpcServices.java:2532) at org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:44992) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) , details=row 'row-10' on table 'TestNs:TestTable' at region=TestNs:TestTable,1,1701061183127.7197a56a05fdba581e9677273ff1da17., hostname=jenkins-hbase4.apache.org,41853,1701061176279, seqNum=2, see https://s.apache.org/timeout, exception=org.apache.hadoop.hbase.quotas.RpcThrottlingException: org.apache.hadoop.hbase.quotas.RpcThrottlingException: number of read requests exceeded - wait 6mins, 0ms at org.apache.hadoop.hbase.quotas.RpcThrottlingException.throwThrottlingException(RpcThrottlingException.java:133) at org.apache.hadoop.hbase.quotas.RpcThrottlingException.throwNumReadRequestsExceeded(RpcThrottlingException.java:99) at org.apache.hadoop.hbase.quotas.TimeBasedLimiter.checkQuota(TimeBasedLimiter.java:172) at org.apache.hadoop.hbase.quotas.DefaultOperationQuota.checkQuota(DefaultOperationQuota.java:82) at org.apache.hadoop.hbase.quotas.RegionServerRpcQuotaManager.checkQuota(RegionServerRpcQuotaManager.java:220) at org.apache.hadoop.hbase.quotas.RegionServerRpcQuotaManager.checkQuota(RegionServerRpcQuotaManager.java:168) at org.apache.hadoop.hbase.regionserver.RSRpcServices.get(RSRpcServices.java:2532) at org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:44992) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor43.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:276) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:261) at org.apache.hadoop.hbase.client.RegionServerCallable.call(RegionServerCallable.java:126) at org.apache.hadoop.hbase.client.RpcRetryingCallerImpl.callWithRetries(RpcRetryingCallerImpl.java:104) at 
org.apache.hadoop.hbase.client.HTable.get(HTable.java:370) at org.apache.hadoop.hbase.client.HTable.get(HTable.java:343) at org.apache.hadoop.hbase.quotas.ThrottleQuotaTestUtil.doGets(ThrottleQuotaTestUtil.java:95) at org.apache.hadoop.hbase.quotas.TestClusterScopeQuotaThrottle.testTableClusterScopeQuota(TestClusterScopeQuotaThrottle.java:151) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.quotas.RpcThrottlingException): org.apache.hadoop.hbase.quotas.RpcThrottlingException: number of read requests exceeded - wait 6mins, 0ms at org.apache.hadoop.hbase.quotas.RpcThrottlingException.throwThrottlingException(RpcThrottlingException.java:133) at org.apache.hadoop.hbase.quotas.RpcThrottlingException.throwNumReadRequestsExceeded(RpcThrottlingException.java:99) at org.apache.hadoop.hbase.quotas.TimeBasedLimiter.checkQuota(TimeBasedLimiter.java:172) at org.apache.hadoop.hbase.quotas.DefaultOperationQuota.checkQuota(DefaultOperationQuota.java:82) at org.apache.hadoop.hbase.quotas.RegionServerRpcQuotaManager.checkQuota(RegionServerRpcQuotaManager.java:220) at org.apache.hadoop.hbase.quotas.RegionServerRpcQuotaManager.checkQuota(RegionServerRpcQuotaManager.java:168) at org.apache.hadoop.hbase.regionserver.RSRpcServices.get(RSRpcServices.java:2532) at org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:44992) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at 
org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:381) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:87) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:415) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:411) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:193) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:214) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at 
org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-11-27 05:06:16,907 ERROR [Listener at localhost/34689] quotas.ThrottleQuotaTestUtil(100): get failed after nRetries=10 java.net.SocketTimeoutException: callTimeout=10000, callDuration=10062: org.apache.hadoop.hbase.quotas.RpcThrottlingException: number of read requests exceeded - wait 6mins, 0ms at org.apache.hadoop.hbase.quotas.RpcThrottlingException.throwThrottlingException(RpcThrottlingException.java:133) at org.apache.hadoop.hbase.quotas.RpcThrottlingException.throwNumReadRequestsExceeded(RpcThrottlingException.java:99) at org.apache.hadoop.hbase.quotas.TimeBasedLimiter.checkQuota(TimeBasedLimiter.java:172) at org.apache.hadoop.hbase.quotas.DefaultOperationQuota.checkQuota(DefaultOperationQuota.java:82) at org.apache.hadoop.hbase.quotas.RegionServerRpcQuotaManager.checkQuota(RegionServerRpcQuotaManager.java:220) at org.apache.hadoop.hbase.quotas.RegionServerRpcQuotaManager.checkQuota(RegionServerRpcQuotaManager.java:168) at org.apache.hadoop.hbase.regionserver.RSRpcServices.get(RSRpcServices.java:2532) at org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:44992) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) row 'row-10' on table 'TestNs:TestTable' at region=TestNs:TestTable,1,1701061183127.7197a56a05fdba581e9677273ff1da17., hostname=jenkins-hbase4.apache.org,41853,1701061176279, seqNum=2 at org.apache.hadoop.hbase.client.RpcRetryingCallerImpl.callWithRetries(RpcRetryingCallerImpl.java:156) at org.apache.hadoop.hbase.client.HTable.get(HTable.java:370) at org.apache.hadoop.hbase.client.HTable.get(HTable.java:343) at org.apache.hadoop.hbase.quotas.ThrottleQuotaTestUtil.doGets(ThrottleQuotaTestUtil.java:95) at org.apache.hadoop.hbase.quotas.TestClusterScopeQuotaThrottle.testTableClusterScopeQuota(TestClusterScopeQuotaThrottle.java:151) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at 
org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.quotas.RpcThrottlingException: org.apache.hadoop.hbase.quotas.RpcThrottlingException: number of read requests exceeded - wait 6mins, 0ms at org.apache.hadoop.hbase.quotas.RpcThrottlingException.throwThrottlingException(RpcThrottlingException.java:133) at org.apache.hadoop.hbase.quotas.RpcThrottlingException.throwNumReadRequestsExceeded(RpcThrottlingException.java:99) at org.apache.hadoop.hbase.quotas.TimeBasedLimiter.checkQuota(TimeBasedLimiter.java:172) at org.apache.hadoop.hbase.quotas.DefaultOperationQuota.checkQuota(DefaultOperationQuota.java:82) at org.apache.hadoop.hbase.quotas.RegionServerRpcQuotaManager.checkQuota(RegionServerRpcQuotaManager.java:220) at org.apache.hadoop.hbase.quotas.RegionServerRpcQuotaManager.checkQuota(RegionServerRpcQuotaManager.java:168) at org.apache.hadoop.hbase.regionserver.RSRpcServices.get(RSRpcServices.java:2532) at org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:44992) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor43.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:276) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:261) at org.apache.hadoop.hbase.client.RegionServerCallable.call(RegionServerCallable.java:126) at org.apache.hadoop.hbase.client.RpcRetryingCallerImpl.callWithRetries(RpcRetryingCallerImpl.java:104) ... 
29 more Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.quotas.RpcThrottlingException): org.apache.hadoop.hbase.quotas.RpcThrottlingException: number of read requests exceeded - wait 6mins, 0ms at org.apache.hadoop.hbase.quotas.RpcThrottlingException.throwThrottlingException(RpcThrottlingException.java:133) at org.apache.hadoop.hbase.quotas.RpcThrottlingException.throwNumReadRequestsExceeded(RpcThrottlingException.java:99) at org.apache.hadoop.hbase.quotas.TimeBasedLimiter.checkQuota(TimeBasedLimiter.java:172) at org.apache.hadoop.hbase.quotas.DefaultOperationQuota.checkQuota(DefaultOperationQuota.java:82) at org.apache.hadoop.hbase.quotas.RegionServerRpcQuotaManager.checkQuota(RegionServerRpcQuotaManager.java:220) at org.apache.hadoop.hbase.quotas.RegionServerRpcQuotaManager.checkQuota(RegionServerRpcQuotaManager.java:168) at org.apache.hadoop.hbase.regionserver.RSRpcServices.get(RSRpcServices.java:2532) at org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:44992) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:381) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:87) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:415) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:411) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:193) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:214) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-11-27 05:06:16,908 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41853] quotas.RegionServerRpcQuotaManager(222): Throttling exception for user=jenkins table=TestNs:TestTable numWrites=0 numReads=1 numScans=0: number of read requests exceeded - wait 6mins, 0ms 2023-11-27 05:06:16,908 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41853] ipc.CallRunner(144): callId: 236 service: ClientService methodName: Get size: 116 connection: 172.31.14.131:36452 deadline: 1703946012828, exception=org.apache.hadoop.hbase.quotas.RpcThrottlingException: number of read requests exceeded - wait 6mins, 0ms 2023-11-27 05:06:17,164 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41853] quotas.RegionServerRpcQuotaManager(222): Throttling exception for user=jenkins table=TestNs:TestTable numWrites=0 numReads=1 numScans=0: number of read requests exceeded - wait 6mins, 0ms 2023-11-27 05:06:17,164 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41853] ipc.CallRunner(144): callId: 238 service: ClientService methodName: Get size: 116 connection: 172.31.14.131:36452 deadline: 1703946012828, exception=org.apache.hadoop.hbase.quotas.RpcThrottlingException: number of read requests exceeded - wait 6mins, 0ms 2023-11-27 05:06:17,668 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41853] quotas.RegionServerRpcQuotaManager(222): Throttling exception for user=jenkins table=TestNs:TestTable numWrites=0 numReads=1 numScans=0: number of read requests exceeded - wait 6mins, 0ms 2023-11-27 05:06:17,668 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41853] ipc.CallRunner(144): callId: 240 service: ClientService methodName: Get size: 116 connection: 172.31.14.131:36452 deadline: 1703946012828, exception=org.apache.hadoop.hbase.quotas.RpcThrottlingException: number of read requests exceeded - wait 6mins, 0ms 2023-11-27 05:06:18,426 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41853] quotas.RegionServerRpcQuotaManager(222): 
Throttling exception for user=jenkins table=TestNs:TestTable numWrites=0 numReads=1 numScans=0: number of read requests exceeded - wait 6mins, 0ms 2023-11-27 05:06:18,426 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41853] ipc.CallRunner(144): callId: 242 service: ClientService methodName: Get size: 116 connection: 172.31.14.131:36452 deadline: 1703946012828, exception=org.apache.hadoop.hbase.quotas.RpcThrottlingException: number of read requests exceeded - wait 6mins, 0ms 2023-11-27 05:06:19,684 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41853] quotas.RegionServerRpcQuotaManager(222): Throttling exception for user=jenkins table=TestNs:TestTable numWrites=0 numReads=1 numScans=0: number of read requests exceeded - wait 6mins, 0ms 2023-11-27 05:06:19,684 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41853] ipc.CallRunner(144): callId: 244 service: ClientService methodName: Get size: 116 connection: 172.31.14.131:36452 deadline: 1703946012828, exception=org.apache.hadoop.hbase.quotas.RpcThrottlingException: number of read requests exceeded - wait 6mins, 0ms 2023-11-27 05:06:22,210 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41853] quotas.RegionServerRpcQuotaManager(222): Throttling exception for user=jenkins table=TestNs:TestTable numWrites=0 numReads=1 numScans=0: number of read requests exceeded - wait 6mins, 0ms 2023-11-27 05:06:22,210 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41853] ipc.CallRunner(144): callId: 246 service: ClientService methodName: Get size: 116 connection: 172.31.14.131:36452 deadline: 1703946012828, exception=org.apache.hadoop.hbase.quotas.RpcThrottlingException: number of read requests exceeded - wait 6mins, 0ms 2023-11-27 05:06:27,261 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41853] quotas.RegionServerRpcQuotaManager(222): Throttling exception for user=jenkins table=TestNs:TestTable numWrites=0 numReads=1 numScans=0: number of read requests exceeded - wait 6mins, 0ms 2023-11-27 05:06:27,261 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41853] ipc.CallRunner(144): callId: 248 service: ClientService methodName: Get size: 116 connection: 172.31.14.131:36452 deadline: 1703946012828, exception=org.apache.hadoop.hbase.quotas.RpcThrottlingException: number of read requests exceeded - wait 6mins, 0ms 2023-11-27 05:06:27,263 DEBUG [Listener at localhost/34689] client.RpcRetryingCallerImpl(129): Call exception, tries=6, retries=16, started=0 ms ago, cancelled=false, msg=org.apache.hadoop.hbase.quotas.RpcThrottlingException: number of read requests exceeded - wait 6mins, 0ms at org.apache.hadoop.hbase.quotas.RpcThrottlingException.throwThrottlingException(RpcThrottlingException.java:133) at org.apache.hadoop.hbase.quotas.RpcThrottlingException.throwNumReadRequestsExceeded(RpcThrottlingException.java:99) at org.apache.hadoop.hbase.quotas.TimeBasedLimiter.checkQuota(TimeBasedLimiter.java:172) at org.apache.hadoop.hbase.quotas.DefaultOperationQuota.checkQuota(DefaultOperationQuota.java:82) at org.apache.hadoop.hbase.quotas.RegionServerRpcQuotaManager.checkQuota(RegionServerRpcQuotaManager.java:220) at org.apache.hadoop.hbase.quotas.RegionServerRpcQuotaManager.checkQuota(RegionServerRpcQuotaManager.java:168) at org.apache.hadoop.hbase.regionserver.RSRpcServices.get(RSRpcServices.java:2532) at org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:44992) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at 
org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) , details=row 'row-0' on table 'TestNs:TestTable' at region=TestNs:TestTable,1,1701061183127.7197a56a05fdba581e9677273ff1da17., hostname=jenkins-hbase4.apache.org,41853,1701061176279, seqNum=2, see https://s.apache.org/timeout, exception=org.apache.hadoop.hbase.quotas.RpcThrottlingException: org.apache.hadoop.hbase.quotas.RpcThrottlingException: number of read requests exceeded - wait 6mins, 0ms at org.apache.hadoop.hbase.quotas.RpcThrottlingException.throwThrottlingException(RpcThrottlingException.java:133) at org.apache.hadoop.hbase.quotas.RpcThrottlingException.throwNumReadRequestsExceeded(RpcThrottlingException.java:99) at org.apache.hadoop.hbase.quotas.TimeBasedLimiter.checkQuota(TimeBasedLimiter.java:172) at org.apache.hadoop.hbase.quotas.DefaultOperationQuota.checkQuota(DefaultOperationQuota.java:82) at org.apache.hadoop.hbase.quotas.RegionServerRpcQuotaManager.checkQuota(RegionServerRpcQuotaManager.java:220) at org.apache.hadoop.hbase.quotas.RegionServerRpcQuotaManager.checkQuota(RegionServerRpcQuotaManager.java:168) at org.apache.hadoop.hbase.regionserver.RSRpcServices.get(RSRpcServices.java:2532) at org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:44992) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor43.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:276) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:261) at org.apache.hadoop.hbase.client.RegionServerCallable.call(RegionServerCallable.java:126) at org.apache.hadoop.hbase.client.RpcRetryingCallerImpl.callWithRetries(RpcRetryingCallerImpl.java:104) at org.apache.hadoop.hbase.client.HTable.get(HTable.java:370) at org.apache.hadoop.hbase.client.HTable.get(HTable.java:343) at org.apache.hadoop.hbase.quotas.ThrottleQuotaTestUtil.doGets(ThrottleQuotaTestUtil.java:95) at org.apache.hadoop.hbase.quotas.TestClusterScopeQuotaThrottle.testTableClusterScopeQuota(TestClusterScopeQuotaThrottle.java:151) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.quotas.RpcThrottlingException): org.apache.hadoop.hbase.quotas.RpcThrottlingException: number of read requests exceeded - wait 6mins, 0ms at org.apache.hadoop.hbase.quotas.RpcThrottlingException.throwThrottlingException(RpcThrottlingException.java:133) at org.apache.hadoop.hbase.quotas.RpcThrottlingException.throwNumReadRequestsExceeded(RpcThrottlingException.java:99) at org.apache.hadoop.hbase.quotas.TimeBasedLimiter.checkQuota(TimeBasedLimiter.java:172) at org.apache.hadoop.hbase.quotas.DefaultOperationQuota.checkQuota(DefaultOperationQuota.java:82) at org.apache.hadoop.hbase.quotas.RegionServerRpcQuotaManager.checkQuota(RegionServerRpcQuotaManager.java:220) at org.apache.hadoop.hbase.quotas.RegionServerRpcQuotaManager.checkQuota(RegionServerRpcQuotaManager.java:168) at org.apache.hadoop.hbase.regionserver.RSRpcServices.get(RSRpcServices.java:2532) at org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:44992) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:381) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:87) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:415) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:411) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:193) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:214) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-11-27 05:06:27,263 ERROR [Listener at localhost/34689] quotas.ThrottleQuotaTestUtil(100): get failed after nRetries=0 java.net.SocketTimeoutException: callTimeout=10000, callDuration=10073: org.apache.hadoop.hbase.quotas.RpcThrottlingException: number of read requests exceeded - wait 6mins, 0ms at org.apache.hadoop.hbase.quotas.RpcThrottlingException.throwThrottlingException(RpcThrottlingException.java:133) at org.apache.hadoop.hbase.quotas.RpcThrottlingException.throwNumReadRequestsExceeded(RpcThrottlingException.java:99) at org.apache.hadoop.hbase.quotas.TimeBasedLimiter.checkQuota(TimeBasedLimiter.java:172) at org.apache.hadoop.hbase.quotas.DefaultOperationQuota.checkQuota(DefaultOperationQuota.java:82) at org.apache.hadoop.hbase.quotas.RegionServerRpcQuotaManager.checkQuota(RegionServerRpcQuotaManager.java:220) at org.apache.hadoop.hbase.quotas.RegionServerRpcQuotaManager.checkQuota(RegionServerRpcQuotaManager.java:168) at org.apache.hadoop.hbase.regionserver.RSRpcServices.get(RSRpcServices.java:2532) at org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:44992) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) row 'row-0' on table 'TestNs:TestTable' at region=TestNs:TestTable,1,1701061183127.7197a56a05fdba581e9677273ff1da17., hostname=jenkins-hbase4.apache.org,41853,1701061176279, seqNum=2 at org.apache.hadoop.hbase.client.RpcRetryingCallerImpl.callWithRetries(RpcRetryingCallerImpl.java:156) at org.apache.hadoop.hbase.client.HTable.get(HTable.java:370) at org.apache.hadoop.hbase.client.HTable.get(HTable.java:343) at org.apache.hadoop.hbase.quotas.ThrottleQuotaTestUtil.doGets(ThrottleQuotaTestUtil.java:95) at org.apache.hadoop.hbase.quotas.TestClusterScopeQuotaThrottle.testTableClusterScopeQuota(TestClusterScopeQuotaThrottle.java:151) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.quotas.RpcThrottlingException: org.apache.hadoop.hbase.quotas.RpcThrottlingException: number of read requests exceeded - wait 6mins, 0ms at org.apache.hadoop.hbase.quotas.RpcThrottlingException.throwThrottlingException(RpcThrottlingException.java:133) at org.apache.hadoop.hbase.quotas.RpcThrottlingException.throwNumReadRequestsExceeded(RpcThrottlingException.java:99) at org.apache.hadoop.hbase.quotas.TimeBasedLimiter.checkQuota(TimeBasedLimiter.java:172) at org.apache.hadoop.hbase.quotas.DefaultOperationQuota.checkQuota(DefaultOperationQuota.java:82) at org.apache.hadoop.hbase.quotas.RegionServerRpcQuotaManager.checkQuota(RegionServerRpcQuotaManager.java:220) at org.apache.hadoop.hbase.quotas.RegionServerRpcQuotaManager.checkQuota(RegionServerRpcQuotaManager.java:168) at org.apache.hadoop.hbase.regionserver.RSRpcServices.get(RSRpcServices.java:2532) at org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:44992) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor43.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:276) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:261) at org.apache.hadoop.hbase.client.RegionServerCallable.call(RegionServerCallable.java:126) at org.apache.hadoop.hbase.client.RpcRetryingCallerImpl.callWithRetries(RpcRetryingCallerImpl.java:104) ... 
29 more Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.quotas.RpcThrottlingException): org.apache.hadoop.hbase.quotas.RpcThrottlingException: number of read requests exceeded - wait 6mins, 0ms at org.apache.hadoop.hbase.quotas.RpcThrottlingException.throwThrottlingException(RpcThrottlingException.java:133) at org.apache.hadoop.hbase.quotas.RpcThrottlingException.throwNumReadRequestsExceeded(RpcThrottlingException.java:99) at org.apache.hadoop.hbase.quotas.TimeBasedLimiter.checkQuota(TimeBasedLimiter.java:172) at org.apache.hadoop.hbase.quotas.DefaultOperationQuota.checkQuota(DefaultOperationQuota.java:82) at org.apache.hadoop.hbase.quotas.RegionServerRpcQuotaManager.checkQuota(RegionServerRpcQuotaManager.java:220) at org.apache.hadoop.hbase.quotas.RegionServerRpcQuotaManager.checkQuota(RegionServerRpcQuotaManager.java:168) at org.apache.hadoop.hbase.regionserver.RSRpcServices.get(RSRpcServices.java:2532) at org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:44992) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:381) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:87) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:415) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:411) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:193) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:214) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-11-27 05:06:27,517 DEBUG [Listener at localhost/34689] quotas.ThrottleQuotaTestUtil(184): QuotaCache 2023-11-27 05:06:27,517 DEBUG [Listener at localhost/34689] quotas.ThrottleQuotaTestUtil(185): {default=QuotaState(ts=1703949602828 bypass), TestNs=QuotaState(ts=1703949602828 bypass)} 2023-11-27 05:06:27,517 DEBUG [Listener at localhost/34689] quotas.ThrottleQuotaTestUtil(186): {TestNs:TestTable=QuotaState(ts=1703949602828 bypass), TestQuotaAdmin1=QuotaState(ts=1703949602828 bypass)} 2023-11-27 05:06:27,517 DEBUG [Listener at localhost/34689] quotas.ThrottleQuotaTestUtil(187): {jenkins=UserQuotaState(ts=1703949602828 bypass)} 2023-11-27 05:06:27,517 DEBUG [Listener at localhost/34689] quotas.ThrottleQuotaTestUtil(188): {all=QuotaState(ts=1703949602828 bypass)} 2023-11-27 05:06:27,767 DEBUG [Listener at localhost/34689] quotas.ThrottleQuotaTestUtil(184): QuotaCache 2023-11-27 05:06:27,767 DEBUG [Listener at localhost/34689] quotas.ThrottleQuotaTestUtil(185): {default=QuotaState(ts=1703949602828 bypass), TestNs=QuotaState(ts=1703949602828 bypass)} 2023-11-27 05:06:27,768 DEBUG [Listener at localhost/34689] quotas.ThrottleQuotaTestUtil(186): {TestQuotaAdmin0=QuotaState(ts=1703949602828 bypass), TestNs:TestTable=QuotaState(ts=1703949602828 bypass), TestQuotaAdmin2=QuotaState(ts=1703949602828 bypass)} 2023-11-27 05:06:27,768 DEBUG [Listener at localhost/34689] quotas.ThrottleQuotaTestUtil(187): {} 2023-11-27 05:06:27,768 DEBUG [Listener at localhost/34689] quotas.ThrottleQuotaTestUtil(188): {all=QuotaState(ts=1703949602828 bypass)} 2023-11-27 05:06:27,779 INFO [Listener at localhost/34689] hbase.ResourceChecker(175): after: quotas.TestClusterScopeQuotaThrottle#testTableClusterScopeQuota Thread=291 (was 294), OpenFileDescriptor=609 (was 621), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=81 (was 112), ProcessCount=166 (was 166), AvailableMemoryMB=7294 (was 7293) - AvailableMemoryMB LEAK? 
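Note on the throttling recorded above for testTableClusterScopeQuota: the Gets are rejected by the region server's RPC quota path (RegionServerRpcQuotaManager -> TimeBasedLimiter) with "number of read requests exceeded". For reference only, the following is a minimal sketch, not taken from this test, of how a read-number throttle of this kind could be configured through the HBase quota API; the table name matches the log, but the limit value and the standalone class are illustrative assumptions.

import java.util.concurrent.TimeUnit;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.quotas.QuotaSettingsFactory;
import org.apache.hadoop.hbase.quotas.ThrottleType;

public class SetReadThrottleSketch {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
         Admin admin = conn.getAdmin()) {
      // Throttle the table to a handful of read requests per minute, the same order of
      // magnitude as the AverageIntervalRateLimiter limits printed in the QuotaCache dumps.
      // The limit of 6 per minute is an assumption for illustration, not this test's setting.
      admin.setQuota(QuotaSettingsFactory.throttleTable(
          TableName.valueOf("TestNs:TestTable"), ThrottleType.READ_NUMBER, 6, TimeUnit.MINUTES));
      // TestClusterScopeQuotaThrottle exercises cluster-scope quotas, which additionally
      // involve a QuotaScope; that detail is left out of this minimal sketch.
    }
  }
}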
- 2023-11-27 05:06:27,791 INFO [Listener at localhost/34689] hbase.ResourceChecker(147): before: quotas.TestClusterScopeQuotaThrottle#testNamespaceClusterScopeQuota Thread=291, OpenFileDescriptor=609, MaxFileDescriptor=60000, SystemLoadAverage=81, ProcessCount=166, AvailableMemoryMB=7294 2023-11-27 05:06:28,048 DEBUG [Listener at localhost/34689] quotas.ThrottleQuotaTestUtil(184): QuotaCache 2023-11-27 05:06:28,050 DEBUG [Listener at localhost/34689] quotas.ThrottleQuotaTestUtil(185): {default=QuotaState(ts=1703953202828 TimeBasedLimiter( writeReqs=AverageIntervalRateLimiter(avail=5 limit=5 tunit=60000) readReqs=AverageIntervalRateLimiter(avail=6 limit=6 tunit=60000))), TestNs=QuotaState(ts=1703953202828 bypass)} 2023-11-27 05:06:28,051 DEBUG [Listener at localhost/34689] quotas.ThrottleQuotaTestUtil(186): {TestQuotaAdmin0=QuotaState(ts=0 bypass), TestNs:TestTable=QuotaState(ts=1703953202828 bypass), TestQuotaAdmin1=QuotaState(ts=1703953202828 bypass)} 2023-11-27 05:06:28,051 DEBUG [Listener at localhost/34689] quotas.ThrottleQuotaTestUtil(187): {} 2023-11-27 05:06:28,051 DEBUG [Listener at localhost/34689] quotas.ThrottleQuotaTestUtil(188): {all=QuotaState(ts=1703953202828 bypass)} 2023-11-27 05:06:28,062 INFO [regionserver/jenkins-hbase4:0.Chore.2] regionserver.HRegionServer$PeriodicMemStoreFlusher(1921): MemstoreFlusherChore requesting flush of hbase:quota,,1701061178843.be5ef4f3dfb2c43b447798061e19f02f. because be5ef4f3dfb2c43b447798061e19f02f/q has an old edit so flush to free WALs after random delay 225666 ms 2023-11-27 05:06:28,302 DEBUG [Listener at localhost/34689] quotas.ThrottleQuotaTestUtil(184): QuotaCache 2023-11-27 05:06:28,304 DEBUG [Listener at localhost/34689] quotas.ThrottleQuotaTestUtil(185): {default=QuotaState(ts=1703953202828 TimeBasedLimiter( writeReqs=AverageIntervalRateLimiter(avail=5 limit=5 tunit=60000) readReqs=AverageIntervalRateLimiter(avail=6 limit=6 tunit=60000))), TestNs=QuotaState(ts=1703953202828 bypass)} 2023-11-27 05:06:28,304 DEBUG [Listener at localhost/34689] quotas.ThrottleQuotaTestUtil(186): {TestQuotaAdmin0=QuotaState(ts=1703953202828 bypass), TestNs:TestTable=QuotaState(ts=1703953202828 bypass), TestQuotaAdmin2=QuotaState(ts=1703953202828 bypass)} 2023-11-27 05:06:28,304 DEBUG [Listener at localhost/34689] quotas.ThrottleQuotaTestUtil(187): {} 2023-11-27 05:06:28,304 DEBUG [Listener at localhost/34689] quotas.ThrottleQuotaTestUtil(188): {all=QuotaState(ts=1703953202828 bypass)} 2023-11-27 05:06:28,315 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41841] quotas.RegionServerRpcQuotaManager(222): Throttling exception for user=jenkins table=TestQuotaAdmin0 numWrites=1 numReads=0 numScans=0: number of write requests exceeded - wait 12sec, 0ms 2023-11-27 05:06:28,315 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41841] ipc.CallRunner(144): callId: 260 service: ClientService methodName: Mutate size: 142 connection: 172.31.14.131:47240 deadline: 1703953212828, exception=org.apache.hadoop.hbase.quotas.RpcThrottlingException: number of write requests exceeded - wait 12sec, 0ms 2023-11-27 05:06:28,568 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41841] quotas.RegionServerRpcQuotaManager(222): Throttling exception for user=jenkins table=TestQuotaAdmin0 numWrites=1 numReads=0 numScans=0: number of write requests exceeded - wait 12sec, 0ms 2023-11-27 05:06:28,568 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41841] ipc.CallRunner(144): callId: 262 service: ClientService methodName: Mutate size: 142 connection: 
172.31.14.131:47240 deadline: 1703953212828, exception=org.apache.hadoop.hbase.quotas.RpcThrottlingException: number of write requests exceeded - wait 12sec, 0ms 2023-11-27 05:06:29,072 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41841] quotas.RegionServerRpcQuotaManager(222): Throttling exception for user=jenkins table=TestQuotaAdmin0 numWrites=1 numReads=0 numScans=0: number of write requests exceeded - wait 12sec, 0ms 2023-11-27 05:06:29,072 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41841] ipc.CallRunner(144): callId: 264 service: ClientService methodName: Mutate size: 142 connection: 172.31.14.131:47240 deadline: 1703953212828, exception=org.apache.hadoop.hbase.quotas.RpcThrottlingException: number of write requests exceeded - wait 12sec, 0ms 2023-11-27 05:06:29,824 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41841] quotas.RegionServerRpcQuotaManager(222): Throttling exception for user=jenkins table=TestQuotaAdmin0 numWrites=1 numReads=0 numScans=0: number of write requests exceeded - wait 12sec, 0ms 2023-11-27 05:06:29,824 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41841] ipc.CallRunner(144): callId: 266 service: ClientService methodName: Mutate size: 142 connection: 172.31.14.131:47240 deadline: 1703953212828, exception=org.apache.hadoop.hbase.quotas.RpcThrottlingException: number of write requests exceeded - wait 12sec, 0ms 2023-11-27 05:06:31,085 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41841] quotas.RegionServerRpcQuotaManager(222): Throttling exception for user=jenkins table=TestQuotaAdmin0 numWrites=1 numReads=0 numScans=0: number of write requests exceeded - wait 12sec, 0ms 2023-11-27 05:06:31,085 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41841] ipc.CallRunner(144): callId: 268 service: ClientService methodName: Mutate size: 142 connection: 172.31.14.131:47240 deadline: 1703953212828, exception=org.apache.hadoop.hbase.quotas.RpcThrottlingException: number of write requests exceeded - wait 12sec, 0ms 2023-11-27 05:06:33,611 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41841] quotas.RegionServerRpcQuotaManager(222): Throttling exception for user=jenkins table=TestQuotaAdmin0 numWrites=1 numReads=0 numScans=0: number of write requests exceeded - wait 12sec, 0ms 2023-11-27 05:06:33,611 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41841] ipc.CallRunner(144): callId: 270 service: ClientService methodName: Mutate size: 142 connection: 172.31.14.131:47240 deadline: 1703953212828, exception=org.apache.hadoop.hbase.quotas.RpcThrottlingException: number of write requests exceeded - wait 12sec, 0ms 2023-11-27 05:06:38,614 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41841] quotas.RegionServerRpcQuotaManager(222): Throttling exception for user=jenkins table=TestQuotaAdmin0 numWrites=1 numReads=0 numScans=0: number of write requests exceeded - wait 12sec, 0ms 2023-11-27 05:06:38,615 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41841] ipc.CallRunner(144): callId: 272 service: ClientService methodName: Mutate size: 142 connection: 172.31.14.131:47240 deadline: 1703953212828, exception=org.apache.hadoop.hbase.quotas.RpcThrottlingException: number of write requests exceeded - wait 12sec, 0ms 2023-11-27 05:06:38,616 DEBUG [Listener at localhost/34689] client.RpcRetryingCallerImpl(129): Call exception, tries=6, retries=16, started=0 ms ago, cancelled=false, msg=org.apache.hadoop.hbase.quotas.RpcThrottlingException: number of write requests exceeded - wait 
12sec, 0ms at org.apache.hadoop.hbase.quotas.RpcThrottlingException.throwThrottlingException(RpcThrottlingException.java:133) at org.apache.hadoop.hbase.quotas.RpcThrottlingException.throwNumWriteRequestsExceeded(RpcThrottlingException.java:104) at org.apache.hadoop.hbase.quotas.TimeBasedLimiter.checkQuota(TimeBasedLimiter.java:158) at org.apache.hadoop.hbase.quotas.DefaultOperationQuota.checkQuota(DefaultOperationQuota.java:82) at org.apache.hadoop.hbase.quotas.RegionServerRpcQuotaManager.checkQuota(RegionServerRpcQuotaManager.java:220) at org.apache.hadoop.hbase.quotas.RegionServerRpcQuotaManager.checkQuota(RegionServerRpcQuotaManager.java:170) at org.apache.hadoop.hbase.regionserver.RSRpcServices.mutate(RSRpcServices.java:2942) at org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:44994) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) , details=row 'row-5' on table 'TestQuotaAdmin0' at region=TestQuotaAdmin0,,1701061179762.44ac23936652c71f70e8746cf757ab6d., hostname=jenkins-hbase4.apache.org,41841,1701061176322, seqNum=2, see https://s.apache.org/timeout, exception=org.apache.hadoop.hbase.quotas.RpcThrottlingException: org.apache.hadoop.hbase.quotas.RpcThrottlingException: number of write requests exceeded - wait 12sec, 0ms at org.apache.hadoop.hbase.quotas.RpcThrottlingException.throwThrottlingException(RpcThrottlingException.java:133) at org.apache.hadoop.hbase.quotas.RpcThrottlingException.throwNumWriteRequestsExceeded(RpcThrottlingException.java:104) at org.apache.hadoop.hbase.quotas.TimeBasedLimiter.checkQuota(TimeBasedLimiter.java:158) at org.apache.hadoop.hbase.quotas.DefaultOperationQuota.checkQuota(DefaultOperationQuota.java:82) at org.apache.hadoop.hbase.quotas.RegionServerRpcQuotaManager.checkQuota(RegionServerRpcQuotaManager.java:220) at org.apache.hadoop.hbase.quotas.RegionServerRpcQuotaManager.checkQuota(RegionServerRpcQuotaManager.java:170) at org.apache.hadoop.hbase.regionserver.RSRpcServices.mutate(RSRpcServices.java:2942) at org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:44994) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor43.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:276) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:261) at org.apache.hadoop.hbase.client.RegionServerCallable.call(RegionServerCallable.java:126) at org.apache.hadoop.hbase.client.RpcRetryingCallerImpl.callWithRetries(RpcRetryingCallerImpl.java:104) 
at org.apache.hadoop.hbase.client.HTable.put(HTable.java:512) at org.apache.hadoop.hbase.quotas.ThrottleQuotaTestUtil.doPuts(ThrottleQuotaTestUtil.java:71) at org.apache.hadoop.hbase.quotas.ThrottleQuotaTestUtil.doPuts(ThrottleQuotaTestUtil.java:54) at org.apache.hadoop.hbase.quotas.TestClusterScopeQuotaThrottle.testNamespaceClusterScopeQuota(TestClusterScopeQuotaThrottle.java:128) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.quotas.RpcThrottlingException): org.apache.hadoop.hbase.quotas.RpcThrottlingException: number of write requests exceeded - wait 12sec, 0ms at org.apache.hadoop.hbase.quotas.RpcThrottlingException.throwThrottlingException(RpcThrottlingException.java:133) at org.apache.hadoop.hbase.quotas.RpcThrottlingException.throwNumWriteRequestsExceeded(RpcThrottlingException.java:104) at org.apache.hadoop.hbase.quotas.TimeBasedLimiter.checkQuota(TimeBasedLimiter.java:158) at org.apache.hadoop.hbase.quotas.DefaultOperationQuota.checkQuota(DefaultOperationQuota.java:82) at org.apache.hadoop.hbase.quotas.RegionServerRpcQuotaManager.checkQuota(RegionServerRpcQuotaManager.java:220) at org.apache.hadoop.hbase.quotas.RegionServerRpcQuotaManager.checkQuota(RegionServerRpcQuotaManager.java:170) at org.apache.hadoop.hbase.regionserver.RSRpcServices.mutate(RSRpcServices.java:2942) at org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:44994) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at 
org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:381) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:87) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:415) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:411) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:193) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:214) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at 
org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-11-27 05:06:38,616 ERROR [Listener at localhost/34689] quotas.ThrottleQuotaTestUtil(76): put failed after nRetries=5 java.net.SocketTimeoutException: callTimeout=10000, callDuration=10057: org.apache.hadoop.hbase.quotas.RpcThrottlingException: number of write requests exceeded - wait 12sec, 0ms at org.apache.hadoop.hbase.quotas.RpcThrottlingException.throwThrottlingException(RpcThrottlingException.java:133) at org.apache.hadoop.hbase.quotas.RpcThrottlingException.throwNumWriteRequestsExceeded(RpcThrottlingException.java:104) at org.apache.hadoop.hbase.quotas.TimeBasedLimiter.checkQuota(TimeBasedLimiter.java:158) at org.apache.hadoop.hbase.quotas.DefaultOperationQuota.checkQuota(DefaultOperationQuota.java:82) at org.apache.hadoop.hbase.quotas.RegionServerRpcQuotaManager.checkQuota(RegionServerRpcQuotaManager.java:220) at org.apache.hadoop.hbase.quotas.RegionServerRpcQuotaManager.checkQuota(RegionServerRpcQuotaManager.java:170) at org.apache.hadoop.hbase.regionserver.RSRpcServices.mutate(RSRpcServices.java:2942) at org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:44994) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) row 'row-5' on table 'TestQuotaAdmin0' at region=TestQuotaAdmin0,,1701061179762.44ac23936652c71f70e8746cf757ab6d., hostname=jenkins-hbase4.apache.org,41841,1701061176322, seqNum=2 at org.apache.hadoop.hbase.client.RpcRetryingCallerImpl.callWithRetries(RpcRetryingCallerImpl.java:156) at org.apache.hadoop.hbase.client.HTable.put(HTable.java:512) at org.apache.hadoop.hbase.quotas.ThrottleQuotaTestUtil.doPuts(ThrottleQuotaTestUtil.java:71) at org.apache.hadoop.hbase.quotas.ThrottleQuotaTestUtil.doPuts(ThrottleQuotaTestUtil.java:54) at org.apache.hadoop.hbase.quotas.TestClusterScopeQuotaThrottle.testNamespaceClusterScopeQuota(TestClusterScopeQuotaThrottle.java:128) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.quotas.RpcThrottlingException: org.apache.hadoop.hbase.quotas.RpcThrottlingException: number of write requests exceeded - wait 12sec, 0ms at org.apache.hadoop.hbase.quotas.RpcThrottlingException.throwThrottlingException(RpcThrottlingException.java:133) at org.apache.hadoop.hbase.quotas.RpcThrottlingException.throwNumWriteRequestsExceeded(RpcThrottlingException.java:104) at org.apache.hadoop.hbase.quotas.TimeBasedLimiter.checkQuota(TimeBasedLimiter.java:158) at org.apache.hadoop.hbase.quotas.DefaultOperationQuota.checkQuota(DefaultOperationQuota.java:82) at org.apache.hadoop.hbase.quotas.RegionServerRpcQuotaManager.checkQuota(RegionServerRpcQuotaManager.java:220) at org.apache.hadoop.hbase.quotas.RegionServerRpcQuotaManager.checkQuota(RegionServerRpcQuotaManager.java:170) at org.apache.hadoop.hbase.regionserver.RSRpcServices.mutate(RSRpcServices.java:2942) at org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:44994) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor43.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:276) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:261) at org.apache.hadoop.hbase.client.RegionServerCallable.call(RegionServerCallable.java:126) at org.apache.hadoop.hbase.client.RpcRetryingCallerImpl.callWithRetries(RpcRetryingCallerImpl.java:104) ... 
29 more Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.quotas.RpcThrottlingException): org.apache.hadoop.hbase.quotas.RpcThrottlingException: number of write requests exceeded - wait 12sec, 0ms at org.apache.hadoop.hbase.quotas.RpcThrottlingException.throwThrottlingException(RpcThrottlingException.java:133) at org.apache.hadoop.hbase.quotas.RpcThrottlingException.throwNumWriteRequestsExceeded(RpcThrottlingException.java:104) at org.apache.hadoop.hbase.quotas.TimeBasedLimiter.checkQuota(TimeBasedLimiter.java:158) at org.apache.hadoop.hbase.quotas.DefaultOperationQuota.checkQuota(DefaultOperationQuota.java:82) at org.apache.hadoop.hbase.quotas.RegionServerRpcQuotaManager.checkQuota(RegionServerRpcQuotaManager.java:220) at org.apache.hadoop.hbase.quotas.RegionServerRpcQuotaManager.checkQuota(RegionServerRpcQuotaManager.java:170) at org.apache.hadoop.hbase.regionserver.RSRpcServices.mutate(RSRpcServices.java:2942) at org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:44994) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:381) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:87) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:415) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:411) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:193) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:214) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-11-27 05:06:38,625 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41841] quotas.RegionServerRpcQuotaManager(222): Throttling exception for user=jenkins table=TestQuotaAdmin0 numWrites=0 numReads=1 numScans=0: number of read requests exceeded - wait 10sec, 0ms 2023-11-27 05:06:38,626 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41841] ipc.CallRunner(144): callId: 279 service: ClientService methodName: Get size: 114 connection: 172.31.14.131:47240 deadline: 1703953212828, exception=org.apache.hadoop.hbase.quotas.RpcThrottlingException: number of read requests exceeded - wait 10sec, 0ms 2023-11-27 05:06:38,878 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41841] quotas.RegionServerRpcQuotaManager(222): Throttling exception for user=jenkins table=TestQuotaAdmin0 numWrites=0 numReads=1 numScans=0: number of read requests exceeded - wait 10sec, 0ms 2023-11-27 05:06:38,878 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41841] ipc.CallRunner(144): callId: 281 service: ClientService methodName: Get size: 114 connection: 172.31.14.131:47240 deadline: 1703953212828, exception=org.apache.hadoop.hbase.quotas.RpcThrottlingException: number of read requests exceeded - wait 10sec, 0ms 2023-11-27 05:06:39,382 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41841] quotas.RegionServerRpcQuotaManager(222): Throttling exception for user=jenkins table=TestQuotaAdmin0 numWrites=0 numReads=1 numScans=0: number of read requests exceeded - wait 10sec, 0ms 2023-11-27 05:06:39,382 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41841] ipc.CallRunner(144): callId: 283 service: ClientService methodName: Get size: 114 connection: 172.31.14.131:47240 deadline: 1703953212828, exception=org.apache.hadoop.hbase.quotas.RpcThrottlingException: number of read requests exceeded - wait 10sec, 0ms 2023-11-27 05:06:40,136 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41841] quotas.RegionServerRpcQuotaManager(222): 
Throttling exception for user=jenkins table=TestQuotaAdmin0 numWrites=0 numReads=1 numScans=0: number of read requests exceeded - wait 10sec, 0ms 2023-11-27 05:06:40,136 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41841] ipc.CallRunner(144): callId: 285 service: ClientService methodName: Get size: 114 connection: 172.31.14.131:47240 deadline: 1703953212828, exception=org.apache.hadoop.hbase.quotas.RpcThrottlingException: number of read requests exceeded - wait 10sec, 0ms 2023-11-27 05:06:41,396 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41841] quotas.RegionServerRpcQuotaManager(222): Throttling exception for user=jenkins table=TestQuotaAdmin0 numWrites=0 numReads=1 numScans=0: number of read requests exceeded - wait 10sec, 0ms 2023-11-27 05:06:41,396 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41841] ipc.CallRunner(144): callId: 287 service: ClientService methodName: Get size: 114 connection: 172.31.14.131:47240 deadline: 1703953212828, exception=org.apache.hadoop.hbase.quotas.RpcThrottlingException: number of read requests exceeded - wait 10sec, 0ms 2023-11-27 05:06:43,923 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41841] quotas.RegionServerRpcQuotaManager(222): Throttling exception for user=jenkins table=TestQuotaAdmin0 numWrites=0 numReads=1 numScans=0: number of read requests exceeded - wait 10sec, 0ms 2023-11-27 05:06:43,924 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41841] ipc.CallRunner(144): callId: 289 service: ClientService methodName: Get size: 114 connection: 172.31.14.131:47240 deadline: 1703953212828, exception=org.apache.hadoop.hbase.quotas.RpcThrottlingException: number of read requests exceeded - wait 10sec, 0ms 2023-11-27 05:06:44,630 WARN [regionserver/jenkins-hbase4:0.Chore.8] quotas.QuotaCache$QuotaRefresherChore(344): Unable to read user from quota table java.net.SocketTimeoutException: callTimeout=1200000, callDuration=14626350: Call to address=jenkins-hbase4.apache.org:41841 failed on local exception: org.apache.hadoop.hbase.ipc.CallTimeoutException: Call[id=2884,methodName=Get], waitTime=14625700ms, rpcTimeout=60000ms row 'u.jenkins' on table 'hbase:quota' at region=hbase:quota,,1701061178843.be5ef4f3dfb2c43b447798061e19f02f., hostname=jenkins-hbase4.apache.org,41841,1701061176322, seqNum=2 at org.apache.hadoop.hbase.client.RpcRetryingCallerImpl.callWithRetries(RpcRetryingCallerImpl.java:156) at org.apache.hadoop.hbase.client.HTable.get(HTable.java:370) at org.apache.hadoop.hbase.client.HTable.get(HTable.java:343) at org.apache.hadoop.hbase.client.HTable.get(HTable.java:385) at org.apache.hadoop.hbase.quotas.QuotaTableUtil.doGet(QuotaTableUtil.java:910) at org.apache.hadoop.hbase.quotas.QuotaUtil.fetchUserQuotas(QuotaUtil.java:278) at org.apache.hadoop.hbase.quotas.QuotaCache$QuotaRefresherChore$3.fetchEntries(QuotaCache.java:274) at org.apache.hadoop.hbase.quotas.QuotaCache$QuotaRefresherChore.fetch(QuotaCache.java:333) at org.apache.hadoop.hbase.quotas.QuotaCache$QuotaRefresherChore.fetchUserQuotaState(QuotaCache.java:266) at org.apache.hadoop.hbase.quotas.QuotaCache$QuotaRefresherChore.chore(QuotaCache.java:227) at org.apache.hadoop.hbase.ScheduledChore.run(ScheduledChore.java:158) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308) at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180) at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294) at org.apache.hadoop.hbase.JitterScheduledThreadPoolExecutorImpl$JitteredRunnableScheduledFuture.run(JitterScheduledThreadPoolExecutorImpl.java:107) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.CallTimeoutException: Call to address=jenkins-hbase4.apache.org:41841 failed on local exception: org.apache.hadoop.hbase.ipc.CallTimeoutException: Call[id=2884,methodName=Get], waitTime=14625700ms, rpcTimeout=60000ms at org.apache.hadoop.hbase.ipc.IPCUtil.wrapException(IPCUtil.java:219) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:384) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:87) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:415) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:411) at org.apache.hadoop.hbase.ipc.Call.setTimeout(Call.java:108) at org.apache.hadoop.hbase.ipc.RpcConnection$1.run(RpcConnection.java:134) at org.apache.hbase.thirdparty.io.netty.util.HashedWheelTimer$HashedWheelTimeout.run(HashedWheelTimer.java:715) at org.apache.hbase.thirdparty.io.netty.util.concurrent.ImmediateExecutor.execute(ImmediateExecutor.java:34) at org.apache.hbase.thirdparty.io.netty.util.HashedWheelTimer$HashedWheelTimeout.expire(HashedWheelTimer.java:703) at org.apache.hbase.thirdparty.io.netty.util.HashedWheelTimer$HashedWheelBucket.expireTimeouts(HashedWheelTimer.java:790) at org.apache.hbase.thirdparty.io.netty.util.HashedWheelTimer$Worker.run(HashedWheelTimer.java:503) ... 1 more Caused by: org.apache.hadoop.hbase.ipc.CallTimeoutException: Call[id=2884,methodName=Get], waitTime=14625700ms, rpcTimeout=60000ms at org.apache.hadoop.hbase.ipc.RpcConnection$1.run(RpcConnection.java:135) ... 
6 more 2023-11-27 05:06:44,630 WARN [regionserver/jenkins-hbase4:0.Chore.6] quotas.QuotaCache$QuotaRefresherChore(344): Unable to read user from quota table java.net.SocketTimeoutException: callTimeout=1200000, callDuration=14591250: Call to address=jenkins-hbase4.apache.org:41841 failed on local exception: org.apache.hadoop.hbase.ipc.CallTimeoutException: Call[id=2889,methodName=Get], waitTime=14590100ms, rpcTimeout=60000ms row 'u.jenkins' on table 'hbase:quota' at region=hbase:quota,,1701061178843.be5ef4f3dfb2c43b447798061e19f02f., hostname=jenkins-hbase4.apache.org,41841,1701061176322, seqNum=2 at org.apache.hadoop.hbase.client.RpcRetryingCallerImpl.callWithRetries(RpcRetryingCallerImpl.java:156) at org.apache.hadoop.hbase.client.HTable.get(HTable.java:370) at org.apache.hadoop.hbase.client.HTable.get(HTable.java:343) at org.apache.hadoop.hbase.client.HTable.get(HTable.java:385) at org.apache.hadoop.hbase.quotas.QuotaTableUtil.doGet(QuotaTableUtil.java:910) at org.apache.hadoop.hbase.quotas.QuotaUtil.fetchUserQuotas(QuotaUtil.java:278) at org.apache.hadoop.hbase.quotas.QuotaCache$QuotaRefresherChore$3.fetchEntries(QuotaCache.java:274) at org.apache.hadoop.hbase.quotas.QuotaCache$QuotaRefresherChore.fetch(QuotaCache.java:333) at org.apache.hadoop.hbase.quotas.QuotaCache$QuotaRefresherChore.fetchUserQuotaState(QuotaCache.java:266) at org.apache.hadoop.hbase.quotas.QuotaCache$QuotaRefresherChore.chore(QuotaCache.java:227) at org.apache.hadoop.hbase.ScheduledChore.run(ScheduledChore.java:158) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308) at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180) at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294) at org.apache.hadoop.hbase.JitterScheduledThreadPoolExecutorImpl$JitteredRunnableScheduledFuture.run(JitterScheduledThreadPoolExecutorImpl.java:107) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.CallTimeoutException: Call to address=jenkins-hbase4.apache.org:41841 failed on local exception: org.apache.hadoop.hbase.ipc.CallTimeoutException: Call[id=2889,methodName=Get], waitTime=14590100ms, rpcTimeout=60000ms at org.apache.hadoop.hbase.ipc.IPCUtil.wrapException(IPCUtil.java:219) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:384) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:87) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:415) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:411) at org.apache.hadoop.hbase.ipc.Call.setTimeout(Call.java:108) at org.apache.hadoop.hbase.ipc.RpcConnection$1.run(RpcConnection.java:134) at org.apache.hbase.thirdparty.io.netty.util.HashedWheelTimer$HashedWheelTimeout.run(HashedWheelTimer.java:715) at org.apache.hbase.thirdparty.io.netty.util.concurrent.ImmediateExecutor.execute(ImmediateExecutor.java:34) at org.apache.hbase.thirdparty.io.netty.util.HashedWheelTimer$HashedWheelTimeout.expire(HashedWheelTimer.java:703) at org.apache.hbase.thirdparty.io.netty.util.HashedWheelTimer$HashedWheelBucket.expireTimeouts(HashedWheelTimer.java:790) at 
org.apache.hbase.thirdparty.io.netty.util.HashedWheelTimer$Worker.run(HashedWheelTimer.java:503) ... 1 more Caused by: org.apache.hadoop.hbase.ipc.CallTimeoutException: Call[id=2889,methodName=Get], waitTime=14590100ms, rpcTimeout=60000ms at org.apache.hadoop.hbase.ipc.RpcConnection$1.run(RpcConnection.java:135) ... 6 more 2023-11-27 05:06:44,630 WARN [regionserver/jenkins-hbase4:0.Chore.1] quotas.QuotaCache$QuotaRefresherChore(344): Unable to read user from quota table java.net.SocketTimeoutException: callTimeout=1200000, callDuration=14621151: Call to address=jenkins-hbase4.apache.org:41841 failed on local exception: org.apache.hadoop.hbase.ipc.CallTimeoutException: Call[id=2886,methodName=Get], waitTime=14620100ms, rpcTimeout=60000ms row 'u.jenkins' on table 'hbase:quota' at region=hbase:quota,,1701061178843.be5ef4f3dfb2c43b447798061e19f02f., hostname=jenkins-hbase4.apache.org,41841,1701061176322, seqNum=2 at org.apache.hadoop.hbase.client.RpcRetryingCallerImpl.callWithRetries(RpcRetryingCallerImpl.java:156) at org.apache.hadoop.hbase.client.HTable.get(HTable.java:370) at org.apache.hadoop.hbase.client.HTable.get(HTable.java:343) at org.apache.hadoop.hbase.client.HTable.get(HTable.java:385) at org.apache.hadoop.hbase.quotas.QuotaTableUtil.doGet(QuotaTableUtil.java:910) at org.apache.hadoop.hbase.quotas.QuotaUtil.fetchUserQuotas(QuotaUtil.java:278) at org.apache.hadoop.hbase.quotas.QuotaCache$QuotaRefresherChore$3.fetchEntries(QuotaCache.java:274) at org.apache.hadoop.hbase.quotas.QuotaCache$QuotaRefresherChore.fetch(QuotaCache.java:333) at org.apache.hadoop.hbase.quotas.QuotaCache$QuotaRefresherChore.fetchUserQuotaState(QuotaCache.java:266) at org.apache.hadoop.hbase.quotas.QuotaCache$QuotaRefresherChore.chore(QuotaCache.java:227) at org.apache.hadoop.hbase.ScheduledChore.run(ScheduledChore.java:158) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308) at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180) at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294) at org.apache.hadoop.hbase.JitterScheduledThreadPoolExecutorImpl$JitteredRunnableScheduledFuture.run(JitterScheduledThreadPoolExecutorImpl.java:107) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.CallTimeoutException: Call to address=jenkins-hbase4.apache.org:41841 failed on local exception: org.apache.hadoop.hbase.ipc.CallTimeoutException: Call[id=2886,methodName=Get], waitTime=14620100ms, rpcTimeout=60000ms at org.apache.hadoop.hbase.ipc.IPCUtil.wrapException(IPCUtil.java:219) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:384) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:87) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:415) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:411) at org.apache.hadoop.hbase.ipc.Call.setTimeout(Call.java:108) at org.apache.hadoop.hbase.ipc.RpcConnection$1.run(RpcConnection.java:134) at org.apache.hbase.thirdparty.io.netty.util.HashedWheelTimer$HashedWheelTimeout.run(HashedWheelTimer.java:715) at 
org.apache.hbase.thirdparty.io.netty.util.concurrent.ImmediateExecutor.execute(ImmediateExecutor.java:34) at org.apache.hbase.thirdparty.io.netty.util.HashedWheelTimer$HashedWheelTimeout.expire(HashedWheelTimer.java:703) at org.apache.hbase.thirdparty.io.netty.util.HashedWheelTimer$HashedWheelBucket.expireTimeouts(HashedWheelTimer.java:790) at org.apache.hbase.thirdparty.io.netty.util.HashedWheelTimer$Worker.run(HashedWheelTimer.java:503) ... 1 more Caused by: org.apache.hadoop.hbase.ipc.CallTimeoutException: Call[id=2886,methodName=Get], waitTime=14620100ms, rpcTimeout=60000ms at org.apache.hadoop.hbase.ipc.RpcConnection$1.run(RpcConnection.java:135) ... 6 more 2023-11-27 05:06:44,630 WARN [regionserver/jenkins-hbase4:0.Chore.2] quotas.QuotaCache$QuotaRefresherChore(344): Unable to read user from quota table java.net.SocketTimeoutException: callTimeout=1200000, callDuration=14593650: Call to address=jenkins-hbase4.apache.org:41841 failed on local exception: org.apache.hadoop.hbase.ipc.CallTimeoutException: Call[id=2888,methodName=Get], waitTime=14592300ms, rpcTimeout=60000ms row 'u.jenkins' on table 'hbase:quota' at region=hbase:quota,,1701061178843.be5ef4f3dfb2c43b447798061e19f02f., hostname=jenkins-hbase4.apache.org,41841,1701061176322, seqNum=2 at org.apache.hadoop.hbase.client.RpcRetryingCallerImpl.callWithRetries(RpcRetryingCallerImpl.java:156) at org.apache.hadoop.hbase.client.HTable.get(HTable.java:370) at org.apache.hadoop.hbase.client.HTable.get(HTable.java:343) at org.apache.hadoop.hbase.client.HTable.get(HTable.java:385) at org.apache.hadoop.hbase.quotas.QuotaTableUtil.doGet(QuotaTableUtil.java:910) at org.apache.hadoop.hbase.quotas.QuotaUtil.fetchUserQuotas(QuotaUtil.java:278) at org.apache.hadoop.hbase.quotas.QuotaCache$QuotaRefresherChore$3.fetchEntries(QuotaCache.java:274) at org.apache.hadoop.hbase.quotas.QuotaCache$QuotaRefresherChore.fetch(QuotaCache.java:333) at org.apache.hadoop.hbase.quotas.QuotaCache$QuotaRefresherChore.fetchUserQuotaState(QuotaCache.java:266) at org.apache.hadoop.hbase.quotas.QuotaCache$QuotaRefresherChore.chore(QuotaCache.java:227) at org.apache.hadoop.hbase.ScheduledChore.run(ScheduledChore.java:158) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308) at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180) at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294) at org.apache.hadoop.hbase.JitterScheduledThreadPoolExecutorImpl$JitteredRunnableScheduledFuture.run(JitterScheduledThreadPoolExecutorImpl.java:107) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.CallTimeoutException: Call to address=jenkins-hbase4.apache.org:41841 failed on local exception: org.apache.hadoop.hbase.ipc.CallTimeoutException: Call[id=2888,methodName=Get], waitTime=14592300ms, rpcTimeout=60000ms at org.apache.hadoop.hbase.ipc.IPCUtil.wrapException(IPCUtil.java:219) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:384) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:87) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:415) at 
org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:411) at org.apache.hadoop.hbase.ipc.Call.setTimeout(Call.java:108) at org.apache.hadoop.hbase.ipc.RpcConnection$1.run(RpcConnection.java:134) at org.apache.hbase.thirdparty.io.netty.util.HashedWheelTimer$HashedWheelTimeout.run(HashedWheelTimer.java:715) at org.apache.hbase.thirdparty.io.netty.util.concurrent.ImmediateExecutor.execute(ImmediateExecutor.java:34) at org.apache.hbase.thirdparty.io.netty.util.HashedWheelTimer$HashedWheelTimeout.expire(HashedWheelTimer.java:703) at org.apache.hbase.thirdparty.io.netty.util.HashedWheelTimer$HashedWheelBucket.expireTimeouts(HashedWheelTimer.java:790) at org.apache.hbase.thirdparty.io.netty.util.HashedWheelTimer$Worker.run(HashedWheelTimer.java:503) ... 1 more Caused by: org.apache.hadoop.hbase.ipc.CallTimeoutException: Call[id=2888,methodName=Get], waitTime=14592300ms, rpcTimeout=60000ms at org.apache.hadoop.hbase.ipc.RpcConnection$1.run(RpcConnection.java:135) ... 6 more 2023-11-27 05:06:44,630 WARN [regionserver/jenkins-hbase4:0.Chore.5] quotas.QuotaCache$QuotaRefresherChore(344): Unable to read user from quota table java.net.SocketTimeoutException: callTimeout=1200000, callDuration=14624250: Call to address=jenkins-hbase4.apache.org:41841 failed on local exception: org.apache.hadoop.hbase.ipc.CallTimeoutException: Call[id=2885,methodName=Get], waitTime=14623400ms, rpcTimeout=60000ms row 'u.jenkins' on table 'hbase:quota' at region=hbase:quota,,1701061178843.be5ef4f3dfb2c43b447798061e19f02f., hostname=jenkins-hbase4.apache.org,41841,1701061176322, seqNum=2 at org.apache.hadoop.hbase.client.RpcRetryingCallerImpl.callWithRetries(RpcRetryingCallerImpl.java:156) at org.apache.hadoop.hbase.client.HTable.get(HTable.java:370) at org.apache.hadoop.hbase.client.HTable.get(HTable.java:343) at org.apache.hadoop.hbase.client.HTable.get(HTable.java:385) at org.apache.hadoop.hbase.quotas.QuotaTableUtil.doGet(QuotaTableUtil.java:910) at org.apache.hadoop.hbase.quotas.QuotaUtil.fetchUserQuotas(QuotaUtil.java:278) at org.apache.hadoop.hbase.quotas.QuotaCache$QuotaRefresherChore$3.fetchEntries(QuotaCache.java:274) at org.apache.hadoop.hbase.quotas.QuotaCache$QuotaRefresherChore.fetch(QuotaCache.java:333) at org.apache.hadoop.hbase.quotas.QuotaCache$QuotaRefresherChore.fetchUserQuotaState(QuotaCache.java:266) at org.apache.hadoop.hbase.quotas.QuotaCache$QuotaRefresherChore.chore(QuotaCache.java:227) at org.apache.hadoop.hbase.ScheduledChore.run(ScheduledChore.java:158) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308) at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180) at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294) at org.apache.hadoop.hbase.JitterScheduledThreadPoolExecutorImpl$JitteredRunnableScheduledFuture.run(JitterScheduledThreadPoolExecutorImpl.java:107) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.CallTimeoutException: Call to address=jenkins-hbase4.apache.org:41841 failed on local exception: org.apache.hadoop.hbase.ipc.CallTimeoutException: Call[id=2885,methodName=Get], waitTime=14623400ms, rpcTimeout=60000ms at 
org.apache.hadoop.hbase.ipc.IPCUtil.wrapException(IPCUtil.java:219) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:384) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:87) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:415) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:411) at org.apache.hadoop.hbase.ipc.Call.setTimeout(Call.java:108) at org.apache.hadoop.hbase.ipc.RpcConnection$1.run(RpcConnection.java:134) at org.apache.hbase.thirdparty.io.netty.util.HashedWheelTimer$HashedWheelTimeout.run(HashedWheelTimer.java:715) at org.apache.hbase.thirdparty.io.netty.util.concurrent.ImmediateExecutor.execute(ImmediateExecutor.java:34) at org.apache.hbase.thirdparty.io.netty.util.HashedWheelTimer$HashedWheelTimeout.expire(HashedWheelTimer.java:703) at org.apache.hbase.thirdparty.io.netty.util.HashedWheelTimer$HashedWheelBucket.expireTimeouts(HashedWheelTimer.java:790) at org.apache.hbase.thirdparty.io.netty.util.HashedWheelTimer$Worker.run(HashedWheelTimer.java:503) ... 1 more Caused by: org.apache.hadoop.hbase.ipc.CallTimeoutException: Call[id=2885,methodName=Get], waitTime=14623400ms, rpcTimeout=60000ms at org.apache.hadoop.hbase.ipc.RpcConnection$1.run(RpcConnection.java:135) ... 6 more 2023-11-27 05:06:44,630 WARN [regionserver/jenkins-hbase4:0.Chore.4] quotas.QuotaCache$QuotaRefresherChore(344): Unable to read user from quota table java.net.SocketTimeoutException: callTimeout=1200000, callDuration=14619152: Call to address=jenkins-hbase4.apache.org:41841 failed on local exception: org.apache.hadoop.hbase.ipc.CallTimeoutException: Call[id=2887,methodName=Get], waitTime=14618200ms, rpcTimeout=60000ms row 'u.jenkins' on table 'hbase:quota' at region=hbase:quota,,1701061178843.be5ef4f3dfb2c43b447798061e19f02f., hostname=jenkins-hbase4.apache.org,41841,1701061176322, seqNum=2 at org.apache.hadoop.hbase.client.RpcRetryingCallerImpl.callWithRetries(RpcRetryingCallerImpl.java:156) at org.apache.hadoop.hbase.client.HTable.get(HTable.java:370) at org.apache.hadoop.hbase.client.HTable.get(HTable.java:343) at org.apache.hadoop.hbase.client.HTable.get(HTable.java:385) at org.apache.hadoop.hbase.quotas.QuotaTableUtil.doGet(QuotaTableUtil.java:910) at org.apache.hadoop.hbase.quotas.QuotaUtil.fetchUserQuotas(QuotaUtil.java:278) at org.apache.hadoop.hbase.quotas.QuotaCache$QuotaRefresherChore$3.fetchEntries(QuotaCache.java:274) at org.apache.hadoop.hbase.quotas.QuotaCache$QuotaRefresherChore.fetch(QuotaCache.java:333) at org.apache.hadoop.hbase.quotas.QuotaCache$QuotaRefresherChore.fetchUserQuotaState(QuotaCache.java:266) at org.apache.hadoop.hbase.quotas.QuotaCache$QuotaRefresherChore.chore(QuotaCache.java:227) at org.apache.hadoop.hbase.ScheduledChore.run(ScheduledChore.java:158) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308) at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180) at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294) at org.apache.hadoop.hbase.JitterScheduledThreadPoolExecutorImpl$JitteredRunnableScheduledFuture.run(JitterScheduledThreadPoolExecutorImpl.java:107) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) at 
java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.CallTimeoutException: Call to address=jenkins-hbase4.apache.org:41841 failed on local exception: org.apache.hadoop.hbase.ipc.CallTimeoutException: Call[id=2887,methodName=Get], waitTime=14618200ms, rpcTimeout=60000ms at org.apache.hadoop.hbase.ipc.IPCUtil.wrapException(IPCUtil.java:219) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:384) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:87) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:415) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:411) at org.apache.hadoop.hbase.ipc.Call.setTimeout(Call.java:108) at org.apache.hadoop.hbase.ipc.RpcConnection$1.run(RpcConnection.java:134) at org.apache.hbase.thirdparty.io.netty.util.HashedWheelTimer$HashedWheelTimeout.run(HashedWheelTimer.java:715) at org.apache.hbase.thirdparty.io.netty.util.concurrent.ImmediateExecutor.execute(ImmediateExecutor.java:34) at org.apache.hbase.thirdparty.io.netty.util.HashedWheelTimer$HashedWheelTimeout.expire(HashedWheelTimer.java:703) at org.apache.hbase.thirdparty.io.netty.util.HashedWheelTimer$HashedWheelBucket.expireTimeouts(HashedWheelTimer.java:790) at org.apache.hbase.thirdparty.io.netty.util.HashedWheelTimer$Worker.run(HashedWheelTimer.java:503) ... 1 more Caused by: org.apache.hadoop.hbase.ipc.CallTimeoutException: Call[id=2887,methodName=Get], waitTime=14618200ms, rpcTimeout=60000ms at org.apache.hadoop.hbase.ipc.RpcConnection$1.run(RpcConnection.java:135) ... 6 more 2023-11-27 05:06:44,633 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=41841] ipc.CallRunner(144): callId: 2961 service: ClientService methodName: Scan size: 137 connection: 172.31.14.131:53770 deadline: 1703953262828, exception=org.apache.hadoop.hbase.NotServingRegionException: hbase:meta,,1 is not online on jenkins-hbase4.apache.org,41841,1701061176322 2023-11-27 05:06:48,972 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41841] quotas.RegionServerRpcQuotaManager(222): Throttling exception for user=jenkins table=TestQuotaAdmin0 numWrites=0 numReads=1 numScans=0: number of read requests exceeded - wait 10sec, 0ms 2023-11-27 05:06:48,972 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41841] ipc.CallRunner(144): callId: 291 service: ClientService methodName: Get size: 114 connection: 172.31.14.131:47240 deadline: 1703953212828, exception=org.apache.hadoop.hbase.quotas.RpcThrottlingException: number of read requests exceeded - wait 10sec, 0ms 2023-11-27 05:06:48,973 DEBUG [Listener at localhost/34689] client.RpcRetryingCallerImpl(129): Call exception, tries=6, retries=16, started=0 ms ago, cancelled=false, msg=org.apache.hadoop.hbase.quotas.RpcThrottlingException: number of read requests exceeded - wait 10sec, 0ms at org.apache.hadoop.hbase.quotas.RpcThrottlingException.throwThrottlingException(RpcThrottlingException.java:133) at org.apache.hadoop.hbase.quotas.RpcThrottlingException.throwNumReadRequestsExceeded(RpcThrottlingException.java:99) at org.apache.hadoop.hbase.quotas.TimeBasedLimiter.checkQuota(TimeBasedLimiter.java:172) at org.apache.hadoop.hbase.quotas.DefaultOperationQuota.checkQuota(DefaultOperationQuota.java:82) at org.apache.hadoop.hbase.quotas.RegionServerRpcQuotaManager.checkQuota(RegionServerRpcQuotaManager.java:220) at org.apache.hadoop.hbase.quotas.RegionServerRpcQuotaManager.checkQuota(RegionServerRpcQuotaManager.java:168) at 
org.apache.hadoop.hbase.regionserver.RSRpcServices.get(RSRpcServices.java:2532) at org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:44992) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) , details=row 'row-6' on table 'TestQuotaAdmin0' at region=TestQuotaAdmin0,,1701061179762.44ac23936652c71f70e8746cf757ab6d., hostname=jenkins-hbase4.apache.org,41841,1701061176322, seqNum=2, see https://s.apache.org/timeout, exception=org.apache.hadoop.hbase.quotas.RpcThrottlingException: org.apache.hadoop.hbase.quotas.RpcThrottlingException: number of read requests exceeded - wait 10sec, 0ms at org.apache.hadoop.hbase.quotas.RpcThrottlingException.throwThrottlingException(RpcThrottlingException.java:133) at org.apache.hadoop.hbase.quotas.RpcThrottlingException.throwNumReadRequestsExceeded(RpcThrottlingException.java:99) at org.apache.hadoop.hbase.quotas.TimeBasedLimiter.checkQuota(TimeBasedLimiter.java:172) at org.apache.hadoop.hbase.quotas.DefaultOperationQuota.checkQuota(DefaultOperationQuota.java:82) at org.apache.hadoop.hbase.quotas.RegionServerRpcQuotaManager.checkQuota(RegionServerRpcQuotaManager.java:220) at org.apache.hadoop.hbase.quotas.RegionServerRpcQuotaManager.checkQuota(RegionServerRpcQuotaManager.java:168) at org.apache.hadoop.hbase.regionserver.RSRpcServices.get(RSRpcServices.java:2532) at org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:44992) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor43.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:276) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:261) at org.apache.hadoop.hbase.client.RegionServerCallable.call(RegionServerCallable.java:126) at org.apache.hadoop.hbase.client.RpcRetryingCallerImpl.callWithRetries(RpcRetryingCallerImpl.java:104) at org.apache.hadoop.hbase.client.HTable.get(HTable.java:370) at org.apache.hadoop.hbase.client.HTable.get(HTable.java:343) at org.apache.hadoop.hbase.quotas.ThrottleQuotaTestUtil.doGets(ThrottleQuotaTestUtil.java:95) at org.apache.hadoop.hbase.quotas.TestClusterScopeQuotaThrottle.testNamespaceClusterScopeQuota(TestClusterScopeQuotaThrottle.java:129) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.quotas.RpcThrottlingException): org.apache.hadoop.hbase.quotas.RpcThrottlingException: number of read requests exceeded - wait 10sec, 0ms at org.apache.hadoop.hbase.quotas.RpcThrottlingException.throwThrottlingException(RpcThrottlingException.java:133) at org.apache.hadoop.hbase.quotas.RpcThrottlingException.throwNumReadRequestsExceeded(RpcThrottlingException.java:99) at org.apache.hadoop.hbase.quotas.TimeBasedLimiter.checkQuota(TimeBasedLimiter.java:172) at org.apache.hadoop.hbase.quotas.DefaultOperationQuota.checkQuota(DefaultOperationQuota.java:82) at org.apache.hadoop.hbase.quotas.RegionServerRpcQuotaManager.checkQuota(RegionServerRpcQuotaManager.java:220) at org.apache.hadoop.hbase.quotas.RegionServerRpcQuotaManager.checkQuota(RegionServerRpcQuotaManager.java:168) at org.apache.hadoop.hbase.regionserver.RSRpcServices.get(RSRpcServices.java:2532) at org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:44992) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:381) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:87) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:415) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:411) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:193) at 
org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:214) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-11-27 05:06:48,974 ERROR [Listener at localhost/34689] quotas.ThrottleQuotaTestUtil(100): get failed after nRetries=6 java.net.SocketTimeoutException: callTimeout=10000, callDuration=10046: org.apache.hadoop.hbase.quotas.RpcThrottlingException: number of read requests exceeded - wait 10sec, 0ms at org.apache.hadoop.hbase.quotas.RpcThrottlingException.throwThrottlingException(RpcThrottlingException.java:133) at org.apache.hadoop.hbase.quotas.RpcThrottlingException.throwNumReadRequestsExceeded(RpcThrottlingException.java:99) at org.apache.hadoop.hbase.quotas.TimeBasedLimiter.checkQuota(TimeBasedLimiter.java:172) at org.apache.hadoop.hbase.quotas.DefaultOperationQuota.checkQuota(DefaultOperationQuota.java:82) at org.apache.hadoop.hbase.quotas.RegionServerRpcQuotaManager.checkQuota(RegionServerRpcQuotaManager.java:220) at org.apache.hadoop.hbase.quotas.RegionServerRpcQuotaManager.checkQuota(RegionServerRpcQuotaManager.java:168) at org.apache.hadoop.hbase.regionserver.RSRpcServices.get(RSRpcServices.java:2532) at org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:44992) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) row 'row-6' on table 'TestQuotaAdmin0' at region=TestQuotaAdmin0,,1701061179762.44ac23936652c71f70e8746cf757ab6d., hostname=jenkins-hbase4.apache.org,41841,1701061176322, seqNum=2 at org.apache.hadoop.hbase.client.RpcRetryingCallerImpl.callWithRetries(RpcRetryingCallerImpl.java:156) at org.apache.hadoop.hbase.client.HTable.get(HTable.java:370) at org.apache.hadoop.hbase.client.HTable.get(HTable.java:343) at org.apache.hadoop.hbase.quotas.ThrottleQuotaTestUtil.doGets(ThrottleQuotaTestUtil.java:95) at org.apache.hadoop.hbase.quotas.TestClusterScopeQuotaThrottle.testNamespaceClusterScopeQuota(TestClusterScopeQuotaThrottle.java:129) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.quotas.RpcThrottlingException: org.apache.hadoop.hbase.quotas.RpcThrottlingException: number of read requests exceeded - wait 10sec, 0ms at org.apache.hadoop.hbase.quotas.RpcThrottlingException.throwThrottlingException(RpcThrottlingException.java:133) at org.apache.hadoop.hbase.quotas.RpcThrottlingException.throwNumReadRequestsExceeded(RpcThrottlingException.java:99) at org.apache.hadoop.hbase.quotas.TimeBasedLimiter.checkQuota(TimeBasedLimiter.java:172) at org.apache.hadoop.hbase.quotas.DefaultOperationQuota.checkQuota(DefaultOperationQuota.java:82) at org.apache.hadoop.hbase.quotas.RegionServerRpcQuotaManager.checkQuota(RegionServerRpcQuotaManager.java:220) at org.apache.hadoop.hbase.quotas.RegionServerRpcQuotaManager.checkQuota(RegionServerRpcQuotaManager.java:168) at org.apache.hadoop.hbase.regionserver.RSRpcServices.get(RSRpcServices.java:2532) at org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:44992) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor43.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:276) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:261) at org.apache.hadoop.hbase.client.RegionServerCallable.call(RegionServerCallable.java:126) at org.apache.hadoop.hbase.client.RpcRetryingCallerImpl.callWithRetries(RpcRetryingCallerImpl.java:104) ... 
29 more Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.quotas.RpcThrottlingException): org.apache.hadoop.hbase.quotas.RpcThrottlingException: number of read requests exceeded - wait 10sec, 0ms at org.apache.hadoop.hbase.quotas.RpcThrottlingException.throwThrottlingException(RpcThrottlingException.java:133) at org.apache.hadoop.hbase.quotas.RpcThrottlingException.throwNumReadRequestsExceeded(RpcThrottlingException.java:99) at org.apache.hadoop.hbase.quotas.TimeBasedLimiter.checkQuota(TimeBasedLimiter.java:172) at org.apache.hadoop.hbase.quotas.DefaultOperationQuota.checkQuota(DefaultOperationQuota.java:82) at org.apache.hadoop.hbase.quotas.RegionServerRpcQuotaManager.checkQuota(RegionServerRpcQuotaManager.java:220) at org.apache.hadoop.hbase.quotas.RegionServerRpcQuotaManager.checkQuota(RegionServerRpcQuotaManager.java:168) at org.apache.hadoop.hbase.regionserver.RSRpcServices.get(RSRpcServices.java:2532) at org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:44992) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:381) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:87) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:415) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:411) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:193) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:214) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-11-27 05:06:49,023 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2489): Flush column family: q of be5ef4f3dfb2c43b447798061e19f02f because time of oldest edit=1703946002828 is > 3600000 from now =1703956802828 2023-11-27 05:06:49,024 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2489): Flush column family: u of be5ef4f3dfb2c43b447798061e19f02f because time of oldest edit=1703946002828 is > 3600000 from now =1703956802828 2023-11-27 05:06:49,024 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing be5ef4f3dfb2c43b447798061e19f02f 2/2 column families, dataSize=236 B heapSize=1.16 KB 2023-11-27 05:06:49,040 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=167 B at sequenceid=29 (bloomFilter=true), to=hdfs://localhost:41015/user/jenkins/test-data/88c861d8-82fa-b545-1769-68eb426e44d0/data/hbase/quota/be5ef4f3dfb2c43b447798061e19f02f/.tmp/q/7408bb6eb56c47a7b85bc864b5a3823e 2023-11-27 05:06:49,047 INFO [MemStoreFlusher.0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 7408bb6eb56c47a7b85bc864b5a3823e 2023-11-27 05:06:49,058 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=69 B at sequenceid=29 (bloomFilter=true), to=hdfs://localhost:41015/user/jenkins/test-data/88c861d8-82fa-b545-1769-68eb426e44d0/data/hbase/quota/be5ef4f3dfb2c43b447798061e19f02f/.tmp/u/7e415346147540a98347aece34392504 2023-11-27 05:06:49,063 INFO [MemStoreFlusher.0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 7e415346147540a98347aece34392504 2023-11-27 05:06:49,064 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:41015/user/jenkins/test-data/88c861d8-82fa-b545-1769-68eb426e44d0/data/hbase/quota/be5ef4f3dfb2c43b447798061e19f02f/.tmp/q/7408bb6eb56c47a7b85bc864b5a3823e as 
hdfs://localhost:41015/user/jenkins/test-data/88c861d8-82fa-b545-1769-68eb426e44d0/data/hbase/quota/be5ef4f3dfb2c43b447798061e19f02f/q/7408bb6eb56c47a7b85bc864b5a3823e 2023-11-27 05:06:49,070 INFO [MemStoreFlusher.0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 7408bb6eb56c47a7b85bc864b5a3823e 2023-11-27 05:06:49,070 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost:41015/user/jenkins/test-data/88c861d8-82fa-b545-1769-68eb426e44d0/data/hbase/quota/be5ef4f3dfb2c43b447798061e19f02f/q/7408bb6eb56c47a7b85bc864b5a3823e, entries=2, sequenceid=29, filesize=5.0 K 2023-11-27 05:06:49,071 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:41015/user/jenkins/test-data/88c861d8-82fa-b545-1769-68eb426e44d0/data/hbase/quota/be5ef4f3dfb2c43b447798061e19f02f/.tmp/u/7e415346147540a98347aece34392504 as hdfs://localhost:41015/user/jenkins/test-data/88c861d8-82fa-b545-1769-68eb426e44d0/data/hbase/quota/be5ef4f3dfb2c43b447798061e19f02f/u/7e415346147540a98347aece34392504 2023-11-27 05:06:49,076 INFO [MemStoreFlusher.0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 7e415346147540a98347aece34392504 2023-11-27 05:06:49,076 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost:41015/user/jenkins/test-data/88c861d8-82fa-b545-1769-68eb426e44d0/data/hbase/quota/be5ef4f3dfb2c43b447798061e19f02f/u/7e415346147540a98347aece34392504, entries=2, sequenceid=29, filesize=5.0 K 2023-11-27 05:06:49,077 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~236 B/236, heapSize ~1.13 KB/1160, currentSize=0 B/0 for be5ef4f3dfb2c43b447798061e19f02f in 0ms, sequenceid=29, compaction requested=false 2023-11-27 05:06:49,077 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for be5ef4f3dfb2c43b447798061e19f02f: 2023-11-27 05:06:49,228 DEBUG [Listener at localhost/34689] quotas.ThrottleQuotaTestUtil(184): QuotaCache 2023-11-27 05:06:49,228 DEBUG [Listener at localhost/34689] quotas.ThrottleQuotaTestUtil(185): {default=QuotaState(ts=1703956802828 bypass), TestNs=QuotaState(ts=1703956802828 bypass)} 2023-11-27 05:06:49,228 DEBUG [Listener at localhost/34689] quotas.ThrottleQuotaTestUtil(186): {TestQuotaAdmin0=QuotaState(ts=1703956802828 bypass), TestNs:TestTable=QuotaState(ts=1703956802828 bypass), TestQuotaAdmin1=QuotaState(ts=1703956802828 bypass)} 2023-11-27 05:06:49,228 DEBUG [Listener at localhost/34689] quotas.ThrottleQuotaTestUtil(187): {} 2023-11-27 05:06:49,228 DEBUG [Listener at localhost/34689] quotas.ThrottleQuotaTestUtil(188): {all=QuotaState(ts=1703956802828 bypass)} 2023-11-27 05:06:49,478 DEBUG [Listener at localhost/34689] quotas.ThrottleQuotaTestUtil(184): QuotaCache 2023-11-27 05:06:49,478 DEBUG [Listener at localhost/34689] quotas.ThrottleQuotaTestUtil(185): {default=QuotaState(ts=1703956802828 bypass), TestNs=QuotaState(ts=1703956802828 bypass)} 2023-11-27 05:06:49,478 DEBUG [Listener at localhost/34689] quotas.ThrottleQuotaTestUtil(186): {TestQuotaAdmin0=QuotaState(ts=1703956802828 bypass), TestNs:TestTable=QuotaState(ts=1703956802828 bypass), TestQuotaAdmin2=QuotaState(ts=1703956802828 bypass)} 2023-11-27 05:06:49,478 DEBUG [Listener at localhost/34689] quotas.ThrottleQuotaTestUtil(187): {jenkins=UserQuotaState(ts=1703956802828 bypass)} 2023-11-27 05:06:49,478 DEBUG [Listener at localhost/34689] quotas.ThrottleQuotaTestUtil(188): {all=QuotaState(ts=1703956802828 bypass)} 
2023-11-27 05:06:49,491 INFO [Listener at localhost/34689] hbase.ResourceChecker(175): after: quotas.TestClusterScopeQuotaThrottle#testNamespaceClusterScopeQuota Thread=297 (was 291) - Thread LEAK? -, OpenFileDescriptor=630 (was 609) - OpenFileDescriptor LEAK? -, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=53 (was 81), ProcessCount=166 (was 166), AvailableMemoryMB=7307 (was 7294) - AvailableMemoryMB LEAK? - 2023-11-27 05:06:49,493 INFO [Listener at localhost/34689] client.HBaseAdmin$15(890): Started disable of TestQuotaAdmin0 2023-11-27 05:06:49,497 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33323] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable TestQuotaAdmin0 2023-11-27 05:06:49,503 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33323] procedure2.ProcedureExecutor(1028): Stored pid=36, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=TestQuotaAdmin0 2023-11-27 05:06:49,506 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"TestQuotaAdmin0","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1701061609506"}]},"ts":"1701061609506"} 2023-11-27 05:06:49,507 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33323] master.MasterRpcServices(1230): Checking to see if procedure is done pid=36 2023-11-27 05:06:49,509 INFO [RS-EventLoopGroup-3-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:36520, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-11-27 05:06:49,510 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=TestQuotaAdmin0, state=DISABLING in hbase:meta 2023-11-27 05:06:49,512 INFO [PEWorker-3] procedure.DisableTableProcedure(293): Set TestQuotaAdmin0 to state=DISABLING 2023-11-27 05:06:49,513 INFO [PEWorker-3] procedure2.ProcedureExecutor(1680): Initialized subprocedures=[{pid=37, ppid=36, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=TestQuotaAdmin0, region=44ac23936652c71f70e8746cf757ab6d, UNASSIGN}] 2023-11-27 05:06:49,514 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=37, ppid=36, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=TestQuotaAdmin0, region=44ac23936652c71f70e8746cf757ab6d, UNASSIGN 2023-11-27 05:06:49,515 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=37 updating hbase:meta row=44ac23936652c71f70e8746cf757ab6d, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,41841,1701061176322 2023-11-27 05:06:49,515 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"TestQuotaAdmin0,,1701061179762.44ac23936652c71f70e8746cf757ab6d.","families":{"info":[{"qualifier":"regioninfo","vlen":49,"tag":[],"timestamp":"1701061609515"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1701061609515"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1701061609515"}]},"ts":"1701061609515"} 2023-11-27 05:06:49,516 INFO [PEWorker-2] procedure2.ProcedureExecutor(1680): Initialized subprocedures=[{pid=38, ppid=37, state=RUNNABLE; CloseRegionProcedure 44ac23936652c71f70e8746cf757ab6d, server=jenkins-hbase4.apache.org,41841,1701061176322}] 2023-11-27 05:06:49,668 DEBUG [RSProcedureDispatcher-pool-6] master.ServerManager(702): New admin connection to jenkins-hbase4.apache.org,41841,1701061176322 2023-11-27 05:06:49,669 INFO [RS-EventLoopGroup-4-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:51074, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins 
(auth:SIMPLE), service=AdminService 2023-11-27 05:06:49,669 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 44ac23936652c71f70e8746cf757ab6d 2023-11-27 05:06:49,670 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 44ac23936652c71f70e8746cf757ab6d, disabling compactions & flushes 2023-11-27 05:06:49,670 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region TestQuotaAdmin0,,1701061179762.44ac23936652c71f70e8746cf757ab6d. 2023-11-27 05:06:49,670 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on TestQuotaAdmin0,,1701061179762.44ac23936652c71f70e8746cf757ab6d. 2023-11-27 05:06:49,670 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on TestQuotaAdmin0,,1701061179762.44ac23936652c71f70e8746cf757ab6d. after waiting 0 ms 2023-11-27 05:06:49,670 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region TestQuotaAdmin0,,1701061179762.44ac23936652c71f70e8746cf757ab6d. 2023-11-27 05:06:49,670 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 44ac23936652c71f70e8746cf757ab6d 1/1 column families, dataSize=170 B heapSize=816 B 2023-11-27 05:06:49,684 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=170 B at sequenceid=24 (bloomFilter=false), to=hdfs://localhost:41015/user/jenkins/test-data/88c861d8-82fa-b545-1769-68eb426e44d0/data/default/TestQuotaAdmin0/44ac23936652c71f70e8746cf757ab6d/.tmp/cf/5042d4b5c8be4fddb4c5501c4c2135cd 2023-11-27 05:06:49,690 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:41015/user/jenkins/test-data/88c861d8-82fa-b545-1769-68eb426e44d0/data/default/TestQuotaAdmin0/44ac23936652c71f70e8746cf757ab6d/.tmp/cf/5042d4b5c8be4fddb4c5501c4c2135cd as hdfs://localhost:41015/user/jenkins/test-data/88c861d8-82fa-b545-1769-68eb426e44d0/data/default/TestQuotaAdmin0/44ac23936652c71f70e8746cf757ab6d/cf/5042d4b5c8be4fddb4c5501c4c2135cd 2023-11-27 05:06:49,696 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:41015/user/jenkins/test-data/88c861d8-82fa-b545-1769-68eb426e44d0/data/default/TestQuotaAdmin0/44ac23936652c71f70e8746cf757ab6d/cf/5042d4b5c8be4fddb4c5501c4c2135cd, entries=5, sequenceid=24, filesize=4.7 K 2023-11-27 05:06:49,697 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~170 B/170, heapSize ~800 B/800, currentSize=0 B/0 for 44ac23936652c71f70e8746cf757ab6d in 27ms, sequenceid=24, compaction requested=false 2023-11-27 05:06:49,702 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41015/user/jenkins/test-data/88c861d8-82fa-b545-1769-68eb426e44d0/data/default/TestQuotaAdmin0/44ac23936652c71f70e8746cf757ab6d/recovered.edits/27.seqid, newMaxSeqId=27, maxSeqId=1 2023-11-27 05:06:49,703 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed TestQuotaAdmin0,,1701061179762.44ac23936652c71f70e8746cf757ab6d. 
2023-11-27 05:06:49,703 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 44ac23936652c71f70e8746cf757ab6d: 2023-11-27 05:06:49,705 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 44ac23936652c71f70e8746cf757ab6d 2023-11-27 05:06:49,706 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=37 updating hbase:meta row=44ac23936652c71f70e8746cf757ab6d, regionState=CLOSED 2023-11-27 05:06:49,706 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"TestQuotaAdmin0,,1701061179762.44ac23936652c71f70e8746cf757ab6d.","families":{"info":[{"qualifier":"regioninfo","vlen":49,"tag":[],"timestamp":"1701061609705"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1701061609705"}]},"ts":"1701061609705"} 2023-11-27 05:06:49,709 INFO [PEWorker-1] procedure2.ProcedureExecutor(1825): Finished subprocedure pid=38, resume processing ppid=37 2023-11-27 05:06:49,709 INFO [PEWorker-1] procedure2.ProcedureExecutor(1409): Finished pid=38, ppid=37, state=SUCCESS; CloseRegionProcedure 44ac23936652c71f70e8746cf757ab6d, server=jenkins-hbase4.apache.org,41841,1701061176322 in 191 msec 2023-11-27 05:06:49,710 INFO [PEWorker-5] procedure2.ProcedureExecutor(1825): Finished subprocedure pid=37, resume processing ppid=36 2023-11-27 05:06:49,711 INFO [PEWorker-5] procedure2.ProcedureExecutor(1409): Finished pid=37, ppid=36, state=SUCCESS; TransitRegionStateProcedure table=TestQuotaAdmin0, region=44ac23936652c71f70e8746cf757ab6d, UNASSIGN in 196 msec 2023-11-27 05:06:49,711 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"TestQuotaAdmin0","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1701061609711"}]},"ts":"1701061609711"} 2023-11-27 05:06:49,712 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=TestQuotaAdmin0, state=DISABLED in hbase:meta 2023-11-27 05:06:49,714 INFO [PEWorker-3] procedure.DisableTableProcedure(305): Set TestQuotaAdmin0 to state=DISABLED 2023-11-27 05:06:49,716 INFO [PEWorker-3] procedure2.ProcedureExecutor(1409): Finished pid=36, state=SUCCESS; DisableTableProcedure table=TestQuotaAdmin0 in 216 msec 2023-11-27 05:06:49,759 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33323] master.MasterRpcServices(1230): Checking to see if procedure is done pid=36 2023-11-27 05:06:49,759 INFO [Listener at localhost/34689] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:TestQuotaAdmin0, procId: 36 completed 2023-11-27 05:06:49,764 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33323] master.HMaster$5(2228): Client=jenkins//172.31.14.131 delete TestQuotaAdmin0 2023-11-27 05:06:49,769 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33323] procedure2.ProcedureExecutor(1028): Stored pid=39, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=TestQuotaAdmin0 2023-11-27 05:06:49,771 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=39, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=TestQuotaAdmin0 2023-11-27 05:06:49,773 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=39, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=TestQuotaAdmin0 2023-11-27 05:06:49,777 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33323] master.MasterRpcServices(1230): Checking to see if procedure is done 
pid=39 2023-11-27 05:06:49,779 DEBUG [HFileArchiver-8] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:41015/user/jenkins/test-data/88c861d8-82fa-b545-1769-68eb426e44d0/.tmp/data/default/TestQuotaAdmin0/44ac23936652c71f70e8746cf757ab6d 2023-11-27 05:06:49,782 DEBUG [HFileArchiver-8] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:41015/user/jenkins/test-data/88c861d8-82fa-b545-1769-68eb426e44d0/.tmp/data/default/TestQuotaAdmin0/44ac23936652c71f70e8746cf757ab6d/cf, FileablePath, hdfs://localhost:41015/user/jenkins/test-data/88c861d8-82fa-b545-1769-68eb426e44d0/.tmp/data/default/TestQuotaAdmin0/44ac23936652c71f70e8746cf757ab6d/recovered.edits] 2023-11-27 05:06:49,790 DEBUG [HFileArchiver-8] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:41015/user/jenkins/test-data/88c861d8-82fa-b545-1769-68eb426e44d0/.tmp/data/default/TestQuotaAdmin0/44ac23936652c71f70e8746cf757ab6d/cf/0dfcfdc2ec834268aae732a3ba7f025a to hdfs://localhost:41015/user/jenkins/test-data/88c861d8-82fa-b545-1769-68eb426e44d0/archive/data/default/TestQuotaAdmin0/44ac23936652c71f70e8746cf757ab6d/cf/0dfcfdc2ec834268aae732a3ba7f025a 2023-11-27 05:06:49,791 DEBUG [HFileArchiver-8] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:41015/user/jenkins/test-data/88c861d8-82fa-b545-1769-68eb426e44d0/.tmp/data/default/TestQuotaAdmin0/44ac23936652c71f70e8746cf757ab6d/cf/5042d4b5c8be4fddb4c5501c4c2135cd to hdfs://localhost:41015/user/jenkins/test-data/88c861d8-82fa-b545-1769-68eb426e44d0/archive/data/default/TestQuotaAdmin0/44ac23936652c71f70e8746cf757ab6d/cf/5042d4b5c8be4fddb4c5501c4c2135cd 2023-11-27 05:06:49,795 DEBUG [HFileArchiver-8] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:41015/user/jenkins/test-data/88c861d8-82fa-b545-1769-68eb426e44d0/.tmp/data/default/TestQuotaAdmin0/44ac23936652c71f70e8746cf757ab6d/recovered.edits/27.seqid to hdfs://localhost:41015/user/jenkins/test-data/88c861d8-82fa-b545-1769-68eb426e44d0/archive/data/default/TestQuotaAdmin0/44ac23936652c71f70e8746cf757ab6d/recovered.edits/27.seqid 2023-11-27 05:06:49,796 DEBUG [HFileArchiver-8] backup.HFileArchiver(596): Deleted hdfs://localhost:41015/user/jenkins/test-data/88c861d8-82fa-b545-1769-68eb426e44d0/.tmp/data/default/TestQuotaAdmin0/44ac23936652c71f70e8746cf757ab6d 2023-11-27 05:06:49,796 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived TestQuotaAdmin0 regions 2023-11-27 05:06:49,799 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=39, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=TestQuotaAdmin0 2023-11-27 05:06:49,806 WARN [PEWorker-2] procedure.DeleteTableProcedure(384): Deleting some vestigial 1 rows of TestQuotaAdmin0 from hbase:meta 2023-11-27 05:06:49,808 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(421): Removing 'TestQuotaAdmin0' descriptor. 2023-11-27 05:06:49,809 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=39, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=TestQuotaAdmin0 2023-11-27 05:06:49,809 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(411): Removing 'TestQuotaAdmin0' from region states. 
2023-11-27 05:06:49,809 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"TestQuotaAdmin0,,1701061179762.44ac23936652c71f70e8746cf757ab6d.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1701061609809"}]},"ts":"9223372036854775807"} 2023-11-27 05:06:49,811 INFO [PEWorker-2] hbase.MetaTableAccessor(1788): Deleted 1 regions from META 2023-11-27 05:06:49,811 DEBUG [PEWorker-2] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => 44ac23936652c71f70e8746cf757ab6d, NAME => 'TestQuotaAdmin0,,1701061179762.44ac23936652c71f70e8746cf757ab6d.', STARTKEY => '', ENDKEY => ''}] 2023-11-27 05:06:49,811 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(415): Marking 'TestQuotaAdmin0' as deleted. 2023-11-27 05:06:49,811 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"TestQuotaAdmin0","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1701061609811"}]},"ts":"9223372036854775807"} 2023-11-27 05:06:49,813 INFO [PEWorker-2] hbase.MetaTableAccessor(1658): Deleted table TestQuotaAdmin0 state from META 2023-11-27 05:06:49,816 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(130): Finished pid=39, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=TestQuotaAdmin0 2023-11-27 05:06:49,818 INFO [PEWorker-2] procedure2.ProcedureExecutor(1409): Finished pid=39, state=SUCCESS; DeleteTableProcedure table=TestQuotaAdmin0 in 51 msec 2023-11-27 05:06:50,028 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33323] master.MasterRpcServices(1230): Checking to see if procedure is done pid=39 2023-11-27 05:06:50,029 INFO [Listener at localhost/34689] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: default:TestQuotaAdmin0, procId: 39 completed 2023-11-27 05:06:50,029 INFO [Listener at localhost/34689] client.HBaseAdmin$15(890): Started disable of TestQuotaAdmin1 2023-11-27 05:06:50,029 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33323] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable TestQuotaAdmin1 2023-11-27 05:06:50,030 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33323] procedure2.ProcedureExecutor(1028): Stored pid=40, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=TestQuotaAdmin1 2023-11-27 05:06:50,033 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33323] master.MasterRpcServices(1230): Checking to see if procedure is done pid=40 2023-11-27 05:06:50,033 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"TestQuotaAdmin1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1701061610033"}]},"ts":"1701061610033"} 2023-11-27 05:06:50,034 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=TestQuotaAdmin1, state=DISABLING in hbase:meta 2023-11-27 05:06:50,037 INFO [PEWorker-4] procedure.DisableTableProcedure(293): Set TestQuotaAdmin1 to state=DISABLING 2023-11-27 05:06:50,037 INFO [PEWorker-4] procedure2.ProcedureExecutor(1680): Initialized subprocedures=[{pid=41, ppid=40, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=TestQuotaAdmin1, region=2d48041eaba6bc404a22a735fb3000dd, UNASSIGN}] 2023-11-27 05:06:50,039 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=41, ppid=40, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=TestQuotaAdmin1, region=2d48041eaba6bc404a22a735fb3000dd, UNASSIGN 2023-11-27 05:06:50,040 INFO [PEWorker-1] 
assignment.RegionStateStore(219): pid=41 updating hbase:meta row=2d48041eaba6bc404a22a735fb3000dd, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,41853,1701061176279 2023-11-27 05:06:50,040 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"TestQuotaAdmin1,,1701061180563.2d48041eaba6bc404a22a735fb3000dd.","families":{"info":[{"qualifier":"regioninfo","vlen":49,"tag":[],"timestamp":"1701061610040"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1701061610040"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1701061610040"}]},"ts":"1701061610040"} 2023-11-27 05:06:50,041 INFO [PEWorker-1] procedure2.ProcedureExecutor(1680): Initialized subprocedures=[{pid=42, ppid=41, state=RUNNABLE; CloseRegionProcedure 2d48041eaba6bc404a22a735fb3000dd, server=jenkins-hbase4.apache.org,41853,1701061176279}] 2023-11-27 05:06:50,193 DEBUG [RSProcedureDispatcher-pool-7] master.ServerManager(702): New admin connection to jenkins-hbase4.apache.org,41853,1701061176279 2023-11-27 05:06:50,195 INFO [RS-EventLoopGroup-3-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:36524, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-11-27 05:06:50,195 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 2d48041eaba6bc404a22a735fb3000dd 2023-11-27 05:06:50,196 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 2d48041eaba6bc404a22a735fb3000dd, disabling compactions & flushes 2023-11-27 05:06:50,196 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region TestQuotaAdmin1,,1701061180563.2d48041eaba6bc404a22a735fb3000dd. 2023-11-27 05:06:50,196 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on TestQuotaAdmin1,,1701061180563.2d48041eaba6bc404a22a735fb3000dd. 2023-11-27 05:06:50,196 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on TestQuotaAdmin1,,1701061180563.2d48041eaba6bc404a22a735fb3000dd. after waiting 0 ms 2023-11-27 05:06:50,196 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region TestQuotaAdmin1,,1701061180563.2d48041eaba6bc404a22a735fb3000dd. 2023-11-27 05:06:50,204 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41015/user/jenkins/test-data/88c861d8-82fa-b545-1769-68eb426e44d0/data/default/TestQuotaAdmin1/2d48041eaba6bc404a22a735fb3000dd/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-11-27 05:06:50,205 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed TestQuotaAdmin1,,1701061180563.2d48041eaba6bc404a22a735fb3000dd. 
2023-11-27 05:06:50,205 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 2d48041eaba6bc404a22a735fb3000dd: 2023-11-27 05:06:50,206 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 2d48041eaba6bc404a22a735fb3000dd 2023-11-27 05:06:50,207 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=41 updating hbase:meta row=2d48041eaba6bc404a22a735fb3000dd, regionState=CLOSED 2023-11-27 05:06:50,207 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"TestQuotaAdmin1,,1701061180563.2d48041eaba6bc404a22a735fb3000dd.","families":{"info":[{"qualifier":"regioninfo","vlen":49,"tag":[],"timestamp":"1701061610207"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1701061610207"}]},"ts":"1701061610207"} 2023-11-27 05:06:50,211 INFO [PEWorker-3] procedure2.ProcedureExecutor(1825): Finished subprocedure pid=42, resume processing ppid=41 2023-11-27 05:06:50,211 INFO [PEWorker-3] procedure2.ProcedureExecutor(1409): Finished pid=42, ppid=41, state=SUCCESS; CloseRegionProcedure 2d48041eaba6bc404a22a735fb3000dd, server=jenkins-hbase4.apache.org,41853,1701061176279 in 168 msec 2023-11-27 05:06:50,212 INFO [PEWorker-2] procedure2.ProcedureExecutor(1825): Finished subprocedure pid=41, resume processing ppid=40 2023-11-27 05:06:50,213 INFO [PEWorker-2] procedure2.ProcedureExecutor(1409): Finished pid=41, ppid=40, state=SUCCESS; TransitRegionStateProcedure table=TestQuotaAdmin1, region=2d48041eaba6bc404a22a735fb3000dd, UNASSIGN in 174 msec 2023-11-27 05:06:50,213 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"TestQuotaAdmin1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1701061610213"}]},"ts":"1701061610213"} 2023-11-27 05:06:50,214 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=TestQuotaAdmin1, state=DISABLED in hbase:meta 2023-11-27 05:06:50,216 INFO [PEWorker-4] procedure.DisableTableProcedure(305): Set TestQuotaAdmin1 to state=DISABLED 2023-11-27 05:06:50,218 INFO [PEWorker-4] procedure2.ProcedureExecutor(1409): Finished pid=40, state=SUCCESS; DisableTableProcedure table=TestQuotaAdmin1 in 188 msec 2023-11-27 05:06:50,283 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33323] master.MasterRpcServices(1230): Checking to see if procedure is done pid=40 2023-11-27 05:06:50,284 INFO [Listener at localhost/34689] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:TestQuotaAdmin1, procId: 40 completed 2023-11-27 05:06:50,284 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33323] master.HMaster$5(2228): Client=jenkins//172.31.14.131 delete TestQuotaAdmin1 2023-11-27 05:06:50,285 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33323] procedure2.ProcedureExecutor(1028): Stored pid=43, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=TestQuotaAdmin1 2023-11-27 05:06:50,287 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=43, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=TestQuotaAdmin1 2023-11-27 05:06:50,287 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=43, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=TestQuotaAdmin1 2023-11-27 05:06:50,288 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33323] master.MasterRpcServices(1230): Checking to see if procedure is done 
pid=43 2023-11-27 05:06:50,293 DEBUG [HFileArchiver-9] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:41015/user/jenkins/test-data/88c861d8-82fa-b545-1769-68eb426e44d0/.tmp/data/default/TestQuotaAdmin1/2d48041eaba6bc404a22a735fb3000dd 2023-11-27 05:06:50,295 DEBUG [HFileArchiver-9] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:41015/user/jenkins/test-data/88c861d8-82fa-b545-1769-68eb426e44d0/.tmp/data/default/TestQuotaAdmin1/2d48041eaba6bc404a22a735fb3000dd/cf, FileablePath, hdfs://localhost:41015/user/jenkins/test-data/88c861d8-82fa-b545-1769-68eb426e44d0/.tmp/data/default/TestQuotaAdmin1/2d48041eaba6bc404a22a735fb3000dd/recovered.edits] 2023-11-27 05:06:50,300 DEBUG [HFileArchiver-9] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:41015/user/jenkins/test-data/88c861d8-82fa-b545-1769-68eb426e44d0/.tmp/data/default/TestQuotaAdmin1/2d48041eaba6bc404a22a735fb3000dd/recovered.edits/7.seqid to hdfs://localhost:41015/user/jenkins/test-data/88c861d8-82fa-b545-1769-68eb426e44d0/archive/data/default/TestQuotaAdmin1/2d48041eaba6bc404a22a735fb3000dd/recovered.edits/7.seqid 2023-11-27 05:06:50,301 DEBUG [HFileArchiver-9] backup.HFileArchiver(596): Deleted hdfs://localhost:41015/user/jenkins/test-data/88c861d8-82fa-b545-1769-68eb426e44d0/.tmp/data/default/TestQuotaAdmin1/2d48041eaba6bc404a22a735fb3000dd 2023-11-27 05:06:50,301 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(328): Archived TestQuotaAdmin1 regions 2023-11-27 05:06:50,304 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=43, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=TestQuotaAdmin1 2023-11-27 05:06:50,306 WARN [PEWorker-1] procedure.DeleteTableProcedure(384): Deleting some vestigial 1 rows of TestQuotaAdmin1 from hbase:meta 2023-11-27 05:06:50,307 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(421): Removing 'TestQuotaAdmin1' descriptor. 2023-11-27 05:06:50,308 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=43, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=TestQuotaAdmin1 2023-11-27 05:06:50,308 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(411): Removing 'TestQuotaAdmin1' from region states. 2023-11-27 05:06:50,309 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"TestQuotaAdmin1,,1701061180563.2d48041eaba6bc404a22a735fb3000dd.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1701061610308"}]},"ts":"9223372036854775807"} 2023-11-27 05:06:50,310 INFO [PEWorker-1] hbase.MetaTableAccessor(1788): Deleted 1 regions from META 2023-11-27 05:06:50,310 DEBUG [PEWorker-1] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => 2d48041eaba6bc404a22a735fb3000dd, NAME => 'TestQuotaAdmin1,,1701061180563.2d48041eaba6bc404a22a735fb3000dd.', STARTKEY => '', ENDKEY => ''}] 2023-11-27 05:06:50,310 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(415): Marking 'TestQuotaAdmin1' as deleted. 
2023-11-27 05:06:50,310 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"TestQuotaAdmin1","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1701061610310"}]},"ts":"9223372036854775807"} 2023-11-27 05:06:50,311 INFO [PEWorker-1] hbase.MetaTableAccessor(1658): Deleted table TestQuotaAdmin1 state from META 2023-11-27 05:06:50,313 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(130): Finished pid=43, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=TestQuotaAdmin1 2023-11-27 05:06:50,314 INFO [PEWorker-1] procedure2.ProcedureExecutor(1409): Finished pid=43, state=SUCCESS; DeleteTableProcedure table=TestQuotaAdmin1 in 29 msec 2023-11-27 05:06:50,539 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33323] master.MasterRpcServices(1230): Checking to see if procedure is done pid=43 2023-11-27 05:06:50,539 INFO [Listener at localhost/34689] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: default:TestQuotaAdmin1, procId: 43 completed 2023-11-27 05:06:50,539 INFO [Listener at localhost/34689] client.HBaseAdmin$15(890): Started disable of TestQuotaAdmin2 2023-11-27 05:06:50,540 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33323] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable TestQuotaAdmin2 2023-11-27 05:06:50,541 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33323] procedure2.ProcedureExecutor(1028): Stored pid=44, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=TestQuotaAdmin2 2023-11-27 05:06:50,543 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33323] master.MasterRpcServices(1230): Checking to see if procedure is done pid=44 2023-11-27 05:06:50,543 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"TestQuotaAdmin2","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1701061610543"}]},"ts":"1701061610543"} 2023-11-27 05:06:50,545 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=TestQuotaAdmin2, state=DISABLING in hbase:meta 2023-11-27 05:06:50,550 INFO [PEWorker-5] procedure.DisableTableProcedure(293): Set TestQuotaAdmin2 to state=DISABLING 2023-11-27 05:06:50,551 INFO [PEWorker-5] procedure2.ProcedureExecutor(1680): Initialized subprocedures=[{pid=45, ppid=44, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=TestQuotaAdmin2, region=84bdb1fdf146da2514bc4d0d11f47654, UNASSIGN}] 2023-11-27 05:06:50,552 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=45, ppid=44, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=TestQuotaAdmin2, region=84bdb1fdf146da2514bc4d0d11f47654, UNASSIGN 2023-11-27 05:06:50,553 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=45 updating hbase:meta row=84bdb1fdf146da2514bc4d0d11f47654, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,41841,1701061176322 2023-11-27 05:06:50,553 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"TestQuotaAdmin2,,1701061181338.84bdb1fdf146da2514bc4d0d11f47654.","families":{"info":[{"qualifier":"regioninfo","vlen":49,"tag":[],"timestamp":"1701061610553"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1701061610553"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1701061610553"}]},"ts":"1701061610553"} 2023-11-27 05:06:50,555 INFO [PEWorker-3] procedure2.ProcedureExecutor(1680): Initialized subprocedures=[{pid=46, ppid=45, state=RUNNABLE; 
CloseRegionProcedure 84bdb1fdf146da2514bc4d0d11f47654, server=jenkins-hbase4.apache.org,41841,1701061176322}] 2023-11-27 05:06:50,707 DEBUG [RSProcedureDispatcher-pool-8] master.ServerManager(702): New admin connection to jenkins-hbase4.apache.org,41841,1701061176322 2023-11-27 05:06:50,707 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 84bdb1fdf146da2514bc4d0d11f47654 2023-11-27 05:06:50,708 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 84bdb1fdf146da2514bc4d0d11f47654, disabling compactions & flushes 2023-11-27 05:06:50,708 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region TestQuotaAdmin2,,1701061181338.84bdb1fdf146da2514bc4d0d11f47654. 2023-11-27 05:06:50,708 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on TestQuotaAdmin2,,1701061181338.84bdb1fdf146da2514bc4d0d11f47654. 2023-11-27 05:06:50,708 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on TestQuotaAdmin2,,1701061181338.84bdb1fdf146da2514bc4d0d11f47654. after waiting 0 ms 2023-11-27 05:06:50,708 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region TestQuotaAdmin2,,1701061181338.84bdb1fdf146da2514bc4d0d11f47654. 2023-11-27 05:06:50,712 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41015/user/jenkins/test-data/88c861d8-82fa-b545-1769-68eb426e44d0/data/default/TestQuotaAdmin2/84bdb1fdf146da2514bc4d0d11f47654/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-11-27 05:06:50,714 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed TestQuotaAdmin2,,1701061181338.84bdb1fdf146da2514bc4d0d11f47654. 
2023-11-27 05:06:50,714 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 84bdb1fdf146da2514bc4d0d11f47654: 2023-11-27 05:06:50,715 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 84bdb1fdf146da2514bc4d0d11f47654 2023-11-27 05:06:50,716 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=45 updating hbase:meta row=84bdb1fdf146da2514bc4d0d11f47654, regionState=CLOSED 2023-11-27 05:06:50,716 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"TestQuotaAdmin2,,1701061181338.84bdb1fdf146da2514bc4d0d11f47654.","families":{"info":[{"qualifier":"regioninfo","vlen":49,"tag":[],"timestamp":"1701061610716"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1701061610716"}]},"ts":"1701061610716"} 2023-11-27 05:06:50,719 INFO [PEWorker-4] procedure2.ProcedureExecutor(1825): Finished subprocedure pid=46, resume processing ppid=45 2023-11-27 05:06:50,719 INFO [PEWorker-4] procedure2.ProcedureExecutor(1409): Finished pid=46, ppid=45, state=SUCCESS; CloseRegionProcedure 84bdb1fdf146da2514bc4d0d11f47654, server=jenkins-hbase4.apache.org,41841,1701061176322 in 162 msec 2023-11-27 05:06:50,721 INFO [PEWorker-1] procedure2.ProcedureExecutor(1825): Finished subprocedure pid=45, resume processing ppid=44 2023-11-27 05:06:50,721 INFO [PEWorker-1] procedure2.ProcedureExecutor(1409): Finished pid=45, ppid=44, state=SUCCESS; TransitRegionStateProcedure table=TestQuotaAdmin2, region=84bdb1fdf146da2514bc4d0d11f47654, UNASSIGN in 168 msec 2023-11-27 05:06:50,722 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"TestQuotaAdmin2","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1701061610722"}]},"ts":"1701061610722"} 2023-11-27 05:06:50,723 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=TestQuotaAdmin2, state=DISABLED in hbase:meta 2023-11-27 05:06:50,725 INFO [PEWorker-5] procedure.DisableTableProcedure(305): Set TestQuotaAdmin2 to state=DISABLED 2023-11-27 05:06:50,727 INFO [PEWorker-5] procedure2.ProcedureExecutor(1409): Finished pid=44, state=SUCCESS; DisableTableProcedure table=TestQuotaAdmin2 in 186 msec 2023-11-27 05:06:50,794 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33323] master.MasterRpcServices(1230): Checking to see if procedure is done pid=44 2023-11-27 05:06:50,794 INFO [Listener at localhost/34689] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:TestQuotaAdmin2, procId: 44 completed 2023-11-27 05:06:50,795 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33323] master.HMaster$5(2228): Client=jenkins//172.31.14.131 delete TestQuotaAdmin2 2023-11-27 05:06:50,795 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33323] procedure2.ProcedureExecutor(1028): Stored pid=47, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=TestQuotaAdmin2 2023-11-27 05:06:50,797 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=47, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=TestQuotaAdmin2 2023-11-27 05:06:50,797 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=47, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=TestQuotaAdmin2 2023-11-27 05:06:50,798 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33323] master.MasterRpcServices(1230): Checking to see if procedure is done 
pid=47 2023-11-27 05:06:50,801 DEBUG [HFileArchiver-10] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:41015/user/jenkins/test-data/88c861d8-82fa-b545-1769-68eb426e44d0/.tmp/data/default/TestQuotaAdmin2/84bdb1fdf146da2514bc4d0d11f47654 2023-11-27 05:06:50,803 DEBUG [HFileArchiver-10] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:41015/user/jenkins/test-data/88c861d8-82fa-b545-1769-68eb426e44d0/.tmp/data/default/TestQuotaAdmin2/84bdb1fdf146da2514bc4d0d11f47654/cf, FileablePath, hdfs://localhost:41015/user/jenkins/test-data/88c861d8-82fa-b545-1769-68eb426e44d0/.tmp/data/default/TestQuotaAdmin2/84bdb1fdf146da2514bc4d0d11f47654/recovered.edits] 2023-11-27 05:06:50,809 DEBUG [HFileArchiver-10] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:41015/user/jenkins/test-data/88c861d8-82fa-b545-1769-68eb426e44d0/.tmp/data/default/TestQuotaAdmin2/84bdb1fdf146da2514bc4d0d11f47654/recovered.edits/4.seqid to hdfs://localhost:41015/user/jenkins/test-data/88c861d8-82fa-b545-1769-68eb426e44d0/archive/data/default/TestQuotaAdmin2/84bdb1fdf146da2514bc4d0d11f47654/recovered.edits/4.seqid 2023-11-27 05:06:50,809 DEBUG [HFileArchiver-10] backup.HFileArchiver(596): Deleted hdfs://localhost:41015/user/jenkins/test-data/88c861d8-82fa-b545-1769-68eb426e44d0/.tmp/data/default/TestQuotaAdmin2/84bdb1fdf146da2514bc4d0d11f47654 2023-11-27 05:06:50,809 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived TestQuotaAdmin2 regions 2023-11-27 05:06:50,811 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=47, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=TestQuotaAdmin2 2023-11-27 05:06:50,813 WARN [PEWorker-3] procedure.DeleteTableProcedure(384): Deleting some vestigial 1 rows of TestQuotaAdmin2 from hbase:meta 2023-11-27 05:06:50,815 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(421): Removing 'TestQuotaAdmin2' descriptor. 2023-11-27 05:06:50,816 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=47, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=TestQuotaAdmin2 2023-11-27 05:06:50,816 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(411): Removing 'TestQuotaAdmin2' from region states. 2023-11-27 05:06:50,816 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"TestQuotaAdmin2,,1701061181338.84bdb1fdf146da2514bc4d0d11f47654.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1701061610816"}]},"ts":"9223372036854775807"} 2023-11-27 05:06:50,817 INFO [PEWorker-3] hbase.MetaTableAccessor(1788): Deleted 1 regions from META 2023-11-27 05:06:50,817 DEBUG [PEWorker-3] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => 84bdb1fdf146da2514bc4d0d11f47654, NAME => 'TestQuotaAdmin2,,1701061181338.84bdb1fdf146da2514bc4d0d11f47654.', STARTKEY => '', ENDKEY => ''}] 2023-11-27 05:06:50,817 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(415): Marking 'TestQuotaAdmin2' as deleted. 
2023-11-27 05:06:50,817 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"TestQuotaAdmin2","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1701061610817"}]},"ts":"9223372036854775807"} 2023-11-27 05:06:50,818 INFO [PEWorker-3] hbase.MetaTableAccessor(1658): Deleted table TestQuotaAdmin2 state from META 2023-11-27 05:06:50,820 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(130): Finished pid=47, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=TestQuotaAdmin2 2023-11-27 05:06:50,821 INFO [PEWorker-3] procedure2.ProcedureExecutor(1409): Finished pid=47, state=SUCCESS; DeleteTableProcedure table=TestQuotaAdmin2 in 25 msec 2023-11-27 05:06:51,050 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33323] master.MasterRpcServices(1230): Checking to see if procedure is done pid=47 2023-11-27 05:06:51,050 INFO [Listener at localhost/34689] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: default:TestQuotaAdmin2, procId: 47 completed 2023-11-27 05:06:51,050 INFO [Listener at localhost/34689] client.HBaseAdmin$15(890): Started disable of TestNs:TestTable 2023-11-27 05:06:51,050 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33323] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable TestNs:TestTable 2023-11-27 05:06:51,051 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33323] procedure2.ProcedureExecutor(1028): Stored pid=48, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=TestNs:TestTable 2023-11-27 05:06:51,053 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33323] master.MasterRpcServices(1230): Checking to see if procedure is done pid=48 2023-11-27 05:06:51,054 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"TestNs:TestTable","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1701061611054"}]},"ts":"1701061611054"} 2023-11-27 05:06:51,055 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=TestNs:TestTable, state=DISABLING in hbase:meta 2023-11-27 05:06:51,057 INFO [PEWorker-2] procedure.DisableTableProcedure(293): Set TestNs:TestTable to state=DISABLING 2023-11-27 05:06:51,058 INFO [PEWorker-2] procedure2.ProcedureExecutor(1680): Initialized subprocedures=[{pid=49, ppid=48, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=TestNs:TestTable, region=af1d13366c8d51157b132094b9c56138, UNASSIGN}, {pid=50, ppid=48, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=TestNs:TestTable, region=7197a56a05fdba581e9677273ff1da17, UNASSIGN}] 2023-11-27 05:06:51,059 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=49, ppid=48, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=TestNs:TestTable, region=af1d13366c8d51157b132094b9c56138, UNASSIGN 2023-11-27 05:06:51,059 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=50, ppid=48, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=TestNs:TestTable, region=7197a56a05fdba581e9677273ff1da17, UNASSIGN 2023-11-27 05:06:51,060 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=49 updating hbase:meta row=af1d13366c8d51157b132094b9c56138, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,41841,1701061176322 2023-11-27 05:06:51,060 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=50 updating hbase:meta 
row=7197a56a05fdba581e9677273ff1da17, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,41853,1701061176279 2023-11-27 05:06:51,060 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"TestNs:TestTable,,1701061183127.af1d13366c8d51157b132094b9c56138.","families":{"info":[{"qualifier":"regioninfo","vlen":43,"tag":[],"timestamp":"1701061611060"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1701061611060"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1701061611060"}]},"ts":"1701061611060"} 2023-11-27 05:06:51,060 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"TestNs:TestTable,1,1701061183127.7197a56a05fdba581e9677273ff1da17.","families":{"info":[{"qualifier":"regioninfo","vlen":43,"tag":[],"timestamp":"1701061611060"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1701061611060"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1701061611060"}]},"ts":"1701061611060"} 2023-11-27 05:06:51,061 INFO [PEWorker-2] procedure2.ProcedureExecutor(1680): Initialized subprocedures=[{pid=51, ppid=49, state=RUNNABLE; CloseRegionProcedure af1d13366c8d51157b132094b9c56138, server=jenkins-hbase4.apache.org,41841,1701061176322}] 2023-11-27 05:06:51,062 INFO [PEWorker-4] procedure2.ProcedureExecutor(1680): Initialized subprocedures=[{pid=52, ppid=50, state=RUNNABLE; CloseRegionProcedure 7197a56a05fdba581e9677273ff1da17, server=jenkins-hbase4.apache.org,41853,1701061176279}] 2023-11-27 05:06:51,213 DEBUG [RSProcedureDispatcher-pool-6] master.ServerManager(702): New admin connection to jenkins-hbase4.apache.org,41841,1701061176322 2023-11-27 05:06:51,213 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close af1d13366c8d51157b132094b9c56138 2023-11-27 05:06:51,214 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing af1d13366c8d51157b132094b9c56138, disabling compactions & flushes 2023-11-27 05:06:51,214 DEBUG [RSProcedureDispatcher-pool-7] master.ServerManager(702): New admin connection to jenkins-hbase4.apache.org,41853,1701061176279 2023-11-27 05:06:51,214 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region TestNs:TestTable,,1701061183127.af1d13366c8d51157b132094b9c56138. 2023-11-27 05:06:51,214 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on TestNs:TestTable,,1701061183127.af1d13366c8d51157b132094b9c56138. 2023-11-27 05:06:51,214 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on TestNs:TestTable,,1701061183127.af1d13366c8d51157b132094b9c56138. after waiting 0 ms 2023-11-27 05:06:51,214 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region TestNs:TestTable,,1701061183127.af1d13366c8d51157b132094b9c56138. 2023-11-27 05:06:51,215 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 7197a56a05fdba581e9677273ff1da17 2023-11-27 05:06:51,215 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 7197a56a05fdba581e9677273ff1da17, disabling compactions & flushes 2023-11-27 05:06:51,215 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region TestNs:TestTable,1,1701061183127.7197a56a05fdba581e9677273ff1da17. 
2023-11-27 05:06:51,215 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on TestNs:TestTable,1,1701061183127.7197a56a05fdba581e9677273ff1da17. 2023-11-27 05:06:51,215 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on TestNs:TestTable,1,1701061183127.7197a56a05fdba581e9677273ff1da17. after waiting 0 ms 2023-11-27 05:06:51,215 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region TestNs:TestTable,1,1701061183127.7197a56a05fdba581e9677273ff1da17. 2023-11-27 05:06:51,219 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41015/user/jenkins/test-data/88c861d8-82fa-b545-1769-68eb426e44d0/data/TestNs/TestTable/af1d13366c8d51157b132094b9c56138/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-11-27 05:06:51,219 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41015/user/jenkins/test-data/88c861d8-82fa-b545-1769-68eb426e44d0/data/TestNs/TestTable/7197a56a05fdba581e9677273ff1da17/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-11-27 05:06:51,220 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed TestNs:TestTable,,1701061183127.af1d13366c8d51157b132094b9c56138. 2023-11-27 05:06:51,221 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed TestNs:TestTable,1,1701061183127.7197a56a05fdba581e9677273ff1da17. 2023-11-27 05:06:51,221 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 7197a56a05fdba581e9677273ff1da17: 2023-11-27 05:06:51,221 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for af1d13366c8d51157b132094b9c56138: 2023-11-27 05:06:51,222 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed af1d13366c8d51157b132094b9c56138 2023-11-27 05:06:51,223 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=49 updating hbase:meta row=af1d13366c8d51157b132094b9c56138, regionState=CLOSED 2023-11-27 05:06:51,223 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"TestNs:TestTable,,1701061183127.af1d13366c8d51157b132094b9c56138.","families":{"info":[{"qualifier":"regioninfo","vlen":43,"tag":[],"timestamp":"1701061611223"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1701061611223"}]},"ts":"1701061611223"} 2023-11-27 05:06:51,223 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 7197a56a05fdba581e9677273ff1da17 2023-11-27 05:06:51,223 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=50 updating hbase:meta row=7197a56a05fdba581e9677273ff1da17, regionState=CLOSED 2023-11-27 05:06:51,224 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"TestNs:TestTable,1,1701061183127.7197a56a05fdba581e9677273ff1da17.","families":{"info":[{"qualifier":"regioninfo","vlen":43,"tag":[],"timestamp":"1701061611223"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1701061611223"}]},"ts":"1701061611223"} 2023-11-27 05:06:51,226 INFO [PEWorker-1] procedure2.ProcedureExecutor(1825): Finished subprocedure pid=51, resume processing ppid=49 2023-11-27 05:06:51,226 INFO [PEWorker-1] procedure2.ProcedureExecutor(1409): Finished pid=51, ppid=49, state=SUCCESS; 
CloseRegionProcedure af1d13366c8d51157b132094b9c56138, server=jenkins-hbase4.apache.org,41841,1701061176322 in 163 msec 2023-11-27 05:06:51,226 INFO [PEWorker-2] procedure2.ProcedureExecutor(1825): Finished subprocedure pid=52, resume processing ppid=50 2023-11-27 05:06:51,227 INFO [PEWorker-4] procedure2.ProcedureExecutor(1409): Finished pid=49, ppid=48, state=SUCCESS; TransitRegionStateProcedure table=TestNs:TestTable, region=af1d13366c8d51157b132094b9c56138, UNASSIGN in 168 msec 2023-11-27 05:06:51,227 INFO [PEWorker-2] procedure2.ProcedureExecutor(1409): Finished pid=52, ppid=50, state=SUCCESS; CloseRegionProcedure 7197a56a05fdba581e9677273ff1da17, server=jenkins-hbase4.apache.org,41853,1701061176279 in 163 msec 2023-11-27 05:06:51,228 INFO [PEWorker-5] procedure2.ProcedureExecutor(1825): Finished subprocedure pid=50, resume processing ppid=48 2023-11-27 05:06:51,228 INFO [PEWorker-5] procedure2.ProcedureExecutor(1409): Finished pid=50, ppid=48, state=SUCCESS; TransitRegionStateProcedure table=TestNs:TestTable, region=7197a56a05fdba581e9677273ff1da17, UNASSIGN in 168 msec 2023-11-27 05:06:51,229 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"TestNs:TestTable","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1701061611229"}]},"ts":"1701061611229"} 2023-11-27 05:06:51,230 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=TestNs:TestTable, state=DISABLED in hbase:meta 2023-11-27 05:06:51,232 INFO [PEWorker-3] procedure.DisableTableProcedure(305): Set TestNs:TestTable to state=DISABLED 2023-11-27 05:06:51,233 INFO [PEWorker-3] procedure2.ProcedureExecutor(1409): Finished pid=48, state=SUCCESS; DisableTableProcedure table=TestNs:TestTable in 181 msec 2023-11-27 05:06:51,304 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33323] master.MasterRpcServices(1230): Checking to see if procedure is done pid=48 2023-11-27 05:06:51,304 INFO [Listener at localhost/34689] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: TestNs:TestTable, procId: 48 completed 2023-11-27 05:06:51,305 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33323] master.HMaster$5(2228): Client=jenkins//172.31.14.131 delete TestNs:TestTable 2023-11-27 05:06:51,306 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33323] procedure2.ProcedureExecutor(1028): Stored pid=53, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=TestNs:TestTable 2023-11-27 05:06:51,307 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=53, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=TestNs:TestTable 2023-11-27 05:06:51,308 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=53, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=TestNs:TestTable 2023-11-27 05:06:51,309 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33323] master.MasterRpcServices(1230): Checking to see if procedure is done pid=53 2023-11-27 05:06:51,312 DEBUG [HFileArchiver-11] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:41015/user/jenkins/test-data/88c861d8-82fa-b545-1769-68eb426e44d0/.tmp/data/TestNs/TestTable/af1d13366c8d51157b132094b9c56138 2023-11-27 05:06:51,312 DEBUG [HFileArchiver-12] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:41015/user/jenkins/test-data/88c861d8-82fa-b545-1769-68eb426e44d0/.tmp/data/TestNs/TestTable/7197a56a05fdba581e9677273ff1da17 2023-11-27 
05:06:51,314 DEBUG [HFileArchiver-11] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:41015/user/jenkins/test-data/88c861d8-82fa-b545-1769-68eb426e44d0/.tmp/data/TestNs/TestTable/af1d13366c8d51157b132094b9c56138/cf, FileablePath, hdfs://localhost:41015/user/jenkins/test-data/88c861d8-82fa-b545-1769-68eb426e44d0/.tmp/data/TestNs/TestTable/af1d13366c8d51157b132094b9c56138/recovered.edits] 2023-11-27 05:06:51,314 DEBUG [HFileArchiver-12] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:41015/user/jenkins/test-data/88c861d8-82fa-b545-1769-68eb426e44d0/.tmp/data/TestNs/TestTable/7197a56a05fdba581e9677273ff1da17/cf, FileablePath, hdfs://localhost:41015/user/jenkins/test-data/88c861d8-82fa-b545-1769-68eb426e44d0/.tmp/data/TestNs/TestTable/7197a56a05fdba581e9677273ff1da17/recovered.edits] 2023-11-27 05:06:51,319 DEBUG [HFileArchiver-11] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:41015/user/jenkins/test-data/88c861d8-82fa-b545-1769-68eb426e44d0/.tmp/data/TestNs/TestTable/af1d13366c8d51157b132094b9c56138/recovered.edits/4.seqid to hdfs://localhost:41015/user/jenkins/test-data/88c861d8-82fa-b545-1769-68eb426e44d0/archive/data/TestNs/TestTable/af1d13366c8d51157b132094b9c56138/recovered.edits/4.seqid 2023-11-27 05:06:51,320 DEBUG [HFileArchiver-12] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:41015/user/jenkins/test-data/88c861d8-82fa-b545-1769-68eb426e44d0/.tmp/data/TestNs/TestTable/7197a56a05fdba581e9677273ff1da17/recovered.edits/4.seqid to hdfs://localhost:41015/user/jenkins/test-data/88c861d8-82fa-b545-1769-68eb426e44d0/archive/data/TestNs/TestTable/7197a56a05fdba581e9677273ff1da17/recovered.edits/4.seqid 2023-11-27 05:06:51,320 DEBUG [HFileArchiver-11] backup.HFileArchiver(596): Deleted hdfs://localhost:41015/user/jenkins/test-data/88c861d8-82fa-b545-1769-68eb426e44d0/.tmp/data/TestNs/TestTable/af1d13366c8d51157b132094b9c56138 2023-11-27 05:06:51,320 DEBUG [HFileArchiver-12] backup.HFileArchiver(596): Deleted hdfs://localhost:41015/user/jenkins/test-data/88c861d8-82fa-b545-1769-68eb426e44d0/.tmp/data/TestNs/TestTable/7197a56a05fdba581e9677273ff1da17 2023-11-27 05:06:51,320 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(328): Archived TestNs:TestTable regions 2023-11-27 05:06:51,323 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=53, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=TestNs:TestTable 2023-11-27 05:06:51,324 WARN [PEWorker-1] procedure.DeleteTableProcedure(384): Deleting some vestigial 2 rows of TestNs:TestTable from hbase:meta 2023-11-27 05:06:51,326 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(421): Removing 'TestNs:TestTable' descriptor. 2023-11-27 05:06:51,327 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=53, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=TestNs:TestTable 2023-11-27 05:06:51,327 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(411): Removing 'TestNs:TestTable' from region states. 
2023-11-27 05:06:51,327 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"TestNs:TestTable,,1701061183127.af1d13366c8d51157b132094b9c56138.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1701061611327"}]},"ts":"9223372036854775807"} 2023-11-27 05:06:51,327 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"TestNs:TestTable,1,1701061183127.7197a56a05fdba581e9677273ff1da17.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1701061611327"}]},"ts":"9223372036854775807"} 2023-11-27 05:06:51,329 INFO [PEWorker-1] hbase.MetaTableAccessor(1788): Deleted 2 regions from META 2023-11-27 05:06:51,329 DEBUG [PEWorker-1] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => af1d13366c8d51157b132094b9c56138, NAME => 'TestNs:TestTable,,1701061183127.af1d13366c8d51157b132094b9c56138.', STARTKEY => '', ENDKEY => '1'}, {ENCODED => 7197a56a05fdba581e9677273ff1da17, NAME => 'TestNs:TestTable,1,1701061183127.7197a56a05fdba581e9677273ff1da17.', STARTKEY => '1', ENDKEY => ''}] 2023-11-27 05:06:51,329 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(415): Marking 'TestNs:TestTable' as deleted. 2023-11-27 05:06:51,329 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"TestNs:TestTable","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1701061611329"}]},"ts":"9223372036854775807"} 2023-11-27 05:06:51,330 INFO [PEWorker-1] hbase.MetaTableAccessor(1658): Deleted table TestNs:TestTable state from META 2023-11-27 05:06:51,334 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(130): Finished pid=53, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=TestNs:TestTable 2023-11-27 05:06:51,335 INFO [PEWorker-1] procedure2.ProcedureExecutor(1409): Finished pid=53, state=SUCCESS; DeleteTableProcedure table=TestNs:TestTable in 29 msec 2023-11-27 05:06:51,560 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33323] master.MasterRpcServices(1230): Checking to see if procedure is done pid=53 2023-11-27 05:06:51,560 INFO [Listener at localhost/34689] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: TestNs:TestTable, procId: 53 completed 2023-11-27 05:06:51,564 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33323] master.HMaster$17(3086): Client=jenkins//172.31.14.131 delete TestNs 2023-11-27 05:06:51,570 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33323] procedure2.ProcedureExecutor(1028): Stored pid=54, state=RUNNABLE:DELETE_NAMESPACE_PREPARE; DeleteNamespaceProcedure, namespace=TestNs 2023-11-27 05:06:51,571 INFO [PEWorker-4] procedure.DeleteNamespaceProcedure(73): pid=54, state=RUNNABLE:DELETE_NAMESPACE_PREPARE, locked=true; DeleteNamespaceProcedure, namespace=TestNs 2023-11-27 05:06:51,572 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=41841] ipc.CallRunner(144): callId: 167 service: ClientService methodName: Get size: 117 connection: 172.31.14.131:50776 deadline: 1701061671572, exception=org.apache.hadoop.hbase.NotServingRegionException: hbase:namespace,,1701061178129.708094d2c6013f8353947ca009f33ef1. 
is not online on jenkins-hbase4.apache.org,41841,1701061176322 2023-11-27 05:06:51,833 INFO [PEWorker-4] procedure.DeleteNamespaceProcedure(73): pid=54, state=RUNNABLE:DELETE_NAMESPACE_DELETE_FROM_NS_TABLE, locked=true; DeleteNamespaceProcedure, namespace=TestNs 2023-11-27 05:06:51,835 INFO [PEWorker-4] procedure.DeleteNamespaceProcedure(73): pid=54, state=RUNNABLE:DELETE_NAMESPACE_REMOVE_FROM_ZK, locked=true; DeleteNamespaceProcedure, namespace=TestNs 2023-11-27 05:06:51,836 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33323] master.MasterRpcServices(1230): Checking to see if procedure is done pid=54 2023-11-27 05:06:51,837 DEBUG [Listener at localhost/34689-EventThread] zookeeper.ZKWatcher(600): master:33323-0x1002d35880e0000, quorum=127.0.0.1:50029, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/namespace/TestNs 2023-11-27 05:06:51,837 DEBUG [Listener at localhost/34689-EventThread] zookeeper.ZKWatcher(600): master:33323-0x1002d35880e0000, quorum=127.0.0.1:50029, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-11-27 05:06:51,838 INFO [PEWorker-4] procedure.DeleteNamespaceProcedure(73): pid=54, state=RUNNABLE:DELETE_NAMESPACE_DELETE_DIRECTORIES, locked=true; DeleteNamespaceProcedure, namespace=TestNs 2023-11-27 05:06:51,840 INFO [PEWorker-4] procedure.DeleteNamespaceProcedure(73): pid=54, state=RUNNABLE:DELETE_NAMESPACE_REMOVE_NAMESPACE_QUOTA, locked=true; DeleteNamespaceProcedure, namespace=TestNs 2023-11-27 05:06:51,841 INFO [PEWorker-4] procedure2.ProcedureExecutor(1409): Finished pid=54, state=SUCCESS; DeleteNamespaceProcedure, namespace=TestNs in 275 msec 2023-11-27 05:06:52,087 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33323] master.MasterRpcServices(1230): Checking to see if procedure is done pid=54 2023-11-27 05:06:52,087 INFO [Listener at localhost/34689] hbase.HBaseTestingUtility(1286): Shutting down minicluster 2023-11-27 05:06:52,087 INFO [Listener at localhost/34689] client.ConnectionImplementation(1973): Closing master protocol: MasterService 2023-11-27 05:06:52,087 DEBUG [Listener at localhost/34689] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x028b264a to 127.0.0.1:50029 2023-11-27 05:06:52,087 DEBUG [Listener at localhost/34689] ipc.AbstractRpcClient(489): Stopping rpc client 2023-11-27 05:06:52,088 DEBUG [Listener at localhost/34689] util.JVMClusterUtil(237): Shutting down HBase Cluster 2023-11-27 05:06:52,088 DEBUG [Listener at localhost/34689] util.JVMClusterUtil(257): Found active master hash=1146852219, stopped=false 2023-11-27 05:06:52,088 DEBUG [Listener at localhost/34689] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.quotas.MasterQuotasObserver 2023-11-27 05:06:52,088 INFO [Listener at localhost/34689] master.ServerManager(888): Cluster shutdown requested of master=jenkins-hbase4.apache.org,33323,1701061175121 2023-11-27 05:06:52,090 DEBUG [Listener at localhost/34689-EventThread] zookeeper.ZKWatcher(600): regionserver:41853-0x1002d35880e0001, quorum=127.0.0.1:50029, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-11-27 05:06:52,090 DEBUG [Listener at localhost/34689-EventThread] zookeeper.ZKWatcher(600): master:33323-0x1002d35880e0000, quorum=127.0.0.1:50029, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-11-27 05:06:52,090 INFO [Listener at 
localhost/34689] procedure2.ProcedureExecutor(628): Stopping 2023-11-27 05:06:52,090 DEBUG [Listener at localhost/34689-EventThread] zookeeper.ZKWatcher(600): master:33323-0x1002d35880e0000, quorum=127.0.0.1:50029, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-11-27 05:06:52,090 DEBUG [Listener at localhost/34689-EventThread] zookeeper.ZKWatcher(600): regionserver:41841-0x1002d35880e0002, quorum=127.0.0.1:50029, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-11-27 05:06:52,091 DEBUG [Listener at localhost/34689] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x3937d8f7 to 127.0.0.1:50029 2023-11-27 05:06:52,091 DEBUG [Listener at localhost/34689] ipc.AbstractRpcClient(489): Stopping rpc client 2023-11-27 05:06:52,091 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(165): regionserver:41853-0x1002d35880e0001, quorum=127.0.0.1:50029, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-11-27 05:06:52,092 INFO [Listener at localhost/34689] regionserver.HRegionServer(2300): ***** STOPPING region server 'jenkins-hbase4.apache.org,41853,1701061176279' ***** 2023-11-27 05:06:52,092 INFO [Listener at localhost/34689] regionserver.HRegionServer(2314): STOPPED: Shutdown requested 2023-11-27 05:06:52,091 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(165): regionserver:41841-0x1002d35880e0002, quorum=127.0.0.1:50029, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-11-27 05:06:52,091 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(165): master:33323-0x1002d35880e0000, quorum=127.0.0.1:50029, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-11-27 05:06:52,092 INFO [Listener at localhost/34689] regionserver.HRegionServer(2300): ***** STOPPING region server 'jenkins-hbase4.apache.org,41841,1701061176322' ***** 2023-11-27 05:06:52,092 INFO [Listener at localhost/34689] regionserver.HRegionServer(2314): STOPPED: Shutdown requested 2023-11-27 05:06:52,092 INFO [RS:0;jenkins-hbase4:41853] regionserver.HeapMemoryManager(220): Stopping 2023-11-27 05:06:52,092 INFO [RS:1;jenkins-hbase4:41841] regionserver.HeapMemoryManager(220): Stopping 2023-11-27 05:06:52,093 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-11-27 05:06:52,093 INFO [RS:1;jenkins-hbase4:41841] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-11-27 05:06:52,093 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-11-27 05:06:52,093 INFO [RS:1;jenkins-hbase4:41841] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-11-27 05:06:52,093 INFO [RS:0;jenkins-hbase4:41853] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-11-27 05:06:52,093 INFO [RS:1;jenkins-hbase4:41841] regionserver.HRegionServer(3308): Received CLOSE for be5ef4f3dfb2c43b447798061e19f02f 2023-11-27 05:06:52,093 INFO [RS:0;jenkins-hbase4:41853] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 
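The sequence above (DeleteTableProcedure pid=53, DeleteNamespaceProcedure pid=54, then "Shutting down minicluster") is the usual end-of-test teardown for a minicluster-based quota test. A minimal sketch of such a teardown is shown below; the field names (TEST_UTIL, NAMESPACE) are illustrative and not taken from the actual test source.

```java
// A minimal sketch of the teardown phase implied by the log above.
import org.apache.hadoop.hbase.HBaseTestingUtility;
import org.apache.hadoop.hbase.client.Admin;
import org.junit.AfterClass;

public class TeardownSketch {
  private static final HBaseTestingUtility TEST_UTIL = new HBaseTestingUtility();
  private static final String NAMESPACE = "TestNs";

  @AfterClass
  public static void tearDownAfterClass() throws Exception {
    Admin admin = TEST_UTIL.getAdmin();
    // A namespace can only be removed once it contains no tables, which is why
    // the DeleteNamespaceProcedure (pid=54) runs after the table delete (pid=53).
    admin.deleteNamespace(NAMESPACE);
    // Stops the region servers, the master, DFS and the mini ZooKeeper cluster,
    // producing the shutdown sequence that follows in the log.
    TEST_UTIL.shutdownMiniCluster();
  }
}
```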
2023-11-27 05:06:52,093 INFO [RS:0;jenkins-hbase4:41853] regionserver.HRegionServer(3308): Received CLOSE for 708094d2c6013f8353947ca009f33ef1 2023-11-27 05:06:52,094 INFO [RS:1;jenkins-hbase4:41841] regionserver.HRegionServer(1147): stopping server jenkins-hbase4.apache.org,41841,1701061176322 2023-11-27 05:06:52,094 INFO [RS:0;jenkins-hbase4:41853] regionserver.HRegionServer(1147): stopping server jenkins-hbase4.apache.org,41853,1701061176279 2023-11-27 05:06:52,094 INFO [RS:1;jenkins-hbase4:41841] client.ConnectionImplementation(1973): Closing master protocol: MasterService 2023-11-27 05:06:52,094 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing be5ef4f3dfb2c43b447798061e19f02f, disabling compactions & flushes 2023-11-27 05:06:52,094 INFO [RS:0;jenkins-hbase4:41853] client.ConnectionImplementation(1973): Closing master protocol: MasterService 2023-11-27 05:06:52,094 DEBUG [RS:1;jenkins-hbase4:41841] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x7b053c7c to 127.0.0.1:50029 2023-11-27 05:06:52,094 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:quota,,1701061178843.be5ef4f3dfb2c43b447798061e19f02f. 2023-11-27 05:06:52,094 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 708094d2c6013f8353947ca009f33ef1, disabling compactions & flushes 2023-11-27 05:06:52,095 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:quota,,1701061178843.be5ef4f3dfb2c43b447798061e19f02f. 2023-11-27 05:06:52,095 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1701061178129.708094d2c6013f8353947ca009f33ef1. 2023-11-27 05:06:52,095 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1701061178129.708094d2c6013f8353947ca009f33ef1. 2023-11-27 05:06:52,095 DEBUG [RS:1;jenkins-hbase4:41841] ipc.AbstractRpcClient(489): Stopping rpc client 2023-11-27 05:06:52,095 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1701061178129.708094d2c6013f8353947ca009f33ef1. after waiting 0 ms 2023-11-27 05:06:52,095 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:quota,,1701061178843.be5ef4f3dfb2c43b447798061e19f02f. after waiting 0 ms 2023-11-27 05:06:52,095 DEBUG [RS:0;jenkins-hbase4:41853] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x3c445bdd to 127.0.0.1:50029 2023-11-27 05:06:52,095 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:quota,,1701061178843.be5ef4f3dfb2c43b447798061e19f02f. 2023-11-27 05:06:52,095 INFO [RS:1;jenkins-hbase4:41841] regionserver.HRegionServer(1477): Waiting on 1 regions to close 2023-11-27 05:06:52,095 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1701061178129.708094d2c6013f8353947ca009f33ef1. 
2023-11-27 05:06:52,095 DEBUG [RS:1;jenkins-hbase4:41841] regionserver.HRegionServer(1481): Online Regions={be5ef4f3dfb2c43b447798061e19f02f=hbase:quota,,1701061178843.be5ef4f3dfb2c43b447798061e19f02f.} 2023-11-27 05:06:52,095 DEBUG [RS:0;jenkins-hbase4:41853] ipc.AbstractRpcClient(489): Stopping rpc client 2023-11-27 05:06:52,095 INFO [RS:0;jenkins-hbase4:41853] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-11-27 05:06:52,095 INFO [RS:0;jenkins-hbase4:41853] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-11-27 05:06:52,095 INFO [RS:0;jenkins-hbase4:41853] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-11-27 05:06:52,095 INFO [RS:0;jenkins-hbase4:41853] regionserver.HRegionServer(3308): Received CLOSE for 1588230740 2023-11-27 05:06:52,095 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 708094d2c6013f8353947ca009f33ef1 1/1 column families, dataSize=30 B heapSize=360 B 2023-11-27 05:06:52,097 DEBUG [RS:1;jenkins-hbase4:41841] regionserver.HRegionServer(1507): Waiting on be5ef4f3dfb2c43b447798061e19f02f 2023-11-27 05:06:52,097 INFO [RS:0;jenkins-hbase4:41853] regionserver.HRegionServer(1477): Waiting on 2 regions to close 2023-11-27 05:06:52,098 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-11-27 05:06:52,098 DEBUG [RS:0;jenkins-hbase4:41853] regionserver.HRegionServer(1481): Online Regions={708094d2c6013f8353947ca009f33ef1=hbase:namespace,,1701061178129.708094d2c6013f8353947ca009f33ef1., 1588230740=hbase:meta,,1.1588230740} 2023-11-27 05:06:52,100 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-11-27 05:06:52,101 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-11-27 05:06:52,101 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-11-27 05:06:52,101 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-11-27 05:06:52,101 DEBUG [RS:0;jenkins-hbase4:41853] regionserver.HRegionServer(1507): Waiting on 1588230740, 708094d2c6013f8353947ca009f33ef1 2023-11-27 05:06:52,101 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-11-27 05:06:52,102 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-11-27 05:06:52,105 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=5.37 KB heapSize=10.30 KB 2023-11-27 05:06:52,109 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41015/user/jenkins/test-data/88c861d8-82fa-b545-1769-68eb426e44d0/data/hbase/quota/be5ef4f3dfb2c43b447798061e19f02f/recovered.edits/32.seqid, newMaxSeqId=32, maxSeqId=1 2023-11-27 05:06:52,111 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:quota,,1701061178843.be5ef4f3dfb2c43b447798061e19f02f. 
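The RS_CLOSE_REGION and RS_CLOSE_META handlers above flush any remaining memstore data (for example "Flushing 708094d2c6013f8353947ca009f33ef1 1/1 column families, dataSize=30 B") before each region is closed. The same flush can be requested explicitly through the public Admin API; the snippet below is illustrative only and is the manual counterpart of the close-time flushes logged here.

```java
// Illustrative only: an explicit flush of hbase:namespace via the Admin API.
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class FlushSketch {
  public static void main(String[] args) throws Exception {
    try (Connection connection =
             ConnectionFactory.createConnection(HBaseConfiguration.create());
         Admin admin = connection.getAdmin()) {
      // Writes the table's memstores out as new HFiles; see the
      // DefaultStoreFlusher lines in the log for the server-side effect.
      admin.flush(TableName.valueOf("hbase:namespace"));
    }
  }
}
```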
2023-11-27 05:06:52,111 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for be5ef4f3dfb2c43b447798061e19f02f: 2023-11-27 05:06:52,111 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:quota,,1701061178843.be5ef4f3dfb2c43b447798061e19f02f. 2023-11-27 05:06:52,127 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=30 B at sequenceid=14 (bloomFilter=true), to=hdfs://localhost:41015/user/jenkins/test-data/88c861d8-82fa-b545-1769-68eb426e44d0/data/hbase/namespace/708094d2c6013f8353947ca009f33ef1/.tmp/info/c36ac28591244d329f554cd0072f367b 2023-11-27 05:06:52,127 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=3.92 KB at sequenceid=83 (bloomFilter=false), to=hdfs://localhost:41015/user/jenkins/test-data/88c861d8-82fa-b545-1769-68eb426e44d0/data/hbase/meta/1588230740/.tmp/info/3a246948378645b4a26165647e968d3a 2023-11-27 05:06:52,132 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for c36ac28591244d329f554cd0072f367b 2023-11-27 05:06:52,133 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 3a246948378645b4a26165647e968d3a 2023-11-27 05:06:52,133 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:41015/user/jenkins/test-data/88c861d8-82fa-b545-1769-68eb426e44d0/data/hbase/namespace/708094d2c6013f8353947ca009f33ef1/.tmp/info/c36ac28591244d329f554cd0072f367b as hdfs://localhost:41015/user/jenkins/test-data/88c861d8-82fa-b545-1769-68eb426e44d0/data/hbase/namespace/708094d2c6013f8353947ca009f33ef1/info/c36ac28591244d329f554cd0072f367b 2023-11-27 05:06:52,139 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for c36ac28591244d329f554cd0072f367b 2023-11-27 05:06:52,140 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:41015/user/jenkins/test-data/88c861d8-82fa-b545-1769-68eb426e44d0/data/hbase/namespace/708094d2c6013f8353947ca009f33ef1/info/c36ac28591244d329f554cd0072f367b, entries=1, sequenceid=14, filesize=4.9 K 2023-11-27 05:06:52,143 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~30 B/30, heapSize ~344 B/344, currentSize=0 B/0 for 708094d2c6013f8353947ca009f33ef1 in 48ms, sequenceid=14, compaction requested=false 2023-11-27 05:06:52,143 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:namespace' 2023-11-27 05:06:52,147 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=478 B at sequenceid=83 (bloomFilter=false), to=hdfs://localhost:41015/user/jenkins/test-data/88c861d8-82fa-b545-1769-68eb426e44d0/data/hbase/meta/1588230740/.tmp/rep_barrier/a8d9842dd412467bb95681b2cf4b42c2 2023-11-27 05:06:52,150 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote 
file=hdfs://localhost:41015/user/jenkins/test-data/88c861d8-82fa-b545-1769-68eb426e44d0/data/hbase/namespace/708094d2c6013f8353947ca009f33ef1/recovered.edits/17.seqid, newMaxSeqId=17, maxSeqId=10 2023-11-27 05:06:52,151 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:namespace,,1701061178129.708094d2c6013f8353947ca009f33ef1. 2023-11-27 05:06:52,151 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 708094d2c6013f8353947ca009f33ef1: 2023-11-27 05:06:52,151 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:namespace,,1701061178129.708094d2c6013f8353947ca009f33ef1. 2023-11-27 05:06:52,154 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for a8d9842dd412467bb95681b2cf4b42c2 2023-11-27 05:06:52,165 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=1007 B at sequenceid=83 (bloomFilter=false), to=hdfs://localhost:41015/user/jenkins/test-data/88c861d8-82fa-b545-1769-68eb426e44d0/data/hbase/meta/1588230740/.tmp/table/7c5a8b59631b4ec49c71939e14807f91 2023-11-27 05:06:52,170 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 7c5a8b59631b4ec49c71939e14807f91 2023-11-27 05:06:52,170 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:41015/user/jenkins/test-data/88c861d8-82fa-b545-1769-68eb426e44d0/data/hbase/meta/1588230740/.tmp/info/3a246948378645b4a26165647e968d3a as hdfs://localhost:41015/user/jenkins/test-data/88c861d8-82fa-b545-1769-68eb426e44d0/data/hbase/meta/1588230740/info/3a246948378645b4a26165647e968d3a 2023-11-27 05:06:52,176 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 3a246948378645b4a26165647e968d3a 2023-11-27 05:06:52,176 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:41015/user/jenkins/test-data/88c861d8-82fa-b545-1769-68eb426e44d0/data/hbase/meta/1588230740/info/3a246948378645b4a26165647e968d3a, entries=10, sequenceid=83, filesize=5.7 K 2023-11-27 05:06:52,177 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:41015/user/jenkins/test-data/88c861d8-82fa-b545-1769-68eb426e44d0/data/hbase/meta/1588230740/.tmp/rep_barrier/a8d9842dd412467bb95681b2cf4b42c2 as hdfs://localhost:41015/user/jenkins/test-data/88c861d8-82fa-b545-1769-68eb426e44d0/data/hbase/meta/1588230740/rep_barrier/a8d9842dd412467bb95681b2cf4b42c2 2023-11-27 05:06:52,182 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for a8d9842dd412467bb95681b2cf4b42c2 2023-11-27 05:06:52,182 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:41015/user/jenkins/test-data/88c861d8-82fa-b545-1769-68eb426e44d0/data/hbase/meta/1588230740/rep_barrier/a8d9842dd412467bb95681b2cf4b42c2, entries=5, sequenceid=83, filesize=5.3 K 2023-11-27 05:06:52,183 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing 
hdfs://localhost:41015/user/jenkins/test-data/88c861d8-82fa-b545-1769-68eb426e44d0/data/hbase/meta/1588230740/.tmp/table/7c5a8b59631b4ec49c71939e14807f91 as hdfs://localhost:41015/user/jenkins/test-data/88c861d8-82fa-b545-1769-68eb426e44d0/data/hbase/meta/1588230740/table/7c5a8b59631b4ec49c71939e14807f91 2023-11-27 05:06:52,189 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 7c5a8b59631b4ec49c71939e14807f91 2023-11-27 05:06:52,189 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:41015/user/jenkins/test-data/88c861d8-82fa-b545-1769-68eb426e44d0/data/hbase/meta/1588230740/table/7c5a8b59631b4ec49c71939e14807f91, entries=9, sequenceid=83, filesize=5.4 K 2023-11-27 05:06:52,190 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~5.37 KB/5502, heapSize ~10.25 KB/10496, currentSize=0 B/0 for 1588230740 in 87ms, sequenceid=83, compaction requested=false 2023-11-27 05:06:52,201 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41015/user/jenkins/test-data/88c861d8-82fa-b545-1769-68eb426e44d0/data/hbase/meta/1588230740/recovered.edits/86.seqid, newMaxSeqId=86, maxSeqId=39 2023-11-27 05:06:52,201 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-11-27 05:06:52,202 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-11-27 05:06:52,202 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-11-27 05:06:52,202 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:meta,,1.1588230740 2023-11-27 05:06:52,297 INFO [RS:1;jenkins-hbase4:41841] regionserver.HRegionServer(1173): stopping server jenkins-hbase4.apache.org,41841,1701061176322; all regions closed. 2023-11-27 05:06:52,297 DEBUG [RS:1;jenkins-hbase4:41841] quotas.QuotaCache(100): Stopping QuotaRefresherChore chore. 2023-11-27 05:06:52,303 INFO [RS:0;jenkins-hbase4:41853] regionserver.HRegionServer(1173): stopping server jenkins-hbase4.apache.org,41853,1701061176279; all regions closed. 2023-11-27 05:06:52,303 DEBUG [RS:0;jenkins-hbase4:41853] quotas.QuotaCache(100): Stopping QuotaRefresherChore chore. 
2023-11-27 05:06:52,306 DEBUG [RS:1;jenkins-hbase4:41841] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/88c861d8-82fa-b545-1769-68eb426e44d0/oldWALs 2023-11-27 05:06:52,306 INFO [RS:1;jenkins-hbase4:41841] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C41841%2C1701061176322.meta:.meta(num 1701061177889) 2023-11-27 05:06:52,311 DEBUG [RS:0;jenkins-hbase4:41853] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/88c861d8-82fa-b545-1769-68eb426e44d0/oldWALs 2023-11-27 05:06:52,311 INFO [RS:0;jenkins-hbase4:41853] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C41853%2C1701061176279.meta:.meta(num 1701061483075) 2023-11-27 05:06:52,318 DEBUG [RS:1;jenkins-hbase4:41841] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/88c861d8-82fa-b545-1769-68eb426e44d0/oldWALs 2023-11-27 05:06:52,318 INFO [RS:1;jenkins-hbase4:41841] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C41841%2C1701061176322:(num 1701061177683) 2023-11-27 05:06:52,318 DEBUG [RS:1;jenkins-hbase4:41841] ipc.AbstractRpcClient(489): Stopping rpc client 2023-11-27 05:06:52,318 INFO [RS:1;jenkins-hbase4:41841] regionserver.LeaseManager(133): Closed leases 2023-11-27 05:06:52,319 INFO [RS:1;jenkins-hbase4:41841] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-11-27 05:06:52,319 INFO [RS:1;jenkins-hbase4:41841] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-11-27 05:06:52,319 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-11-27 05:06:52,319 INFO [RS:1;jenkins-hbase4:41841] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-11-27 05:06:52,319 INFO [RS:1;jenkins-hbase4:41841] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-11-27 05:06:52,320 INFO [RS:1;jenkins-hbase4:41841] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:41841 2023-11-27 05:06:52,321 DEBUG [RS:0;jenkins-hbase4:41853] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/88c861d8-82fa-b545-1769-68eb426e44d0/oldWALs 2023-11-27 05:06:52,322 INFO [RS:0;jenkins-hbase4:41853] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C41853%2C1701061176279:(num 1701061177683) 2023-11-27 05:06:52,322 DEBUG [RS:0;jenkins-hbase4:41853] ipc.AbstractRpcClient(489): Stopping rpc client 2023-11-27 05:06:52,322 INFO [RS:0;jenkins-hbase4:41853] regionserver.LeaseManager(133): Closed leases 2023-11-27 05:06:52,322 INFO [RS:0;jenkins-hbase4:41853] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-11-27 05:06:52,322 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 
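The AbstractFSWAL lines above show each region server closing its AsyncFSWAL and moving the final WAL file(s) into the oldWALs directory as part of shutdown. Outside of shutdown, a WAL can also be rotated on demand through the Admin API; the sketch below is illustrative, and the server name in it is made up rather than copied from a live cluster.

```java
// Illustrative only: rolling the WAL of one region server on demand. A real
// caller would pick a live server from the cluster status instead of hard-coding one.
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.ServerName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class WalRollSketch {
  public static void main(String[] args) throws Exception {
    try (Connection connection =
             ConnectionFactory.createConnection(HBaseConfiguration.create());
         Admin admin = connection.getAdmin()) {
      // Closes the current WAL file and starts a new one; the old file later
      // ends up under oldWALs, as in the AbstractFSWAL lines above.
      admin.rollWALWriter(
          ServerName.valueOf("jenkins-hbase4.apache.org,41841,1701061176322"));
    }
  }
}
```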
2023-11-27 05:06:52,323 INFO [RS:0;jenkins-hbase4:41853] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:41853 2023-11-27 05:06:52,331 DEBUG [Listener at localhost/34689-EventThread] zookeeper.ZKWatcher(600): regionserver:41853-0x1002d35880e0001, quorum=127.0.0.1:50029, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,41853,1701061176279 2023-11-27 05:06:52,331 DEBUG [Listener at localhost/34689-EventThread] zookeeper.ZKWatcher(600): regionserver:41841-0x1002d35880e0002, quorum=127.0.0.1:50029, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,41853,1701061176279 2023-11-27 05:06:52,331 DEBUG [Listener at localhost/34689-EventThread] zookeeper.ZKWatcher(600): regionserver:41853-0x1002d35880e0001, quorum=127.0.0.1:50029, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-11-27 05:06:52,331 DEBUG [Listener at localhost/34689-EventThread] zookeeper.ZKWatcher(600): regionserver:41841-0x1002d35880e0002, quorum=127.0.0.1:50029, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-11-27 05:06:52,331 DEBUG [Listener at localhost/34689-EventThread] zookeeper.ZKWatcher(600): master:33323-0x1002d35880e0000, quorum=127.0.0.1:50029, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-11-27 05:06:52,331 DEBUG [Listener at localhost/34689-EventThread] zookeeper.ZKWatcher(600): regionserver:41841-0x1002d35880e0002, quorum=127.0.0.1:50029, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,41841,1701061176322 2023-11-27 05:06:52,331 DEBUG [Listener at localhost/34689-EventThread] zookeeper.ZKWatcher(600): regionserver:41853-0x1002d35880e0001, quorum=127.0.0.1:50029, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,41841,1701061176322 2023-11-27 05:06:52,333 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,41841,1701061176322] 2023-11-27 05:06:52,333 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,41841,1701061176322; numProcessing=1 2023-11-27 05:06:52,334 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,41841,1701061176322 already deleted, retry=false 2023-11-27 05:06:52,334 INFO [RegionServerTracker-0] master.ServerManager(554): Cluster shutdown set; jenkins-hbase4.apache.org,41841,1701061176322 expired; onlineServers=1 2023-11-27 05:06:52,335 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,41853,1701061176279] 2023-11-27 05:06:52,335 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,41853,1701061176279; numProcessing=2 2023-11-27 05:06:52,336 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,41853,1701061176279 already deleted, retry=false 2023-11-27 05:06:52,336 INFO [RegionServerTracker-0] master.ServerManager(554): Cluster shutdown set; jenkins-hbase4.apache.org,41853,1701061176279 expired; onlineServers=0 2023-11-27 
05:06:52,336 INFO [RegionServerTracker-0] regionserver.HRegionServer(2300): ***** STOPPING region server 'jenkins-hbase4.apache.org,33323,1701061175121' ***** 2023-11-27 05:06:52,336 INFO [RegionServerTracker-0] regionserver.HRegionServer(2314): STOPPED: Cluster shutdown set; onlineServer=0 2023-11-27 05:06:52,337 DEBUG [M:0;jenkins-hbase4:33323] ipc.AbstractRpcClient(189): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@5669940, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=true, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-11-27 05:06:52,337 INFO [M:0;jenkins-hbase4:33323] regionserver.HRegionServer(1147): stopping server jenkins-hbase4.apache.org,33323,1701061175121 2023-11-27 05:06:52,337 INFO [M:0;jenkins-hbase4:33323] regionserver.HRegionServer(1173): stopping server jenkins-hbase4.apache.org,33323,1701061175121; all regions closed. 2023-11-27 05:06:52,337 DEBUG [M:0;jenkins-hbase4:33323] ipc.AbstractRpcClient(489): Stopping rpc client 2023-11-27 05:06:52,337 DEBUG [M:0;jenkins-hbase4:33323] cleaner.LogCleaner(198): Cancelling LogCleaner 2023-11-27 05:06:52,337 WARN [OldWALsCleaner-0] cleaner.LogCleaner(186): Interrupted while cleaning old WALs, will try to clean it next round. Exiting. 2023-11-27 05:06:52,337 DEBUG [M:0;jenkins-hbase4:33323] cleaner.HFileCleaner(317): Stopping file delete threads 2023-11-27 05:06:52,337 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1701061177383] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1701061177383,5,FailOnTimeoutGroup] 2023-11-27 05:06:52,337 INFO [M:0;jenkins-hbase4:33323] master.MasterMobCompactionThread(168): Waiting for Mob Compaction Thread to finish... 2023-11-27 05:06:52,337 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1701061177383] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1701061177383,5,FailOnTimeoutGroup] 2023-11-27 05:06:52,338 INFO [M:0;jenkins-hbase4:33323] master.MasterMobCompactionThread(168): Waiting for Region Server Mob Compaction Thread to finish... 
2023-11-27 05:06:52,338 INFO [M:0;jenkins-hbase4:33323] hbase.ChoreService(369): Chore service for: master/jenkins-hbase4:0 had [ScheduledChore name=QuotaObserverChore, period=60000, unit=MILLISECONDS] on shutdown 2023-11-27 05:06:52,338 DEBUG [M:0;jenkins-hbase4:33323] master.HMaster(1512): Stopping service threads 2023-11-27 05:06:52,338 INFO [M:0;jenkins-hbase4:33323] procedure2.RemoteProcedureDispatcher(119): Stopping procedure remote dispatcher 2023-11-27 05:06:52,338 ERROR [M:0;jenkins-hbase4:33323] procedure2.ProcedureExecutor(652): ThreadGroup java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] contains running threads; null: See STDOUT java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] Thread[IPC Client (2009289703) connection to localhost/127.0.0.1:41015 from jenkins,5,PEWorkerGroup] Thread[HFileArchiver-8,5,PEWorkerGroup] Thread[HFileArchiver-9,5,PEWorkerGroup] Thread[HFileArchiver-10,5,PEWorkerGroup] Thread[HFileArchiver-11,5,PEWorkerGroup] Thread[HFileArchiver-12,5,PEWorkerGroup] 2023-11-27 05:06:52,339 DEBUG [Listener at localhost/34689-EventThread] zookeeper.ZKWatcher(600): master:33323-0x1002d35880e0000, quorum=127.0.0.1:50029, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/master 2023-11-27 05:06:52,339 DEBUG [Listener at localhost/34689-EventThread] zookeeper.ZKWatcher(600): master:33323-0x1002d35880e0000, quorum=127.0.0.1:50029, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-11-27 05:06:52,339 INFO [M:0;jenkins-hbase4:33323] region.RegionProcedureStore(113): Stopping the Region Procedure Store, isAbort=false 2023-11-27 05:06:52,339 DEBUG [normalizer-worker-0] normalizer.RegionNormalizerWorker(174): interrupt detected. terminating. 2023-11-27 05:06:52,339 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(165): master:33323-0x1002d35880e0000, quorum=127.0.0.1:50029, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-11-27 05:06:52,340 DEBUG [M:0;jenkins-hbase4:33323] zookeeper.ZKUtil(399): master:33323-0x1002d35880e0000, quorum=127.0.0.1:50029, baseZNode=/hbase Unable to get data of znode /hbase/master because node does not exist (not an error) 2023-11-27 05:06:52,340 WARN [M:0;jenkins-hbase4:33323] master.ActiveMasterManager(326): Failed get of master address: java.io.IOException: Can't get master address from ZooKeeper; znode data == null 2023-11-27 05:06:52,340 INFO [M:0;jenkins-hbase4:33323] assignment.AssignmentManager(315): Stopping assignment manager 2023-11-27 05:06:52,340 INFO [M:0;jenkins-hbase4:33323] region.MasterRegion(167): Closing local region {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, isAbort=false 2023-11-27 05:06:52,340 DEBUG [M:0;jenkins-hbase4:33323] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-11-27 05:06:52,340 INFO [M:0;jenkins-hbase4:33323] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-11-27 05:06:52,340 DEBUG [M:0;jenkins-hbase4:33323] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-11-27 05:06:52,340 DEBUG [M:0;jenkins-hbase4:33323] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
after waiting 0 ms 2023-11-27 05:06:52,340 DEBUG [M:0;jenkins-hbase4:33323] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-11-27 05:06:52,341 INFO [M:0;jenkins-hbase4:33323] regionserver.HRegion(2745): Flushing 1595e783b53d99cd5eef43b6debb2682 1/1 column families, dataSize=182.05 KB heapSize=221.84 KB 2023-11-27 05:06:52,353 INFO [M:0;jenkins-hbase4:33323] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=182.05 KB at sequenceid=464 (bloomFilter=true), to=hdfs://localhost:41015/user/jenkins/test-data/88c861d8-82fa-b545-1769-68eb426e44d0/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/3555674b63c344148a831ec6ad3f7925 2023-11-27 05:06:52,358 INFO [M:0;jenkins-hbase4:33323] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 3555674b63c344148a831ec6ad3f7925 2023-11-27 05:06:52,359 DEBUG [M:0;jenkins-hbase4:33323] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:41015/user/jenkins/test-data/88c861d8-82fa-b545-1769-68eb426e44d0/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/3555674b63c344148a831ec6ad3f7925 as hdfs://localhost:41015/user/jenkins/test-data/88c861d8-82fa-b545-1769-68eb426e44d0/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/3555674b63c344148a831ec6ad3f7925 2023-11-27 05:06:52,364 INFO [M:0;jenkins-hbase4:33323] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 3555674b63c344148a831ec6ad3f7925 2023-11-27 05:06:52,364 INFO [M:0;jenkins-hbase4:33323] regionserver.HStore(1080): Added hdfs://localhost:41015/user/jenkins/test-data/88c861d8-82fa-b545-1769-68eb426e44d0/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/3555674b63c344148a831ec6ad3f7925, entries=54, sequenceid=464, filesize=9.5 K 2023-11-27 05:06:52,365 INFO [M:0;jenkins-hbase4:33323] regionserver.HRegion(2948): Finished flush of dataSize ~182.05 KB/186415, heapSize ~221.82 KB/227144, currentSize=0 B/0 for 1595e783b53d99cd5eef43b6debb2682 in 24ms, sequenceid=464, compaction requested=false 2023-11-27 05:06:52,366 INFO [M:0;jenkins-hbase4:33323] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-11-27 05:06:52,366 DEBUG [M:0;jenkins-hbase4:33323] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-11-27 05:06:52,369 INFO [master:store-WAL-Roller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-11-27 05:06:52,369 INFO [M:0;jenkins-hbase4:33323] flush.MasterFlushTableProcedureManager(83): stop: server shutting down. 2023-11-27 05:06:52,369 INFO [M:0;jenkins-hbase4:33323] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:33323 2023-11-27 05:06:52,371 DEBUG [M:0;jenkins-hbase4:33323] zookeeper.RecoverableZooKeeper(172): Node /hbase/rs/jenkins-hbase4.apache.org,33323,1701061175121 already deleted, retry=false 2023-11-27 05:06:52,433 DEBUG [Listener at localhost/34689-EventThread] zookeeper.ZKWatcher(600): regionserver:41853-0x1002d35880e0001, quorum=127.0.0.1:50029, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-11-27 05:06:52,433 INFO [RS:0;jenkins-hbase4:41853] regionserver.HRegionServer(1230): Exiting; stopping=jenkins-hbase4.apache.org,41853,1701061176279; zookeeper connection closed. 
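Throughout the shutdown, the ZKWatcher lines report NodeDeleted events for znodes such as /hbase/running and /hbase/rs/... on the quorum at 127.0.0.1:50029; the deletion of /hbase/running is what tells the master and region servers that a cluster shutdown was requested. The sketch below shows, with the plain ZooKeeper client, how such a one-shot watch behaves; it is illustrative and assumes the session stays open long enough to see the event.

```java
// Illustrative only: watching the /hbase/running znode with the plain ZooKeeper client.
import org.apache.zookeeper.WatchedEvent;
import org.apache.zookeeper.ZooKeeper;

public class RunningZNodeWatch {
  public static void main(String[] args) throws Exception {
    ZooKeeper zk = new ZooKeeper("127.0.0.1:50029", 30000, (WatchedEvent e) -> {
      // For a cluster shutdown this prints a NodeDeleted event for /hbase/running,
      // matching the ZKWatcher lines in the log above.
      System.out.println("event: " + e.getType() + " on " + e.getPath());
    });
    // One-shot watch: registers interest in /hbase/running whether or not it exists yet.
    zk.exists("/hbase/running", true);
    Thread.sleep(60000);  // keep the session alive long enough to observe the event
    zk.close();
  }
}
```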
2023-11-27 05:06:52,433 DEBUG [Listener at localhost/34689-EventThread] zookeeper.ZKWatcher(600): regionserver:41853-0x1002d35880e0001, quorum=127.0.0.1:50029, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-11-27 05:06:52,433 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@1ad4ad36] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@1ad4ad36 2023-11-27 05:06:52,434 DEBUG [Listener at localhost/34689-EventThread] zookeeper.ZKWatcher(600): regionserver:41841-0x1002d35880e0002, quorum=127.0.0.1:50029, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-11-27 05:06:52,434 DEBUG [Listener at localhost/34689-EventThread] zookeeper.ZKWatcher(600): regionserver:41841-0x1002d35880e0002, quorum=127.0.0.1:50029, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-11-27 05:06:52,434 INFO [RS:1;jenkins-hbase4:41841] regionserver.HRegionServer(1230): Exiting; stopping=jenkins-hbase4.apache.org,41841,1701061176322; zookeeper connection closed. 2023-11-27 05:06:52,434 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@471ace3f] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@471ace3f 2023-11-27 05:06:52,434 INFO [Listener at localhost/34689] util.JVMClusterUtil(335): Shutdown of 1 master(s) and 2 regionserver(s) complete 2023-11-27 05:06:52,473 DEBUG [Listener at localhost/34689-EventThread] zookeeper.ZKWatcher(600): master:33323-0x1002d35880e0000, quorum=127.0.0.1:50029, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-11-27 05:06:52,473 INFO [M:0;jenkins-hbase4:33323] regionserver.HRegionServer(1230): Exiting; stopping=jenkins-hbase4.apache.org,33323,1701061175121; zookeeper connection closed. 
2023-11-27 05:06:52,473 DEBUG [Listener at localhost/34689-EventThread] zookeeper.ZKWatcher(600): master:33323-0x1002d35880e0000, quorum=127.0.0.1:50029, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-11-27 05:06:52,474 WARN [Listener at localhost/34689] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-11-27 05:06:52,477 INFO [Listener at localhost/34689] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-11-27 05:06:52,582 WARN [BP-1577092985-172.31.14.131-1701061172104 heartbeating to localhost/127.0.0.1:41015] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-11-27 05:06:52,582 WARN [BP-1577092985-172.31.14.131-1701061172104 heartbeating to localhost/127.0.0.1:41015] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1577092985-172.31.14.131-1701061172104 (Datanode Uuid bc6defe3-ddb7-41a8-9f9f-c1c82a0e91b4) service to localhost/127.0.0.1:41015 2023-11-27 05:06:52,584 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/f5a53189-4093-05d0-d726-fe3db62bb765/cluster_add3a103-f85c-f802-b55b-e84b80447762/dfs/data/data3/current/BP-1577092985-172.31.14.131-1701061172104] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-11-27 05:06:52,584 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/f5a53189-4093-05d0-d726-fe3db62bb765/cluster_add3a103-f85c-f802-b55b-e84b80447762/dfs/data/data4/current/BP-1577092985-172.31.14.131-1701061172104] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-11-27 05:06:52,584 WARN [Listener at localhost/34689] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-11-27 05:06:52,586 INFO [Listener at localhost/34689] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-11-27 05:06:52,690 WARN [BP-1577092985-172.31.14.131-1701061172104 heartbeating to localhost/127.0.0.1:41015] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-11-27 05:06:52,690 WARN [BP-1577092985-172.31.14.131-1701061172104 heartbeating to localhost/127.0.0.1:41015] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1577092985-172.31.14.131-1701061172104 (Datanode Uuid 2c11d29c-8eed-4c69-99ab-c1ad7d4faff2) service to localhost/127.0.0.1:41015 2023-11-27 05:06:52,691 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/f5a53189-4093-05d0-d726-fe3db62bb765/cluster_add3a103-f85c-f802-b55b-e84b80447762/dfs/data/data1/current/BP-1577092985-172.31.14.131-1701061172104] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-11-27 05:06:52,691 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/f5a53189-4093-05d0-d726-fe3db62bb765/cluster_add3a103-f85c-f802-b55b-e84b80447762/dfs/data/data2/current/BP-1577092985-172.31.14.131-1701061172104] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-11-27 05:06:52,725 INFO [Listener at localhost/34689] log.Slf4jLog(67): Stopped 
HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-11-27 05:06:52,841 INFO [Listener at localhost/34689] zookeeper.MiniZooKeeperCluster(344): Shutdown MiniZK cluster with all ZK servers 2023-11-27 05:06:52,888 INFO [Listener at localhost/34689] hbase.HBaseTestingUtility(1293): Minicluster is down