2023-07-23 10:14:14,774 DEBUG [main] hbase.HBaseTestingUtility(342): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-examples/target/test-data/f6d24384-7f1b-af51-948c-7734df9d716e
2023-07-23 10:14:14,793 INFO [main] hbase.HBaseClassTestRule(94): Test class org.apache.hadoop.hbase.coprocessor.example.TestWriteHeavyIncrementObserver timeout: 13 mins
2023-07-23 10:14:14,807 INFO [Time-limited test] hbase.HBaseTestingUtility(1068): Starting up minicluster with option: StartMiniClusterOption{numMasters=1, masterClass=null, numRegionServers=3, rsPorts=, rsClass=null, numDataNodes=3, dataNodeHosts=null, numZkServers=1, createRootDir=false, createWALDir=false}
2023-07-23 10:14:14,808 INFO [Time-limited test] hbase.HBaseZKTestingUtility(82): Created new mini-cluster data directory: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-examples/target/test-data/f6d24384-7f1b-af51-948c-7734df9d716e/cluster_3605e493-c4ff-9248-8bc1-c2484cdc5541, deleteOnExit=true
2023-07-23 10:14:14,808 INFO [Time-limited test] hbase.HBaseTestingUtility(1082): STARTING DFS
2023-07-23 10:14:14,809 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting test.cache.data to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-examples/target/test-data/f6d24384-7f1b-af51-948c-7734df9d716e/test.cache.data in system properties and HBase conf
2023-07-23 10:14:14,809 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting hadoop.tmp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-examples/target/test-data/f6d24384-7f1b-af51-948c-7734df9d716e/hadoop.tmp.dir in system properties and HBase conf
2023-07-23 10:14:14,809 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting hadoop.log.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-examples/target/test-data/f6d24384-7f1b-af51-948c-7734df9d716e/hadoop.log.dir in system properties and HBase conf
2023-07-23 10:14:14,810 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.local.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-examples/target/test-data/f6d24384-7f1b-af51-948c-7734df9d716e/mapreduce.cluster.local.dir in system properties and HBase conf
2023-07-23 10:14:14,810 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.temp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-examples/target/test-data/f6d24384-7f1b-af51-948c-7734df9d716e/mapreduce.cluster.temp.dir in system properties and HBase conf
2023-07-23 10:14:14,810 INFO [Time-limited test] hbase.HBaseTestingUtility(759): read short circuit is OFF
2023-07-23 10:14:14,934 WARN [Time-limited test] util.NativeCodeLoader(62): Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
2023-07-23 10:14:15,464 DEBUG [Time-limited test] fs.HFileSystem(308): The file system is not a DistributedFileSystem. Skipping on block location reordering
2023-07-23 10:14:15,472 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.node-labels.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-examples/target/test-data/f6d24384-7f1b-af51-948c-7734df9d716e/yarn.node-labels.fs-store.root-dir in system properties and HBase conf
2023-07-23 10:14:15,472 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.node-attribute.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-examples/target/test-data/f6d24384-7f1b-af51-948c-7734df9d716e/yarn.node-attribute.fs-store.root-dir in system properties and HBase conf
2023-07-23 10:14:15,472 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.log-dirs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-examples/target/test-data/f6d24384-7f1b-af51-948c-7734df9d716e/yarn.nodemanager.log-dirs in system properties and HBase conf
2023-07-23 10:14:15,473 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-examples/target/test-data/f6d24384-7f1b-af51-948c-7734df9d716e/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf
2023-07-23 10:14:15,473 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.active-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-examples/target/test-data/f6d24384-7f1b-af51-948c-7734df9d716e/yarn.timeline-service.entity-group-fs-store.active-dir in system properties and HBase conf
2023-07-23 10:14:15,473 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.done-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-examples/target/test-data/f6d24384-7f1b-af51-948c-7734df9d716e/yarn.timeline-service.entity-group-fs-store.done-dir in system properties and HBase conf
2023-07-23 10:14:15,474 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-examples/target/test-data/f6d24384-7f1b-af51-948c-7734df9d716e/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf
2023-07-23 10:14:15,474 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-examples/target/test-data/f6d24384-7f1b-af51-948c-7734df9d716e/dfs.journalnode.edits.dir in system properties and HBase conf
2023-07-23 10:14:15,475 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting dfs.datanode.shared.file.descriptor.paths to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-examples/target/test-data/f6d24384-7f1b-af51-948c-7734df9d716e/dfs.datanode.shared.file.descriptor.paths in system properties and HBase conf
2023-07-23 10:14:15,475 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting nfs.dump.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-examples/target/test-data/f6d24384-7f1b-af51-948c-7734df9d716e/nfs.dump.dir in system properties and HBase conf
2023-07-23 10:14:15,476 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting java.io.tmpdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-examples/target/test-data/f6d24384-7f1b-af51-948c-7734df9d716e/java.io.tmpdir in system properties and HBase conf
2023-07-23 10:14:15,476 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-examples/target/test-data/f6d24384-7f1b-af51-948c-7734df9d716e/dfs.journalnode.edits.dir in system properties and HBase conf
2023-07-23 10:14:15,476 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting dfs.provided.aliasmap.inmemory.leveldb.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-examples/target/test-data/f6d24384-7f1b-af51-948c-7734df9d716e/dfs.provided.aliasmap.inmemory.leveldb.dir in system properties and HBase conf
2023-07-23 10:14:15,476 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting fs.s3a.committer.staging.tmp.path to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-examples/target/test-data/f6d24384-7f1b-af51-948c-7734df9d716e/fs.s3a.committer.staging.tmp.path in system properties and HBase conf
Formatting using clusterid: testClusterID
2023-07-23 10:14:16,100 WARN [Time-limited test] conf.Configuration(1701): No unit for dfs.heartbeat.interval(3) assuming SECONDS
2023-07-23 10:14:16,104 WARN [Time-limited test] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS
2023-07-23 10:14:16,398 WARN [Time-limited test] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-namenode.properties,hadoop-metrics2.properties
2023-07-23 10:14:16,592 INFO [Time-limited test] log.Slf4jLog(67): Logging to org.slf4j.impl.Reload4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
2023-07-23 10:14:16,616 WARN [Time-limited test] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j
2023-07-23 10:14:16,653 INFO [Time-limited test] log.Slf4jLog(67): jetty-6.1.26
2023-07-23 10:14:16,695 INFO [Time-limited test] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/hdfs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-examples/target/test-data/f6d24384-7f1b-af51-948c-7734df9d716e/java.io.tmpdir/Jetty_localhost_36221_hdfs____.xm2j6w/webapp
2023-07-23 10:14:16,872 INFO [Time-limited test] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:36221
2023-07-23 10:14:16,886 WARN [Time-limited test] conf.Configuration(1701): No unit for dfs.heartbeat.interval(3) assuming SECONDS
2023-07-23 10:14:16,886 WARN [Time-limited test] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS
2023-07-23 10:14:17,468 WARN [Listener at localhost/35371] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j
2023-07-23 10:14:17,576 WARN [Listener at localhost/35371] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS
2023-07-23 10:14:17,608 WARN [Listener at localhost/35371] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j
2023-07-23 10:14:17,621 INFO [Listener at localhost/35371] log.Slf4jLog(67): jetty-6.1.26
2023-07-23 10:14:17,630 INFO [Listener at localhost/35371] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-examples/target/test-data/f6d24384-7f1b-af51-948c-7734df9d716e/java.io.tmpdir/Jetty_localhost_44895_datanode____8a596j/webapp
2023-07-23 10:14:17,903 INFO [Listener at localhost/35371] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:44895
2023-07-23 10:14:18,400 WARN [Listener at localhost/32799] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j
2023-07-23 10:14:18,436 WARN [Listener at localhost/32799] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS
2023-07-23 10:14:18,440 WARN [Listener at localhost/32799] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j
2023-07-23 10:14:18,442 INFO [Listener at localhost/32799] log.Slf4jLog(67): jetty-6.1.26
2023-07-23 10:14:18,448 INFO [Listener at localhost/32799] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-examples/target/test-data/f6d24384-7f1b-af51-948c-7734df9d716e/java.io.tmpdir/Jetty_localhost_44739_datanode____e3tv44/webapp
2023-07-23 10:14:18,559 INFO [Listener at localhost/32799] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:44739
2023-07-23 10:14:18,582 WARN [Listener at localhost/33075] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j
2023-07-23 10:14:18,602 WARN [Listener at localhost/33075] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS
2023-07-23 10:14:18,605 WARN [Listener at localhost/33075] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j
2023-07-23 10:14:18,606 INFO [Listener at localhost/33075] log.Slf4jLog(67): jetty-6.1.26
2023-07-23 10:14:18,611 INFO [Listener at localhost/33075] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-examples/target/test-data/f6d24384-7f1b-af51-948c-7734df9d716e/java.io.tmpdir/Jetty_localhost_38805_datanode____rdjxr3/webapp
2023-07-23 10:14:18,748 INFO [Listener at localhost/33075] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:38805
2023-07-23 10:14:18,758 WARN [Listener at localhost/34007] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j
2023-07-23 10:14:19,070 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x13e4362b879752fb: Processing first storage report for DS-0d69eb77-4df2-40f8-9738-806cd12383c6 from datanode f3954b8f-2f0a-4be4-9cbc-88621755376b
2023-07-23 10:14:19,072 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x13e4362b879752fb: from storage DS-0d69eb77-4df2-40f8-9738-806cd12383c6 node DatanodeRegistration(127.0.0.1:41931, datanodeUuid=f3954b8f-2f0a-4be4-9cbc-88621755376b, infoPort=38631, infoSecurePort=0, ipcPort=32799, storageInfo=lv=-57;cid=testClusterID;nsid=920491092;c=1690107256171), blocks: 0, hasStaleStorage: true, processing time: 1 msecs, invalidatedBlocks: 0
2023-07-23 10:14:19,072 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x5527b1a4f2ae1a2f: Processing first storage report for DS-da856993-98d3-420b-99fe-91d4a0069f30 from datanode 761557c1-a685-43d1-9930-9fa03170d606
2023-07-23 10:14:19,072 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x5527b1a4f2ae1a2f: from storage DS-da856993-98d3-420b-99fe-91d4a0069f30 node DatanodeRegistration(127.0.0.1:39323, datanodeUuid=761557c1-a685-43d1-9930-9fa03170d606, infoPort=34273, infoSecurePort=0, ipcPort=34007, storageInfo=lv=-57;cid=testClusterID;nsid=920491092;c=1690107256171), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0
2023-07-23 10:14:19,073 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x9dd447e696ac6ba0: Processing first storage report for DS-13c40b39-86d5-4d57-984c-52cd2311c37b from datanode c55724ba-a955-4469-9ba1-0e7c2f75465a
2023-07-23 10:14:19,073 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x9dd447e696ac6ba0: from storage DS-13c40b39-86d5-4d57-984c-52cd2311c37b node DatanodeRegistration(127.0.0.1:42717, datanodeUuid=c55724ba-a955-4469-9ba1-0e7c2f75465a, infoPort=37227, infoSecurePort=0, ipcPort=33075, storageInfo=lv=-57;cid=testClusterID;nsid=920491092;c=1690107256171), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0
2023-07-23 10:14:19,073 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x13e4362b879752fb: Processing first storage report for DS-37bd9da0-5ffe-4b00-8980-6042b93d40b0 from datanode f3954b8f-2f0a-4be4-9cbc-88621755376b
2023-07-23 10:14:19,073 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x13e4362b879752fb: from storage DS-37bd9da0-5ffe-4b00-8980-6042b93d40b0 node DatanodeRegistration(127.0.0.1:41931, datanodeUuid=f3954b8f-2f0a-4be4-9cbc-88621755376b, infoPort=38631, infoSecurePort=0, ipcPort=32799, storageInfo=lv=-57;cid=testClusterID;nsid=920491092;c=1690107256171), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0
2023-07-23 10:14:19,073 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x5527b1a4f2ae1a2f: Processing first storage report for DS-8c6de2c7-0fbb-47d9-a08c-feef26403db8 from datanode 761557c1-a685-43d1-9930-9fa03170d606
2023-07-23 10:14:19,073 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x5527b1a4f2ae1a2f: from storage DS-8c6de2c7-0fbb-47d9-a08c-feef26403db8 node DatanodeRegistration(127.0.0.1:39323, datanodeUuid=761557c1-a685-43d1-9930-9fa03170d606, infoPort=34273, infoSecurePort=0, ipcPort=34007, storageInfo=lv=-57;cid=testClusterID;nsid=920491092;c=1690107256171), blocks: 0, hasStaleStorage: false, processing time: 1 msecs, invalidatedBlocks: 0
2023-07-23 10:14:19,074 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x9dd447e696ac6ba0: Processing first storage report for DS-da1049f5-8e5a-4f91-933c-686bd5a94d42 from datanode c55724ba-a955-4469-9ba1-0e7c2f75465a
2023-07-23 10:14:19,074 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x9dd447e696ac6ba0: from storage DS-da1049f5-8e5a-4f91-933c-686bd5a94d42 node DatanodeRegistration(127.0.0.1:42717, datanodeUuid=c55724ba-a955-4469-9ba1-0e7c2f75465a, infoPort=37227, infoSecurePort=0, ipcPort=33075, storageInfo=lv=-57;cid=testClusterID;nsid=920491092;c=1690107256171), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0
2023-07-23 10:14:19,206 DEBUG [Listener at localhost/34007] hbase.HBaseTestingUtility(649): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-examples/target/test-data/f6d24384-7f1b-af51-948c-7734df9d716e
2023-07-23 10:14:19,305 INFO [Listener at localhost/34007] zookeeper.MiniZooKeeperCluster(258): Started connectionTimeout=30000, dir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-examples/target/test-data/f6d24384-7f1b-af51-948c-7734df9d716e/cluster_3605e493-c4ff-9248-8bc1-c2484cdc5541/zookeeper_0, clientPort=60205, secureClientPort=-1, dataDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-examples/target/test-data/f6d24384-7f1b-af51-948c-7734df9d716e/cluster_3605e493-c4ff-9248-8bc1-c2484cdc5541/zookeeper_0/version-2, dataDirSize=424 dataLogDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-examples/target/test-data/f6d24384-7f1b-af51-948c-7734df9d716e/cluster_3605e493-c4ff-9248-8bc1-c2484cdc5541/zookeeper_0/version-2, dataLogSize=424 tickTime=2000, maxClientCnxns=300, minSessionTimeout=4000, maxSessionTimeout=40000, serverId=0
2023-07-23 10:14:19,321 INFO [Listener at localhost/34007] zookeeper.MiniZooKeeperCluster(283): Started MiniZooKeeperCluster and ran 'stat' on client port=60205
2023-07-23 10:14:19,333 INFO [Listener at localhost/34007] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2023-07-23 10:14:19,336 INFO [Listener at localhost/34007] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2023-07-23 10:14:20,003 INFO [Listener at localhost/34007] util.FSUtils(471): Created version file at hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d with version=8
2023-07-23 10:14:20,003 INFO [Listener at localhost/34007] hbase.HBaseTestingUtility(1406): Setting hbase.fs.tmp.dir to hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/hbase-staging
2023-07-23 10:14:20,020 DEBUG [Listener at localhost/34007] hbase.LocalHBaseCluster(134): Setting Master Port to random.
2023-07-23 10:14:20,020 DEBUG [Listener at localhost/34007] hbase.LocalHBaseCluster(141): Setting RegionServer Port to random.
2023-07-23 10:14:20,020 DEBUG [Listener at localhost/34007] hbase.LocalHBaseCluster(151): Setting RS InfoServer Port to random.
2023-07-23 10:14:20,021 DEBUG [Listener at localhost/34007] hbase.LocalHBaseCluster(159): Setting Master InfoServer Port to random.
2023-07-23 10:14:20,366 INFO [Listener at localhost/34007] metrics.MetricRegistriesLoader(60): Loaded MetricRegistries class org.apache.hadoop.hbase.metrics.impl.MetricRegistriesImpl
2023-07-23 10:14:20,930 INFO [Listener at localhost/34007] client.ConnectionUtils(127): master/jenkins-hbase4:0 server-side Connection retries=45
2023-07-23 10:14:20,968 INFO [Listener at localhost/34007] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3
2023-07-23 10:14:20,969 INFO [Listener at localhost/34007] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3
2023-07-23 10:14:20,970 INFO [Listener at localhost/34007] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0
2023-07-23 10:14:20,970 INFO [Listener at localhost/34007] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3
2023-07-23 10:14:20,970 INFO [Listener at localhost/34007] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1
2023-07-23 10:14:21,144 INFO [Listener at localhost/34007] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.MasterService, hbase.pb.RegionServerStatusService, hbase.pb.LockService, hbase.pb.HbckService, hbase.pb.ClientMetaService, hbase.pb.ClientService, hbase.pb.AdminService
2023-07-23 10:14:21,223 DEBUG [Listener at localhost/34007] util.ClassSize(228): Using Unsafe to estimate memory layout
2023-07-23 10:14:21,320 INFO [Listener at localhost/34007] ipc.NettyRpcServer(120): Bind to /172.31.14.131:34669
2023-07-23 10:14:21,332 INFO [Listener at localhost/34007] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2023-07-23 10:14:21,334 INFO [Listener at localhost/34007] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2023-07-23 10:14:21,359 INFO [Listener at localhost/34007] zookeeper.RecoverableZooKeeper(93): Process identifier=master:34669 connecting to ZooKeeper ensemble=127.0.0.1:60205
2023-07-23 10:14:21,407 DEBUG [Listener at localhost/34007-EventThread] zookeeper.ZKWatcher(600): master:346690x0, quorum=127.0.0.1:60205, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null
2023-07-23 10:14:21,410 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): master:34669-0x10191acac940000 connected
2023-07-23 10:14:21,441 DEBUG [Listener at localhost/34007] zookeeper.ZKUtil(164): master:34669-0x10191acac940000, quorum=127.0.0.1:60205, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master
2023-07-23 10:14:21,442 DEBUG [Listener at localhost/34007] zookeeper.ZKUtil(164): master:34669-0x10191acac940000, quorum=127.0.0.1:60205, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running
2023-07-23 10:14:21,446 DEBUG [Listener at localhost/34007] zookeeper.ZKUtil(164): master:34669-0x10191acac940000, quorum=127.0.0.1:60205, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl
2023-07-23 10:14:21,455 DEBUG [Listener at localhost/34007] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=34669
2023-07-23 10:14:21,456 DEBUG [Listener at localhost/34007] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=34669
2023-07-23 10:14:21,456 DEBUG [Listener at localhost/34007] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=34669
2023-07-23 10:14:21,457 DEBUG [Listener at localhost/34007] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=34669
2023-07-23 10:14:21,458 DEBUG [Listener at localhost/34007] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=34669
2023-07-23 10:14:21,494 INFO [Listener at localhost/34007] log.Log(170): Logging initialized @7552ms to org.apache.hbase.thirdparty.org.eclipse.jetty.util.log.Slf4jLog
2023-07-23 10:14:21,637 INFO [Listener at localhost/34007] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter)
2023-07-23 10:14:21,638 INFO [Listener at localhost/34007] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter)
2023-07-23 10:14:21,639 INFO [Listener at localhost/34007] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter)
2023-07-23 10:14:21,641 INFO [Listener at localhost/34007] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context master
2023-07-23 10:14:21,641 INFO [Listener at localhost/34007] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static
2023-07-23 10:14:21,641 INFO [Listener at localhost/34007] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs
2023-07-23 10:14:21,644 INFO [Listener at localhost/34007] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint.
2023-07-23 10:14:21,715 INFO [Listener at localhost/34007] http.HttpServer(1146): Jetty bound to port 36493
2023-07-23 10:14:21,717 INFO [Listener at localhost/34007] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09
2023-07-23 10:14:21,764 INFO [Listener at localhost/34007] http.SecurityHeadersFilter(48): Added security headers filter
2023-07-23 10:14:21,768 INFO [Listener at localhost/34007] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@4d32a63c{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-examples/target/test-data/f6d24384-7f1b-af51-948c-7734df9d716e/hadoop.log.dir/,AVAILABLE}
2023-07-23 10:14:21,769 INFO [Listener at localhost/34007] http.SecurityHeadersFilter(48): Added security headers filter
2023-07-23 10:14:21,769 INFO [Listener at localhost/34007] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@cea58e2{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE}
2023-07-23 10:14:21,950 INFO [Listener at localhost/34007] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet
2023-07-23 10:14:21,967 INFO [Listener at localhost/34007] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0
2023-07-23 10:14:21,967 INFO [Listener at localhost/34007] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults
2023-07-23 10:14:21,970 INFO [Listener at localhost/34007] session.HouseKeeper(132): node0 Scavenging every 600000ms
2023-07-23 10:14:21,979 INFO [Listener at localhost/34007] http.SecurityHeadersFilter(48): Added security headers filter
2023-07-23 10:14:22,008 INFO [Listener at localhost/34007] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@42024fc3{master,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-examples/target/test-data/f6d24384-7f1b-af51-948c-7734df9d716e/java.io.tmpdir/jetty-0_0_0_0-36493-hbase-server-2_4_18-SNAPSHOT_jar-_-any-3567381058392835977/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/master}
2023-07-23 10:14:22,024 INFO [Listener at localhost/34007] server.AbstractConnector(333): Started ServerConnector@6f4a5cb0{HTTP/1.1, (http/1.1)}{0.0.0.0:36493}
2023-07-23 10:14:22,024 INFO [Listener at localhost/34007] server.Server(415): Started @8082ms
2023-07-23 10:14:22,027 INFO [Listener at localhost/34007] master.HMaster(444): hbase.rootdir=hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d, hbase.cluster.distributed=false
2023-07-23 10:14:22,104 INFO [Listener at localhost/34007] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45
2023-07-23 10:14:22,104 INFO [Listener at localhost/34007] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3
2023-07-23 10:14:22,104 INFO [Listener at localhost/34007] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3
2023-07-23 10:14:22,104 INFO [Listener at localhost/34007] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0
2023-07-23 10:14:22,105 INFO [Listener at localhost/34007] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3
2023-07-23 10:14:22,105 INFO [Listener at localhost/34007] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1
2023-07-23 10:14:22,110 INFO [Listener at localhost/34007] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService
2023-07-23 10:14:22,113 INFO [Listener at localhost/34007] ipc.NettyRpcServer(120): Bind to /172.31.14.131:46313
2023-07-23 10:14:22,115 INFO [Listener at localhost/34007] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB
2023-07-23 10:14:22,123 DEBUG [Listener at localhost/34007] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5
2023-07-23 10:14:22,124 INFO [Listener at localhost/34007] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2023-07-23 10:14:22,125 INFO [Listener at localhost/34007] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2023-07-23 10:14:22,127 INFO [Listener at localhost/34007] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:46313 connecting to ZooKeeper ensemble=127.0.0.1:60205
2023-07-23 10:14:22,131 DEBUG [Listener at localhost/34007-EventThread] zookeeper.ZKWatcher(600): regionserver:463130x0, quorum=127.0.0.1:60205, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null
2023-07-23 10:14:22,132 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:46313-0x10191acac940001 connected
2023-07-23 10:14:22,132 DEBUG [Listener at localhost/34007] zookeeper.ZKUtil(164): regionserver:46313-0x10191acac940001, quorum=127.0.0.1:60205, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master
2023-07-23 10:14:22,134 DEBUG [Listener at localhost/34007] zookeeper.ZKUtil(164): regionserver:46313-0x10191acac940001, quorum=127.0.0.1:60205, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running
2023-07-23 10:14:22,135 DEBUG [Listener at localhost/34007] zookeeper.ZKUtil(164): regionserver:46313-0x10191acac940001, quorum=127.0.0.1:60205, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl
2023-07-23 10:14:22,135 DEBUG [Listener at localhost/34007] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=46313
2023-07-23 10:14:22,136 DEBUG [Listener at localhost/34007] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=46313
2023-07-23 10:14:22,136 DEBUG [Listener at localhost/34007] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=46313
2023-07-23 10:14:22,137 DEBUG [Listener at localhost/34007] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=46313
2023-07-23 10:14:22,137 DEBUG [Listener at localhost/34007] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=46313
2023-07-23 10:14:22,140 INFO [Listener at localhost/34007] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter)
2023-07-23 10:14:22,140 INFO [Listener at localhost/34007] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter)
2023-07-23 10:14:22,141 INFO [Listener at localhost/34007] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter)
2023-07-23 10:14:22,142 INFO [Listener
at localhost/34007] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-23 10:14:22,142 INFO [Listener at localhost/34007] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-23 10:14:22,142 INFO [Listener at localhost/34007] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-23 10:14:22,142 INFO [Listener at localhost/34007] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-23 10:14:22,144 INFO [Listener at localhost/34007] http.HttpServer(1146): Jetty bound to port 38619 2023-07-23 10:14:22,145 INFO [Listener at localhost/34007] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-23 10:14:22,152 INFO [Listener at localhost/34007] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-23 10:14:22,152 INFO [Listener at localhost/34007] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@5d4846ef{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-examples/target/test-data/f6d24384-7f1b-af51-948c-7734df9d716e/hadoop.log.dir/,AVAILABLE} 2023-07-23 10:14:22,152 INFO [Listener at localhost/34007] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-23 10:14:22,153 INFO [Listener at localhost/34007] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@34b1c900{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-23 
10:14:22,285 INFO [Listener at localhost/34007] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-23 10:14:22,287 INFO [Listener at localhost/34007] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-23 10:14:22,287 INFO [Listener at localhost/34007] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-23 10:14:22,288 INFO [Listener at localhost/34007] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-23 10:14:22,289 INFO [Listener at localhost/34007] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-23 10:14:22,293 INFO [Listener at localhost/34007] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@64f60122{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-examples/target/test-data/f6d24384-7f1b-af51-948c-7734df9d716e/java.io.tmpdir/jetty-0_0_0_0-38619-hbase-server-2_4_18-SNAPSHOT_jar-_-any-5558767496327524375/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-23 10:14:22,294 INFO [Listener at localhost/34007] server.AbstractConnector(333): Started ServerConnector@380b280{HTTP/1.1, (http/1.1)}{0.0.0.0:38619} 2023-07-23 10:14:22,294 INFO [Listener at localhost/34007] server.Server(415): Started @8352ms 2023-07-23 10:14:22,308 INFO [Listener at localhost/34007] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-23 10:14:22,308 INFO [Listener at localhost/34007] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-23 10:14:22,308 INFO [Listener at localhost/34007] ipc.RpcExecutor(189): Instantiated 
priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-23 10:14:22,309 INFO [Listener at localhost/34007] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-23 10:14:22,309 INFO [Listener at localhost/34007] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-23 10:14:22,309 INFO [Listener at localhost/34007] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-23 10:14:22,309 INFO [Listener at localhost/34007] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-23 10:14:22,311 INFO [Listener at localhost/34007] ipc.NettyRpcServer(120): Bind to /172.31.14.131:46561 2023-07-23 10:14:22,312 INFO [Listener at localhost/34007] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-23 10:14:22,312 DEBUG [Listener at localhost/34007] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-23 10:14:22,313 INFO [Listener at localhost/34007] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-23 10:14:22,315 INFO [Listener at localhost/34007] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-23 10:14:22,316 INFO [Listener at localhost/34007] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:46561 
connecting to ZooKeeper ensemble=127.0.0.1:60205 2023-07-23 10:14:22,319 DEBUG [Listener at localhost/34007-EventThread] zookeeper.ZKWatcher(600): regionserver:465610x0, quorum=127.0.0.1:60205, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-23 10:14:22,320 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:46561-0x10191acac940002 connected 2023-07-23 10:14:22,320 DEBUG [Listener at localhost/34007] zookeeper.ZKUtil(164): regionserver:46561-0x10191acac940002, quorum=127.0.0.1:60205, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-23 10:14:22,321 DEBUG [Listener at localhost/34007] zookeeper.ZKUtil(164): regionserver:46561-0x10191acac940002, quorum=127.0.0.1:60205, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-23 10:14:22,322 DEBUG [Listener at localhost/34007] zookeeper.ZKUtil(164): regionserver:46561-0x10191acac940002, quorum=127.0.0.1:60205, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-23 10:14:22,322 DEBUG [Listener at localhost/34007] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=46561 2023-07-23 10:14:22,323 DEBUG [Listener at localhost/34007] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=46561 2023-07-23 10:14:22,326 DEBUG [Listener at localhost/34007] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=46561 2023-07-23 10:14:22,328 DEBUG [Listener at localhost/34007] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=46561 2023-07-23 10:14:22,328 DEBUG [Listener at localhost/34007] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=46561 2023-07-23 10:14:22,331 INFO [Listener at 
localhost/34007] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-23 10:14:22,331 INFO [Listener at localhost/34007] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-23 10:14:22,331 INFO [Listener at localhost/34007] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-23 10:14:22,331 INFO [Listener at localhost/34007] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-23 10:14:22,332 INFO [Listener at localhost/34007] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-23 10:14:22,332 INFO [Listener at localhost/34007] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-23 10:14:22,332 INFO [Listener at localhost/34007] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
2023-07-23 10:14:22,333 INFO [Listener at localhost/34007] http.HttpServer(1146): Jetty bound to port 44649 2023-07-23 10:14:22,333 INFO [Listener at localhost/34007] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-23 10:14:22,339 INFO [Listener at localhost/34007] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-23 10:14:22,339 INFO [Listener at localhost/34007] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@642a45f6{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-examples/target/test-data/f6d24384-7f1b-af51-948c-7734df9d716e/hadoop.log.dir/,AVAILABLE} 2023-07-23 10:14:22,339 INFO [Listener at localhost/34007] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-23 10:14:22,339 INFO [Listener at localhost/34007] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@704fb744{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-23 10:14:22,467 INFO [Listener at localhost/34007] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-23 10:14:22,468 INFO [Listener at localhost/34007] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-23 10:14:22,468 INFO [Listener at localhost/34007] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-23 10:14:22,469 INFO [Listener at localhost/34007] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-23 10:14:22,470 INFO [Listener at localhost/34007] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-23 10:14:22,471 INFO [Listener at localhost/34007] handler.ContextHandler(921): 
Started o.a.h.t.o.e.j.w.WebAppContext@12fc3501{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-examples/target/test-data/f6d24384-7f1b-af51-948c-7734df9d716e/java.io.tmpdir/jetty-0_0_0_0-44649-hbase-server-2_4_18-SNAPSHOT_jar-_-any-4854156169799273161/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-23 10:14:22,473 INFO [Listener at localhost/34007] server.AbstractConnector(333): Started ServerConnector@2a761dcc{HTTP/1.1, (http/1.1)}{0.0.0.0:44649} 2023-07-23 10:14:22,473 INFO [Listener at localhost/34007] server.Server(415): Started @8531ms 2023-07-23 10:14:22,489 INFO [Listener at localhost/34007] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-23 10:14:22,489 INFO [Listener at localhost/34007] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-23 10:14:22,489 INFO [Listener at localhost/34007] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-23 10:14:22,490 INFO [Listener at localhost/34007] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-23 10:14:22,490 INFO [Listener at localhost/34007] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-23 10:14:22,490 INFO [Listener at localhost/34007] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 
2023-07-23 10:14:22,490 INFO [Listener at localhost/34007] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-23 10:14:22,492 INFO [Listener at localhost/34007] ipc.NettyRpcServer(120): Bind to /172.31.14.131:45649 2023-07-23 10:14:22,492 INFO [Listener at localhost/34007] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-23 10:14:22,495 DEBUG [Listener at localhost/34007] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-23 10:14:22,496 INFO [Listener at localhost/34007] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-23 10:14:22,498 INFO [Listener at localhost/34007] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-23 10:14:22,500 INFO [Listener at localhost/34007] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:45649 connecting to ZooKeeper ensemble=127.0.0.1:60205 2023-07-23 10:14:22,504 DEBUG [Listener at localhost/34007-EventThread] zookeeper.ZKWatcher(600): regionserver:456490x0, quorum=127.0.0.1:60205, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-23 10:14:22,506 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:45649-0x10191acac940003 connected 2023-07-23 10:14:22,506 DEBUG [Listener at localhost/34007] zookeeper.ZKUtil(164): regionserver:45649-0x10191acac940003, quorum=127.0.0.1:60205, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-23 10:14:22,506 DEBUG [Listener at localhost/34007] zookeeper.ZKUtil(164): regionserver:45649-0x10191acac940003, quorum=127.0.0.1:60205, baseZNode=/hbase 
Set watcher on znode that does not yet exist, /hbase/running 2023-07-23 10:14:22,507 DEBUG [Listener at localhost/34007] zookeeper.ZKUtil(164): regionserver:45649-0x10191acac940003, quorum=127.0.0.1:60205, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-23 10:14:22,508 DEBUG [Listener at localhost/34007] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=45649 2023-07-23 10:14:22,510 DEBUG [Listener at localhost/34007] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=45649 2023-07-23 10:14:22,512 DEBUG [Listener at localhost/34007] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=45649 2023-07-23 10:14:22,513 DEBUG [Listener at localhost/34007] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=45649 2023-07-23 10:14:22,513 DEBUG [Listener at localhost/34007] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=45649 2023-07-23 10:14:22,516 INFO [Listener at localhost/34007] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-23 10:14:22,516 INFO [Listener at localhost/34007] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-23 10:14:22,517 INFO [Listener at localhost/34007] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-23 10:14:22,517 INFO [Listener at localhost/34007] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-23 10:14:22,517 INFO [Listener at localhost/34007] http.HttpServer(886): Added 
filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-23 10:14:22,518 INFO [Listener at localhost/34007] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-23 10:14:22,518 INFO [Listener at localhost/34007] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-23 10:14:22,519 INFO [Listener at localhost/34007] http.HttpServer(1146): Jetty bound to port 36635 2023-07-23 10:14:22,519 INFO [Listener at localhost/34007] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-23 10:14:22,521 INFO [Listener at localhost/34007] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-23 10:14:22,522 INFO [Listener at localhost/34007] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@5dabd4cf{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-examples/target/test-data/f6d24384-7f1b-af51-948c-7734df9d716e/hadoop.log.dir/,AVAILABLE} 2023-07-23 10:14:22,522 INFO [Listener at localhost/34007] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-23 10:14:22,523 INFO [Listener at localhost/34007] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@7deee04d{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-23 10:14:22,651 INFO [Listener at localhost/34007] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-23 10:14:22,652 INFO [Listener at localhost/34007] 
session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-23 10:14:22,652 INFO [Listener at localhost/34007] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-23 10:14:22,652 INFO [Listener at localhost/34007] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-23 10:14:22,656 INFO [Listener at localhost/34007] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-23 10:14:22,657 INFO [Listener at localhost/34007] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@56f29558{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-examples/target/test-data/f6d24384-7f1b-af51-948c-7734df9d716e/java.io.tmpdir/jetty-0_0_0_0-36635-hbase-server-2_4_18-SNAPSHOT_jar-_-any-7182081686341719591/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-23 10:14:22,659 INFO [Listener at localhost/34007] server.AbstractConnector(333): Started ServerConnector@45900881{HTTP/1.1, (http/1.1)}{0.0.0.0:36635} 2023-07-23 10:14:22,659 INFO [Listener at localhost/34007] server.Server(415): Started @8717ms 2023-07-23 10:14:22,669 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-23 10:14:22,688 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.AbstractConnector(333): Started ServerConnector@3146e66c{HTTP/1.1, (http/1.1)}{0.0.0.0:36921} 2023-07-23 10:14:22,688 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.Server(415): Started @8746ms 2023-07-23 10:14:22,688 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2168): Adding backup master ZNode /hbase/backup-masters/jenkins-hbase4.apache.org,34669,1690107260184 2023-07-23 10:14:22,699 DEBUG 
[Listener at localhost/34007-EventThread] zookeeper.ZKWatcher(600): master:34669-0x10191acac940000, quorum=127.0.0.1:60205, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-07-23 10:14:22,700 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:34669-0x10191acac940000, quorum=127.0.0.1:60205, baseZNode=/hbase Set watcher on existing znode=/hbase/backup-masters/jenkins-hbase4.apache.org,34669,1690107260184 2023-07-23 10:14:22,719 DEBUG [Listener at localhost/34007-EventThread] zookeeper.ZKWatcher(600): regionserver:46561-0x10191acac940002, quorum=127.0.0.1:60205, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-23 10:14:22,719 DEBUG [Listener at localhost/34007-EventThread] zookeeper.ZKWatcher(600): master:34669-0x10191acac940000, quorum=127.0.0.1:60205, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-23 10:14:22,720 DEBUG [Listener at localhost/34007-EventThread] zookeeper.ZKWatcher(600): master:34669-0x10191acac940000, quorum=127.0.0.1:60205, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-23 10:14:22,719 DEBUG [Listener at localhost/34007-EventThread] zookeeper.ZKWatcher(600): regionserver:46313-0x10191acac940001, quorum=127.0.0.1:60205, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-23 10:14:22,719 DEBUG [Listener at localhost/34007-EventThread] zookeeper.ZKWatcher(600): regionserver:45649-0x10191acac940003, quorum=127.0.0.1:60205, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-23 10:14:22,722 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:34669-0x10191acac940000, quorum=127.0.0.1:60205, baseZNode=/hbase Set watcher on existing 
znode=/hbase/master 2023-07-23 10:14:22,724 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ActiveMasterManager(227): Deleting ZNode for /hbase/backup-masters/jenkins-hbase4.apache.org,34669,1690107260184 from backup master directory 2023-07-23 10:14:22,724 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): master:34669-0x10191acac940000, quorum=127.0.0.1:60205, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-23 10:14:22,728 DEBUG [Listener at localhost/34007-EventThread] zookeeper.ZKWatcher(600): master:34669-0x10191acac940000, quorum=127.0.0.1:60205, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/backup-masters/jenkins-hbase4.apache.org,34669,1690107260184 2023-07-23 10:14:22,728 DEBUG [Listener at localhost/34007-EventThread] zookeeper.ZKWatcher(600): master:34669-0x10191acac940000, quorum=127.0.0.1:60205, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-07-23 10:14:22,729 WARN [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
2023-07-23 10:14:22,729 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ActiveMasterManager(237): Registered as active master=jenkins-hbase4.apache.org,34669,1690107260184 2023-07-23 10:14:22,732 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.ChunkCreator(497): Allocating data MemStoreChunkPool with chunk size 2 MB, max count 352, initial count 0 2023-07-23 10:14:22,734 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.ChunkCreator(497): Allocating index MemStoreChunkPool with chunk size 204.80 KB, max count 391, initial count 0 2023-07-23 10:14:22,841 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] util.FSUtils(620): Created cluster ID file at hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/hbase.id with ID: 5a4f8688-5b45-40db-a6cf-db13f7c5dcba 2023-07-23 10:14:22,894 INFO [master/jenkins-hbase4:0:becomeActiveMaster] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-23 10:14:22,913 DEBUG [Listener at localhost/34007-EventThread] zookeeper.ZKWatcher(600): master:34669-0x10191acac940000, quorum=127.0.0.1:60205, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-23 10:14:22,968 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ReadOnlyZKClient(139): Connect 0x7fe65171 to 127.0.0.1:60205 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-23 10:14:22,999 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@70653493, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-23 10:14:23,026 INFO [master/jenkins-hbase4:0:becomeActiveMaster] 
region.MasterRegion(309): Create or load local region for table 'master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}
2023-07-23 10:14:23,028 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(132): Injected flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000
2023-07-23 10:14:23,058 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] asyncfs.FanOutOneBlockAsyncDFSOutputHelper(264): ClientProtocol::create wrong number of arguments, should be hadoop 3.2 or below
2023-07-23 10:14:23,058 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] asyncfs.FanOutOneBlockAsyncDFSOutputHelper(270): ClientProtocol::create wrong number of arguments, should be hadoop 2.x
2023-07-23 10:14:23,061 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] asyncfs.FanOutOneBlockAsyncDFSOutputHelper(279): can not find SHOULD_REPLICATE flag, should be hadoop 2.x
java.lang.IllegalArgumentException: No enum constant org.apache.hadoop.fs.CreateFlag.SHOULD_REPLICATE
	at java.lang.Enum.valueOf(Enum.java:238)
	at org.apache.hadoop.fs.CreateFlag.valueOf(CreateFlag.java:63)
	at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.loadShouldReplicateFlag(FanOutOneBlockAsyncDFSOutputHelper.java:277)
	at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.<clinit>(FanOutOneBlockAsyncDFSOutputHelper.java:304)
	at java.lang.Class.forName0(Native Method)
	at java.lang.Class.forName(Class.java:264)
	at org.apache.hadoop.hbase.wal.AsyncFSWALProvider.load(AsyncFSWALProvider.java:139)
	at org.apache.hadoop.hbase.wal.WALFactory.getProviderClass(WALFactory.java:135)
	at org.apache.hadoop.hbase.wal.WALFactory.getProvider(WALFactory.java:175)
	at org.apache.hadoop.hbase.wal.WALFactory.<init>(WALFactory.java:202)
	at org.apache.hadoop.hbase.wal.WALFactory.<init>(WALFactory.java:182)
	at org.apache.hadoop.hbase.master.region.MasterRegion.create(MasterRegion.java:339)
	at org.apache.hadoop.hbase.master.region.MasterRegionFactory.create(MasterRegionFactory.java:104)
	at org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:855)
	at org.apache.hadoop.hbase.master.HMaster.startActiveMasterManager(HMaster.java:2193)
	at org.apache.hadoop.hbase.master.HMaster.lambda$run$0(HMaster.java:528)
	at java.lang.Thread.run(Thread.java:750)
2023-07-23 10:14:23,067 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(243): No decryptEncryptedDataEncryptionKey method in DFSClient, should be hadoop version with HDFS-12396
java.lang.NoSuchMethodException: org.apache.hadoop.hdfs.DFSClient.decryptEncryptedDataEncryptionKey(org.apache.hadoop.fs.FileEncryptionInfo)
	at java.lang.Class.getDeclaredMethod(Class.java:2130)
	at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper.createTransparentCryptoHelperWithoutHDFS12396(FanOutOneBlockAsyncDFSOutputSaslHelper.java:182)
	at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper.createTransparentCryptoHelper(FanOutOneBlockAsyncDFSOutputSaslHelper.java:241)
	at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper.<clinit>(FanOutOneBlockAsyncDFSOutputSaslHelper.java:252)
	at java.lang.Class.forName0(Native Method)
	at java.lang.Class.forName(Class.java:264)
	at org.apache.hadoop.hbase.wal.AsyncFSWALProvider.load(AsyncFSWALProvider.java:140)
	at org.apache.hadoop.hbase.wal.WALFactory.getProviderClass(WALFactory.java:135)
	at org.apache.hadoop.hbase.wal.WALFactory.getProvider(WALFactory.java:175)
	at org.apache.hadoop.hbase.wal.WALFactory.<init>(WALFactory.java:202)
	at org.apache.hadoop.hbase.wal.WALFactory.<init>(WALFactory.java:182)
	at org.apache.hadoop.hbase.master.region.MasterRegion.create(MasterRegion.java:339)
	at org.apache.hadoop.hbase.master.region.MasterRegionFactory.create(MasterRegionFactory.java:104)
	at org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:855)
	at org.apache.hadoop.hbase.master.HMaster.startActiveMasterManager(HMaster.java:2193)
	at org.apache.hadoop.hbase.master.HMaster.lambda$run$0(HMaster.java:528)
	at java.lang.Thread.run(Thread.java:750)
2023-07-23 10:14:23,069 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider
2023-07-23 10:14:23,113 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7693): Creating {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, under table dir hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/MasterData/data/master/store-tmp
2023-07-23 10:14:23,169 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable
2023-07-23 10:14:23,169 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes
2023-07-23 10:14:23,169 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682.
2023-07-23 10:14:23,169 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682.
2023-07-23 10:14:23,169 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms
2023-07-23 10:14:23,169 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682.
2023-07-23 10:14:23,169 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682.
2023-07-23 10:14:23,169 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682:
2023-07-23 10:14:23,171 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(191): WALDir=hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/MasterData/WALs/jenkins-hbase4.apache.org,34669,1690107260184
2023-07-23 10:14:23,198 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C34669%2C1690107260184, suffix=, logDir=hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/MasterData/WALs/jenkins-hbase4.apache.org,34669,1690107260184, archiveDir=hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/MasterData/oldWALs, maxLogs=10
2023-07-23 10:14:23,265 DEBUG [RS-EventLoopGroup-5-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:39323,DS-da856993-98d3-420b-99fe-91d4a0069f30,DISK]
2023-07-23 10:14:23,265 DEBUG [RS-EventLoopGroup-5-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:42717,DS-13c40b39-86d5-4d57-984c-52cd2311c37b,DISK]
2023-07-23 10:14:23,265 DEBUG [RS-EventLoopGroup-5-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:41931,DS-0d69eb77-4df2-40f8-9738-806cd12383c6,DISK]
2023-07-23 10:14:23,274 DEBUG [RS-EventLoopGroup-5-1] asyncfs.ProtobufDecoder(123): Hadoop 3.2 and below use unshaded protobuf.
java.lang.ClassNotFoundException: org.apache.hadoop.thirdparty.protobuf.MessageLite
	at java.net.URLClassLoader.findClass(URLClassLoader.java:387)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:418)
	at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:352)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:351)
	at java.lang.Class.forName0(Native Method)
	at java.lang.Class.forName(Class.java:264)
	at org.apache.hadoop.hbase.io.asyncfs.ProtobufDecoder.<clinit>(ProtobufDecoder.java:118)
	at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.processWriteBlockResponse(FanOutOneBlockAsyncDFSOutputHelper.java:340)
	at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.access$100(FanOutOneBlockAsyncDFSOutputHelper.java:112)
	at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper$4.operationComplete(FanOutOneBlockAsyncDFSOutputHelper.java:424)
	at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:590)
	at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:557)
	at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:492)
	at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.addListener(DefaultPromise.java:185)
	at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.initialize(FanOutOneBlockAsyncDFSOutputHelper.java:418)
	at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.access$300(FanOutOneBlockAsyncDFSOutputHelper.java:112)
	at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper$5.operationComplete(FanOutOneBlockAsyncDFSOutputHelper.java:476)
	at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper$5.operationComplete(FanOutOneBlockAsyncDFSOutputHelper.java:471)
	at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:590)
	at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListeners0(DefaultPromise.java:583)
	at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:559)
	at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:492)
	at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.setValue0(DefaultPromise.java:636)
	at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.setSuccess0(DefaultPromise.java:625)
	at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.trySuccess(DefaultPromise.java:105)
	at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPromise.trySuccess(DefaultChannelPromise.java:84)
	at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.fulfillConnectPromise(AbstractEpollChannel.java:653)
	at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.finishConnect(AbstractEpollChannel.java:691)
	at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.epollOutReady(AbstractEpollChannel.java:567)
	at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:489)
	at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:397)
	at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997)
	at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
	at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
	at java.lang.Thread.run(Thread.java:750)
2023-07-23 10:14:23,342 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/MasterData/WALs/jenkins-hbase4.apache.org,34669,1690107260184/jenkins-hbase4.apache.org%2C34669%2C1690107260184.1690107263209
2023-07-23 10:14:23,343 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:39323,DS-da856993-98d3-420b-99fe-91d4a0069f30,DISK], DatanodeInfoWithStorage[127.0.0.1:42717,DS-13c40b39-86d5-4d57-984c-52cd2311c37b,DISK], DatanodeInfoWithStorage[127.0.0.1:41931,DS-0d69eb77-4df2-40f8-9738-806cd12383c6,DISK]]
2023-07-23 10:14:23,343 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7854): Opening region: {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}
2023-07-23 10:14:23,344 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable
2023-07-23 10:14:23,347 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7894): checking encryption for 1595e783b53d99cd5eef43b6debb2682
2023-07-23 10:14:23,349 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7897): checking classloading for 1595e783b53d99cd5eef43b6debb2682
2023-07-23 10:14:23,433 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family proc of region 1595e783b53d99cd5eef43b6debb2682
2023-07-23 10:14:23,440 DEBUG [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc
2023-07-23 10:14:23,480 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1595e783b53d99cd5eef43b6debb2682 columnFamilyName proc
2023-07-23 10:14:23,497 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(310): Store=1595e783b53d99cd5eef43b6debb2682/proc, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE
2023-07-23 10:14:23,503 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682
2023-07-23 10:14:23,505 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682
2023-07-23 10:14:23,528 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1055): writing seq id for 1595e783b53d99cd5eef43b6debb2682
2023-07-23 10:14:23,543 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1
2023-07-23 10:14:23,552 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1072): Opened 1595e783b53d99cd5eef43b6debb2682; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11972363200, jitterRate=0.11501321196556091}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1}
2023-07-23 10:14:23,553 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(965): Region open journal for 1595e783b53d99cd5eef43b6debb2682:
2023-07-23 10:14:23,562 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(122): Constructor flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000, compactMin=4
2023-07-23 10:14:23,621 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(104): Starting the Region Procedure Store, number threads=5
2023-07-23 10:14:23,621 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(562): Starting 5 core workers (bigger of cpus/4 or 16) with max (burst) worker count=50
2023-07-23 10:14:23,625 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(255): Starting Region Procedure Store lease recovery...
2023-07-23 10:14:23,629 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(582): Recovered RegionProcedureStore lease in 3 msec
2023-07-23 10:14:23,689 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(596): Loaded RegionProcedureStore in 59 msec
2023-07-23 10:14:23,690 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.RemoteProcedureDispatcher(96): Instantiated, coreThreads=3 (allowCoreThreadTimeOut=true), queueMaxSize=32, operationDelay=150
2023-07-23 10:14:23,736 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(253): hbase:meta replica znodes: []
2023-07-23 10:14:23,748 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.RegionServerTracker(124): Starting RegionServerTracker; 0 have existing ServerCrashProcedures, 0 possibly 'live' servers, and 0 'splitting'.
2023-07-23 10:14:23,798 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false
2023-07-23 10:14:23,811 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc.
2023-07-23 10:14:23,813 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:34669-0x10191acac940000, quorum=127.0.0.1:60205, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/balancer
2023-07-23 10:14:23,828 INFO [master/jenkins-hbase4:0:becomeActiveMaster] normalizer.RegionNormalizerWorker(118): Normalizer rate limit set to unlimited
2023-07-23 10:14:23,835 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:34669-0x10191acac940000, quorum=127.0.0.1:60205, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/normalizer
2023-07-23 10:14:23,842 DEBUG [Listener at localhost/34007-EventThread] zookeeper.ZKWatcher(600): master:34669-0x10191acac940000, quorum=127.0.0.1:60205, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase
2023-07-23 10:14:23,844 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:34669-0x10191acac940000, quorum=127.0.0.1:60205, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/split
2023-07-23 10:14:23,846 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:34669-0x10191acac940000, quorum=127.0.0.1:60205, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/merge
2023-07-23 10:14:23,867 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:34669-0x10191acac940000, quorum=127.0.0.1:60205, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/snapshot-cleanup
2023-07-23 10:14:23,874 DEBUG [Listener at localhost/34007-EventThread] zookeeper.ZKWatcher(600): regionserver:45649-0x10191acac940003, quorum=127.0.0.1:60205, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running
2023-07-23 10:14:23,874 DEBUG [Listener at localhost/34007-EventThread] zookeeper.ZKWatcher(600): master:34669-0x10191acac940000, quorum=127.0.0.1:60205, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running
2023-07-23 10:14:23,875 DEBUG [Listener at localhost/34007-EventThread] zookeeper.ZKWatcher(600): master:34669-0x10191acac940000, quorum=127.0.0.1:60205, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase
2023-07-23 10:14:23,874 DEBUG [Listener at localhost/34007-EventThread] zookeeper.ZKWatcher(600): regionserver:46561-0x10191acac940002, quorum=127.0.0.1:60205, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running
2023-07-23 10:14:23,874 DEBUG [Listener at localhost/34007-EventThread] zookeeper.ZKWatcher(600): regionserver:46313-0x10191acac940001, quorum=127.0.0.1:60205, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running
2023-07-23 10:14:23,878 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(744): Active/primary master=jenkins-hbase4.apache.org,34669,1690107260184, sessionid=0x10191acac940000, setting cluster-up flag (Was=false)
2023-07-23 10:14:23,914 DEBUG [Listener at localhost/34007-EventThread] zookeeper.ZKWatcher(600): master:34669-0x10191acac940000, quorum=127.0.0.1:60205, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase
2023-07-23 10:14:23,926 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/flush-table-proc/acquired, /hbase/flush-table-proc/reached, /hbase/flush-table-proc/abort
2023-07-23 10:14:23,928 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,34669,1690107260184
2023-07-23 10:14:23,936 DEBUG [Listener at localhost/34007-EventThread] zookeeper.ZKWatcher(600): master:34669-0x10191acac940000, quorum=127.0.0.1:60205, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase
2023-07-23 10:14:23,945 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/online-snapshot/acquired, /hbase/online-snapshot/reached, /hbase/online-snapshot/abort
2023-07-23 10:14:23,948 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,34669,1690107260184
2023-07-23 10:14:23,951 WARN [master/jenkins-hbase4:0:becomeActiveMaster] snapshot.SnapshotManager(302): Couldn't delete working snapshot directory: hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/.hbase-snapshot/.tmp
2023-07-23 10:14:23,967 INFO [RS:0;jenkins-hbase4:46313] regionserver.HRegionServer(951): ClusterId : 5a4f8688-5b45-40db-a6cf-db13f7c5dcba
2023-07-23 10:14:23,968 INFO [RS:1;jenkins-hbase4:46561] regionserver.HRegionServer(951): ClusterId : 5a4f8688-5b45-40db-a6cf-db13f7c5dcba
2023-07-23 10:14:23,978 INFO [RS:2;jenkins-hbase4:45649] regionserver.HRegionServer(951): ClusterId : 5a4f8688-5b45-40db-a6cf-db13f7c5dcba
2023-07-23 10:14:23,989 DEBUG [RS:2;jenkins-hbase4:45649] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing
2023-07-23 10:14:23,989 DEBUG [RS:1;jenkins-hbase4:46561] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing
2023-07-23 10:14:23,989 DEBUG [RS:0;jenkins-hbase4:46313] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing
2023-07-23 10:14:23,998 DEBUG [RS:1;jenkins-hbase4:46561] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized
2023-07-23 10:14:23,998 DEBUG [RS:0;jenkins-hbase4:46313] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized
2023-07-23 10:14:23,999 DEBUG [RS:0;jenkins-hbase4:46313] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing
2023-07-23 10:14:23,999 DEBUG [RS:1;jenkins-hbase4:46561] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing
2023-07-23 10:14:24,000 DEBUG [RS:2;jenkins-hbase4:45649] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized
2023-07-23 10:14:24,000 DEBUG [RS:2;jenkins-hbase4:45649] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing
2023-07-23 10:14:24,005 DEBUG [RS:0;jenkins-hbase4:46313] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized
2023-07-23 10:14:24,006 DEBUG [RS:2;jenkins-hbase4:45649] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized
2023-07-23 10:14:24,006 DEBUG [RS:1;jenkins-hbase4:46561] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized
2023-07-23 10:14:24,011 DEBUG [RS:1;jenkins-hbase4:46561] zookeeper.ReadOnlyZKClient(139): Connect 0x6ea5b49a to 127.0.0.1:60205 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms
2023-07-23 10:14:24,011 DEBUG [RS:0;jenkins-hbase4:46313] zookeeper.ReadOnlyZKClient(139): Connect 0x499f79fa to 127.0.0.1:60205 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms
2023-07-23 10:14:24,011 DEBUG [RS:2;jenkins-hbase4:45649] zookeeper.ReadOnlyZKClient(139): Connect 0x5f33889d to 127.0.0.1:60205 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms
2023-07-23 10:14:24,039 DEBUG [RS:1;jenkins-hbase4:46561] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@6062473e, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null
2023-07-23 10:14:24,051 DEBUG [RS:1;jenkins-hbase4:46561] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@645ddfea, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0
2023-07-23 10:14:24,052 DEBUG [RS:2;jenkins-hbase4:45649] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@287ba1ab, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null
2023-07-23 10:14:24,052 DEBUG [RS:2;jenkins-hbase4:45649] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@278a88f1, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0
2023-07-23 10:14:24,054 DEBUG [RS:0;jenkins-hbase4:46313] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@100f4ee2, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null
2023-07-23 10:14:24,055 DEBUG [RS:0;jenkins-hbase4:46313] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@75464144, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0
2023-07-23 10:14:24,092 DEBUG [RS:2;jenkins-hbase4:45649] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:2;jenkins-hbase4:45649
2023-07-23 10:14:24,093 DEBUG [RS:0;jenkins-hbase4:46313] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:0;jenkins-hbase4:46313
2023-07-23 10:14:24,111 INFO [RS:0;jenkins-hbase4:46313] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled
2023-07-23 10:14:24,111 INFO [RS:0;jenkins-hbase4:46313] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled
2023-07-23 10:14:24,111 DEBUG [RS:0;jenkins-hbase4:46313] regionserver.HRegionServer(1022): About to register with Master.
2023-07-23 10:14:24,119 INFO [RS:2;jenkins-hbase4:45649] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled
2023-07-23 10:14:24,123 INFO [RS:2;jenkins-hbase4:45649] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled
2023-07-23 10:14:24,124 DEBUG [RS:2;jenkins-hbase4:45649] regionserver.HRegionServer(1022): About to register with Master.
2023-07-23 10:14:24,126 INFO [RS:2;jenkins-hbase4:45649] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,34669,1690107260184 with isa=jenkins-hbase4.apache.org/172.31.14.131:45649, startcode=1690107262489
2023-07-23 10:14:24,126 INFO [RS:0;jenkins-hbase4:46313] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,34669,1690107260184 with isa=jenkins-hbase4.apache.org/172.31.14.131:46313, startcode=1690107262103
2023-07-23 10:14:24,129 DEBUG [RS:1;jenkins-hbase4:46561] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:1;jenkins-hbase4:46561
2023-07-23 10:14:24,129 INFO [RS:1;jenkins-hbase4:46561] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled
2023-07-23 10:14:24,129 INFO [RS:1;jenkins-hbase4:46561] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled
2023-07-23 10:14:24,129 DEBUG [RS:1;jenkins-hbase4:46561] regionserver.HRegionServer(1022): About to register with Master.
2023-07-23 10:14:24,130 INFO [RS:1;jenkins-hbase4:46561] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,34669,1690107260184 with isa=jenkins-hbase4.apache.org/172.31.14.131:46561, startcode=1690107262307
2023-07-23 10:14:24,159 DEBUG [RS:2;jenkins-hbase4:45649] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false
2023-07-23 10:14:24,159 DEBUG [RS:0;jenkins-hbase4:46313] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false
2023-07-23 10:14:24,159 DEBUG [RS:1;jenkins-hbase4:46561] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false
2023-07-23 10:14:24,235 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT; InitMetaProcedure table=hbase:meta
2023-07-23 10:14:24,248 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_OPEN_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5
2023-07-23 10:14:24,248 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_CLOSE_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5
2023-07-23 10:14:24,248 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5
2023-07-23 10:14:24,249 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_META_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5
2023-07-23 10:14:24,249 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=M_LOG_REPLAY_OPS-master/jenkins-hbase4:0, corePoolSize=10, maxPoolSize=10
2023-07-23 10:14:24,249 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1
2023-07-23 10:14:24,249 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_MERGE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2
2023-07-23 10:14:24,249 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_TABLE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1
2023-07-23 10:14:24,252 INFO [RS-EventLoopGroup-1-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:39183, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.0 (auth:SIMPLE), service=RegionServerStatusService
2023-07-23 10:14:24,252 INFO [RS-EventLoopGroup-1-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:46447, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.1 (auth:SIMPLE), service=RegionServerStatusService
2023-07-23 10:14:24,252 INFO [RS-EventLoopGroup-1-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:46019, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.2 (auth:SIMPLE), service=RegionServerStatusService
2023-07-23 10:14:24,284 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner; timeout=30000, timestamp=1690107294284
2023-07-23 10:14:24,287 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): log_cleaner Cleaner pool size is 1
2023-07-23 10:14:24,292 DEBUG [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=34669] ipc.MetricsHBaseServer(134): Unknown exception type
org.apache.hadoop.hbase.ipc.ServerNotRunningYetException: Server is not running yet
	at org.apache.hadoop.hbase.master.HMaster.checkServiceStarted(HMaster.java:2832)
	at org.apache.hadoop.hbase.master.MasterRpcServices.regionServerStartup(MasterRpcServices.java:579)
	at org.apache.hadoop.hbase.shaded.protobuf.generated.RegionServerStatusProtos$RegionServerStatusService$2.callBlockingMethod(RegionServerStatusProtos.java:15952)
	at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387)
	at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132)
	at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369)
	at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349)
2023-07-23 10:14:24,293 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner
2023-07-23 10:14:24,299 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT, locked=true; InitMetaProcedure table=hbase:meta
2023-07-23 10:14:24,300 INFO [PEWorker-1] procedure.InitMetaProcedure(71): BOOTSTRAP: creating hbase:meta region
2023-07-23 10:14:24,302 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner
2023-07-23 10:14:24,303 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreWALCleaner
2023-07-23 10:14:24,303 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveProcedureWALCleaner
2023-07-23 10:14:24,303 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.LogCleaner(148): Creating 1 old WALs cleaner threads
2023-07-23 10:14:24,304 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=LogsCleaner, period=600000, unit=MILLISECONDS is enabled.
2023-07-23 10:14:24,305 DEBUG [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=34669] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.ipc.ServerNotRunningYetException: Server is not running yet at org.apache.hadoop.hbase.master.HMaster.checkServiceStarted(HMaster.java:2832) at org.apache.hadoop.hbase.master.MasterRpcServices.regionServerStartup(MasterRpcServices.java:579) at org.apache.hadoop.hbase.shaded.protobuf.generated.RegionServerStatusProtos$RegionServerStatusService$2.callBlockingMethod(RegionServerStatusProtos.java:15952) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-23 10:14:24,306 DEBUG [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=34669] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.ipc.ServerNotRunningYetException: Server is not running yet at org.apache.hadoop.hbase.master.HMaster.checkServiceStarted(HMaster.java:2832) at org.apache.hadoop.hbase.master.MasterRpcServices.regionServerStartup(MasterRpcServices.java:579) at org.apache.hadoop.hbase.shaded.protobuf.generated.RegionServerStatusProtos$RegionServerStatusService$2.callBlockingMethod(RegionServerStatusProtos.java:15952) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-23 10:14:24,308 INFO [PEWorker-1] util.FSTableDescriptors(128): Creating new hbase:meta table descriptor 'hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 
'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-07-23 10:14:24,313 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): hfile_cleaner Cleaner pool size is 2 2023-07-23 10:14:24,318 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreHFileCleaner 2023-07-23 10:14:24,318 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.HFileLinkCleaner 2023-07-23 10:14:24,327 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.snapshot.SnapshotHFileCleaner 2023-07-23 10:14:24,328 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner 2023-07-23 10:14:24,337 DEBUG [RS:2;jenkins-hbase4:45649] regionserver.HRegionServer(2830): Master is not running yet 2023-07-23 10:14:24,339 WARN [RS:2;jenkins-hbase4:45649] regionserver.HRegionServer(1030): reportForDuty failed; sleeping 100 ms and then retrying. 
2023-07-23 10:14:24,337 DEBUG [RS:0;jenkins-hbase4:46313] regionserver.HRegionServer(2830): Master is not running yet
2023-07-23 10:14:24,338 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(242): Starting for large file=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1690107264330,5,FailOnTimeoutGroup]
2023-07-23 10:14:24,338 DEBUG [RS:1;jenkins-hbase4:46561] regionserver.HRegionServer(2830): Master is not running yet
2023-07-23 10:14:24,339 WARN [RS:0;jenkins-hbase4:46313] regionserver.HRegionServer(1030): reportForDuty failed; sleeping 100 ms and then retrying.
2023-07-23 10:14:24,339 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(257): Starting for small files=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1690107264339,5,FailOnTimeoutGroup]
2023-07-23 10:14:24,339 WARN [RS:1;jenkins-hbase4:46561] regionserver.HRegionServer(1030): reportForDuty failed; sleeping 100 ms and then retrying.
2023-07-23 10:14:24,339 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HFileCleaner, period=600000, unit=MILLISECONDS is enabled.
2023-07-23 10:14:24,339 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1461): Reopening regions with very high storeFileRefCount is disabled. Provide threshold value > 0 for hbase.regions.recovery.store.file.ref.count to enable it.
2023-07-23 10:14:24,341 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=ReplicationBarrierCleaner, period=43200000, unit=MILLISECONDS is enabled.
2023-07-23 10:14:24,341 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=SnapshotCleaner, period=1800000, unit=MILLISECONDS is enabled.
2023-07-23 10:14:24,425 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/hbase/meta/.tabledesc/.tableinfo.0000000001
2023-07-23 10:14:24,427 INFO [PEWorker-1] util.FSTableDescriptors(135): Updated hbase:meta table descriptor to hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/hbase/meta/.tabledesc/.tableinfo.0000000001
2023-07-23 10:14:24,427 INFO [PEWorker-1] regionserver.HRegion(7675): creating {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d
2023-07-23 10:14:24,440 INFO [RS:2;jenkins-hbase4:45649] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,34669,1690107260184 with isa=jenkins-hbase4.apache.org/172.31.14.131:45649, startcode=1690107262489
2023-07-23 10:14:24,440 INFO [RS:0;jenkins-hbase4:46313] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,34669,1690107260184 with isa=jenkins-hbase4.apache.org/172.31.14.131:46313, startcode=1690107262103
2023-07-23 10:14:24,440 INFO [RS:1;jenkins-hbase4:46561] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,34669,1690107260184 with isa=jenkins-hbase4.apache.org/172.31.14.131:46561, startcode=1690107262307
2023-07-23 10:14:24,453 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=34669] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,45649,1690107262489
2023-07-23 10:14:24,464 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=34669] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,46561,1690107262307
2023-07-23 10:14:24,466 DEBUG [RS:2;jenkins-hbase4:45649] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d
2023-07-23 10:14:24,466 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=34669] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,46313,1690107262103
2023-07-23 10:14:24,466 DEBUG [RS:2;jenkins-hbase4:45649] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:35371
2023-07-23 10:14:24,466 DEBUG [RS:2;jenkins-hbase4:45649] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=36493
2023-07-23 10:14:24,467 DEBUG [RS:1;jenkins-hbase4:46561] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d
2023-07-23 10:14:24,467 DEBUG [RS:1;jenkins-hbase4:46561] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:35371
2023-07-23 10:14:24,467 DEBUG [RS:1;jenkins-hbase4:46561] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=36493
2023-07-23 10:14:24,468 DEBUG [RS:0;jenkins-hbase4:46313] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d
2023-07-23 10:14:24,468 DEBUG [RS:0;jenkins-hbase4:46313] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:35371
2023-07-23 10:14:24,468 DEBUG [RS:0;jenkins-hbase4:46313] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=36493
2023-07-23 10:14:24,469 DEBUG [PEWorker-1] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable
2023-07-23 10:14:24,473 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740
2023-07-23 10:14:24,475 DEBUG [Listener at localhost/34007-EventThread] zookeeper.ZKWatcher(600): master:34669-0x10191acac940000, quorum=127.0.0.1:60205, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs
2023-07-23 10:14:24,475 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/hbase/meta/1588230740/info
2023-07-23 10:14:24,476 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info
2023-07-23 10:14:24,477 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE
2023-07-23 10:14:24,477 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740
2023-07-23 10:14:24,479 DEBUG [RS:1;jenkins-hbase4:46561] zookeeper.ZKUtil(162): regionserver:46561-0x10191acac940002, quorum=127.0.0.1:60205, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,46561,1690107262307
2023-07-23 10:14:24,479 WARN [RS:1;jenkins-hbase4:46561] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!)
2023-07-23 10:14:24,479 INFO [RS:1;jenkins-hbase4:46561] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider
2023-07-23 10:14:24,479 DEBUG [RS:1;jenkins-hbase4:46561] regionserver.HRegionServer(1948): logDir=hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/WALs/jenkins-hbase4.apache.org,46561,1690107262307
2023-07-23 10:14:24,480 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,46561,1690107262307]
2023-07-23 10:14:24,480 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,46313,1690107262103]
2023-07-23 10:14:24,480 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,45649,1690107262489]
2023-07-23 10:14:24,480 DEBUG [RS:0;jenkins-hbase4:46313] zookeeper.ZKUtil(162): regionserver:46313-0x10191acac940001, quorum=127.0.0.1:60205, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,46313,1690107262103
2023-07-23 10:14:24,480 DEBUG [RS:2;jenkins-hbase4:45649] zookeeper.ZKUtil(162): regionserver:45649-0x10191acac940003, quorum=127.0.0.1:60205, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,45649,1690107262489
2023-07-23 10:14:24,480 WARN [RS:0;jenkins-hbase4:46313] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!)
2023-07-23 10:14:24,480 WARN [RS:2;jenkins-hbase4:45649] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!)
2023-07-23 10:14:24,482 INFO [RS:2;jenkins-hbase4:45649] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider
2023-07-23 10:14:24,481 INFO [RS:0;jenkins-hbase4:46313] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider
2023-07-23 10:14:24,482 DEBUG [RS:2;jenkins-hbase4:45649] regionserver.HRegionServer(1948): logDir=hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/WALs/jenkins-hbase4.apache.org,45649,1690107262489
2023-07-23 10:14:24,483 DEBUG [RS:0;jenkins-hbase4:46313] regionserver.HRegionServer(1948): logDir=hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/WALs/jenkins-hbase4.apache.org,46313,1690107262103
2023-07-23 10:14:24,486 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/hbase/meta/1588230740/rep_barrier
2023-07-23 10:14:24,487 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier
2023-07-23 10:14:24,489 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE
2023-07-23 10:14:24,490 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740
2023-07-23 10:14:24,493 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/hbase/meta/1588230740/table
2023-07-23 10:14:24,494 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table
2023-07-23 10:14:24,495 DEBUG [RS:1;jenkins-hbase4:46561] zookeeper.ZKUtil(162): regionserver:46561-0x10191acac940002, quorum=127.0.0.1:60205, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,46313,1690107262103
2023-07-23 10:14:24,496 DEBUG [RS:1;jenkins-hbase4:46561] zookeeper.ZKUtil(162): regionserver:46561-0x10191acac940002, quorum=127.0.0.1:60205, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,46561,1690107262307
2023-07-23 10:14:24,497 DEBUG [RS:1;jenkins-hbase4:46561] zookeeper.ZKUtil(162): regionserver:46561-0x10191acac940002, quorum=127.0.0.1:60205, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,45649,1690107262489
2023-07-23 10:14:24,497 DEBUG [RS:0;jenkins-hbase4:46313] zookeeper.ZKUtil(162): regionserver:46313-0x10191acac940001, quorum=127.0.0.1:60205, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,46313,1690107262103
2023-07-23 10:14:24,497 DEBUG [RS:2;jenkins-hbase4:45649] zookeeper.ZKUtil(162): regionserver:45649-0x10191acac940003, quorum=127.0.0.1:60205, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,46313,1690107262103
2023-07-23 10:14:24,498 DEBUG [RS:2;jenkins-hbase4:45649] zookeeper.ZKUtil(162): regionserver:45649-0x10191acac940003, quorum=127.0.0.1:60205, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,46561,1690107262307
2023-07-23 10:14:24,498 DEBUG [RS:0;jenkins-hbase4:46313] zookeeper.ZKUtil(162): regionserver:46313-0x10191acac940001, quorum=127.0.0.1:60205, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,46561,1690107262307
2023-07-23 10:14:24,499 DEBUG [RS:2;jenkins-hbase4:45649] zookeeper.ZKUtil(162): regionserver:45649-0x10191acac940003, quorum=127.0.0.1:60205, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,45649,1690107262489
2023-07-23 10:14:24,499 DEBUG [RS:0;jenkins-hbase4:46313] zookeeper.ZKUtil(162): regionserver:46313-0x10191acac940001, quorum=127.0.0.1:60205, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,45649,1690107262489
2023-07-23 10:14:24,500 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE
2023-07-23 10:14:24,501 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/hbase/meta/1588230740
2023-07-23 10:14:24,502 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/hbase/meta/1588230740
2023-07-23 10:14:24,514 DEBUG [PEWorker-1] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (16.0 M)) instead.
2023-07-23 10:14:24,516 DEBUG [RS:0;jenkins-hbase4:46313] regionserver.Replication(139): Replication stats-in-log period=300 seconds
2023-07-23 10:14:24,516 DEBUG [RS:1;jenkins-hbase4:46561] regionserver.Replication(139): Replication stats-in-log period=300 seconds
2023-07-23 10:14:24,516 DEBUG [RS:2;jenkins-hbase4:45649] regionserver.Replication(139): Replication stats-in-log period=300 seconds
2023-07-23 10:14:24,518 DEBUG [PEWorker-1] regionserver.HRegion(1055): writing seq id for 1588230740
2023-07-23 10:14:24,524 DEBUG [PEWorker-1] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/hbase/meta/1588230740/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1
2023-07-23 10:14:24,525 INFO [PEWorker-1] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=131072, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11292270880, jitterRate=0.051674678921699524}}}, FlushLargeStoresPolicy{flushSizeLowerBound=16777216}
2023-07-23 10:14:24,525 DEBUG [PEWorker-1] regionserver.HRegion(965): Region open journal for 1588230740:
2023-07-23 10:14:24,525 DEBUG [PEWorker-1] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes
2023-07-23 10:14:24,525 INFO [PEWorker-1] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740
2023-07-23 10:14:24,525 DEBUG [PEWorker-1] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740
2023-07-23 10:14:24,526 DEBUG [PEWorker-1] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms
2023-07-23 10:14:24,526 DEBUG [PEWorker-1] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740
2023-07-23 10:14:24,529 INFO [PEWorker-1] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740
2023-07-23 10:14:24,529 DEBUG [PEWorker-1] regionserver.HRegion(1558): Region close journal for 1588230740:
2023-07-23 10:14:24,532 INFO [RS:1;jenkins-hbase4:46561] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds
2023-07-23 10:14:24,533 INFO [RS:2;jenkins-hbase4:45649] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds
2023-07-23 10:14:24,534 INFO [RS:0;jenkins-hbase4:46313] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds
2023-07-23 10:14:24,537 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_ASSIGN_META, locked=true; InitMetaProcedure table=hbase:meta
2023-07-23 10:14:24,537 INFO [PEWorker-1] procedure.InitMetaProcedure(103): Going to assign meta
2023-07-23 10:14:24,549 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN}]
2023-07-23 10:14:24,561 INFO [RS:0;jenkins-hbase4:46313] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false
2023-07-23 10:14:24,561 INFO [RS:2;jenkins-hbase4:45649] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false
2023-07-23 10:14:24,563 INFO [RS:1;jenkins-hbase4:46561] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false
2023-07-23 10:14:24,568 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN
2023-07-23 10:14:24,569 INFO [RS:2;jenkins-hbase4:45649] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms
2023-07-23 10:14:24,569 INFO [RS:0;jenkins-hbase4:46313] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms
2023-07-23 10:14:24,569 INFO [RS:1;jenkins-hbase4:46561] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms
2023-07-23 10:14:24,570 INFO [RS:0;jenkins-hbase4:46313] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled.
2023-07-23 10:14:24,571 INFO [RS:1;jenkins-hbase4:46561] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled.
2023-07-23 10:14:24,570 INFO [RS:2;jenkins-hbase4:45649] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled.
2023-07-23 10:14:24,571 INFO [RS:0;jenkins-hbase4:46313] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S
2023-07-23 10:14:24,572 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN; state=OFFLINE, location=null; forceNewPlan=false, retain=false
2023-07-23 10:14:24,573 INFO [RS:1;jenkins-hbase4:46561] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S
2023-07-23 10:14:24,577 INFO [RS:2;jenkins-hbase4:45649] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S
2023-07-23 10:14:24,583 INFO [RS:2;jenkins-hbase4:45649] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled.
2023-07-23 10:14:24,583 INFO [RS:1;jenkins-hbase4:46561] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled.
2023-07-23 10:14:24,583 INFO [RS:0;jenkins-hbase4:46313] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled.
2023-07-23 10:14:24,583 DEBUG [RS:2;jenkins-hbase4:45649] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1
2023-07-23 10:14:24,583 DEBUG [RS:1;jenkins-hbase4:46561] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1
2023-07-23 10:14:24,584 DEBUG [RS:2;jenkins-hbase4:45649] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1
2023-07-23 10:14:24,584 DEBUG [RS:0;jenkins-hbase4:46313] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1
2023-07-23 10:14:24,584 DEBUG [RS:2;jenkins-hbase4:45649] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1
2023-07-23 10:14:24,584 DEBUG [RS:0;jenkins-hbase4:46313] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1
2023-07-23 10:14:24,584 DEBUG [RS:2;jenkins-hbase4:45649] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1
2023-07-23 10:14:24,584 DEBUG [RS:0;jenkins-hbase4:46313] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1
2023-07-23 10:14:24,584 DEBUG [RS:2;jenkins-hbase4:45649] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1
2023-07-23 10:14:24,584 DEBUG [RS:0;jenkins-hbase4:46313] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1
2023-07-23 10:14:24,584 DEBUG [RS:2;jenkins-hbase4:45649] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2
2023-07-23 10:14:24,584 DEBUG [RS:0;jenkins-hbase4:46313] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1
2023-07-23 10:14:24,584 DEBUG [RS:1;jenkins-hbase4:46561] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1
2023-07-23 10:14:24,584 DEBUG [RS:0;jenkins-hbase4:46313] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2
2023-07-23 10:14:24,584 DEBUG [RS:2;jenkins-hbase4:45649] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1
2023-07-23 10:14:24,584 DEBUG [RS:0;jenkins-hbase4:46313] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1
2023-07-23 10:14:24,584 DEBUG [RS:1;jenkins-hbase4:46561] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1
2023-07-23 10:14:24,585 DEBUG [RS:2;jenkins-hbase4:45649] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1
2023-07-23 10:14:24,585 DEBUG [RS:0;jenkins-hbase4:46313] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1
2023-07-23 10:14:24,585 DEBUG [RS:2;jenkins-hbase4:45649] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1
2023-07-23 10:14:24,585 DEBUG [RS:1;jenkins-hbase4:46561] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1
2023-07-23 10:14:24,585 DEBUG [RS:2;jenkins-hbase4:45649] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1
2023-07-23 10:14:24,585 DEBUG [RS:0;jenkins-hbase4:46313] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1
2023-07-23 10:14:24,585 DEBUG [RS:1;jenkins-hbase4:46561] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1
2023-07-23 10:14:24,585 DEBUG [RS:0;jenkins-hbase4:46313] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1
2023-07-23 10:14:24,585 DEBUG [RS:1;jenkins-hbase4:46561] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2
2023-07-23 10:14:24,585 DEBUG [RS:1;jenkins-hbase4:46561] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1
2023-07-23 10:14:24,586 DEBUG [RS:1;jenkins-hbase4:46561] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1
2023-07-23 10:14:24,586 DEBUG [RS:1;jenkins-hbase4:46561] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1
2023-07-23 10:14:24,586 DEBUG [RS:1;jenkins-hbase4:46561] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1
2023-07-23 10:14:24,594 INFO [RS:0;jenkins-hbase4:46313] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-23 10:14:24,595 INFO [RS:0;jenkins-hbase4:46313] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-23 10:14:24,595 INFO [RS:0;jenkins-hbase4:46313] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-23 10:14:24,598 INFO [RS:1;jenkins-hbase4:46561] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-23 10:14:24,598 INFO [RS:1;jenkins-hbase4:46561] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-23 10:14:24,598 INFO [RS:1;jenkins-hbase4:46561] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-23 10:14:24,601 INFO [RS:2;jenkins-hbase4:45649] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-23 10:14:24,602 INFO [RS:2;jenkins-hbase4:45649] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-23 10:14:24,602 INFO [RS:2;jenkins-hbase4:45649] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 
2023-07-23 10:14:24,622 INFO [RS:1;jenkins-hbase4:46561] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-23 10:14:24,622 INFO [RS:2;jenkins-hbase4:45649] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-23 10:14:24,622 INFO [RS:0;jenkins-hbase4:46313] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-23 10:14:24,626 INFO [RS:2;jenkins-hbase4:45649] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,45649,1690107262489-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-23 10:14:24,626 INFO [RS:1;jenkins-hbase4:46561] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,46561,1690107262307-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-23 10:14:24,626 INFO [RS:0;jenkins-hbase4:46313] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,46313,1690107262103-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
2023-07-23 10:14:24,646 INFO [RS:0;jenkins-hbase4:46313] regionserver.Replication(203): jenkins-hbase4.apache.org,46313,1690107262103 started 2023-07-23 10:14:24,646 INFO [RS:2;jenkins-hbase4:45649] regionserver.Replication(203): jenkins-hbase4.apache.org,45649,1690107262489 started 2023-07-23 10:14:24,646 INFO [RS:0;jenkins-hbase4:46313] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,46313,1690107262103, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:46313, sessionid=0x10191acac940001 2023-07-23 10:14:24,647 INFO [RS:1;jenkins-hbase4:46561] regionserver.Replication(203): jenkins-hbase4.apache.org,46561,1690107262307 started 2023-07-23 10:14:24,646 INFO [RS:2;jenkins-hbase4:45649] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,45649,1690107262489, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:45649, sessionid=0x10191acac940003 2023-07-23 10:14:24,647 DEBUG [RS:0;jenkins-hbase4:46313] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-23 10:14:24,647 DEBUG [RS:2;jenkins-hbase4:45649] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-23 10:14:24,647 INFO [RS:1;jenkins-hbase4:46561] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,46561,1690107262307, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:46561, sessionid=0x10191acac940002 2023-07-23 10:14:24,647 DEBUG [RS:2;jenkins-hbase4:45649] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,45649,1690107262489 2023-07-23 10:14:24,647 DEBUG [RS:0;jenkins-hbase4:46313] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,46313,1690107262103 2023-07-23 10:14:24,648 DEBUG [RS:2;jenkins-hbase4:45649] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,45649,1690107262489' 2023-07-23 
10:14:24,648 DEBUG [RS:0;jenkins-hbase4:46313] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,46313,1690107262103' 2023-07-23 10:14:24,648 DEBUG [RS:1;jenkins-hbase4:46561] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-23 10:14:24,649 DEBUG [RS:0;jenkins-hbase4:46313] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-23 10:14:24,649 DEBUG [RS:2;jenkins-hbase4:45649] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-23 10:14:24,649 DEBUG [RS:1;jenkins-hbase4:46561] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,46561,1690107262307 2023-07-23 10:14:24,649 DEBUG [RS:1;jenkins-hbase4:46561] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,46561,1690107262307' 2023-07-23 10:14:24,649 DEBUG [RS:1;jenkins-hbase4:46561] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-23 10:14:24,650 DEBUG [RS:2;jenkins-hbase4:45649] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-23 10:14:24,650 DEBUG [RS:0;jenkins-hbase4:46313] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-23 10:14:24,650 DEBUG [RS:1;jenkins-hbase4:46561] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-23 10:14:24,651 DEBUG [RS:0;jenkins-hbase4:46313] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-23 10:14:24,651 DEBUG [RS:0;jenkins-hbase4:46313] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-23 10:14:24,651 DEBUG 
[RS:1;jenkins-hbase4:46561] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-23 10:14:24,651 DEBUG [RS:2;jenkins-hbase4:45649] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-23 10:14:24,651 DEBUG [RS:1;jenkins-hbase4:46561] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-23 10:14:24,651 DEBUG [RS:0;jenkins-hbase4:46313] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,46313,1690107262103 2023-07-23 10:14:24,651 DEBUG [RS:1;jenkins-hbase4:46561] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,46561,1690107262307 2023-07-23 10:14:24,652 DEBUG [RS:1;jenkins-hbase4:46561] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,46561,1690107262307' 2023-07-23 10:14:24,652 DEBUG [RS:1;jenkins-hbase4:46561] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-23 10:14:24,651 DEBUG [RS:2;jenkins-hbase4:45649] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-23 10:14:24,652 DEBUG [RS:2;jenkins-hbase4:45649] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,45649,1690107262489 2023-07-23 10:14:24,652 DEBUG [RS:2;jenkins-hbase4:45649] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,45649,1690107262489' 2023-07-23 10:14:24,652 DEBUG [RS:2;jenkins-hbase4:45649] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-23 10:14:24,652 DEBUG [RS:0;jenkins-hbase4:46313] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,46313,1690107262103' 2023-07-23 10:14:24,652 DEBUG [RS:0;jenkins-hbase4:46313] procedure.ZKProcedureMemberRpcs(134): Checking for 
aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-23 10:14:24,652 DEBUG [RS:1;jenkins-hbase4:46561] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-23 10:14:24,653 DEBUG [RS:2;jenkins-hbase4:45649] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-23 10:14:24,653 DEBUG [RS:0;jenkins-hbase4:46313] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-23 10:14:24,653 DEBUG [RS:1;jenkins-hbase4:46561] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-23 10:14:24,653 INFO [RS:1;jenkins-hbase4:46561] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-23 10:14:24,654 INFO [RS:1;jenkins-hbase4:46561] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 2023-07-23 10:14:24,654 DEBUG [RS:2;jenkins-hbase4:45649] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-23 10:14:24,654 DEBUG [RS:0;jenkins-hbase4:46313] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-23 10:14:24,654 INFO [RS:0;jenkins-hbase4:46313] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-23 10:14:24,654 INFO [RS:2;jenkins-hbase4:45649] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-23 10:14:24,656 INFO [RS:2;jenkins-hbase4:45649] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 2023-07-23 10:14:24,655 INFO [RS:0;jenkins-hbase4:46313] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
2023-07-23 10:14:24,725 DEBUG [jenkins-hbase4:34669] assignment.AssignmentManager(2176): Processing assignQueue; systemServersCount=3, allServersCount=3 2023-07-23 10:14:24,730 DEBUG [jenkins-hbase4:34669] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-23 10:14:24,737 DEBUG [jenkins-hbase4:34669] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-23 10:14:24,737 DEBUG [jenkins-hbase4:34669] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-23 10:14:24,737 DEBUG [jenkins-hbase4:34669] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-23 10:14:24,737 DEBUG [jenkins-hbase4:34669] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-23 10:14:24,741 INFO [PEWorker-2] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,45649,1690107262489, state=OPENING 2023-07-23 10:14:24,752 DEBUG [PEWorker-2] zookeeper.MetaTableLocator(240): hbase:meta region location doesn't exist, create it 2023-07-23 10:14:24,754 DEBUG [Listener at localhost/34007-EventThread] zookeeper.ZKWatcher(600): master:34669-0x10191acac940000, quorum=127.0.0.1:60205, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-23 10:14:24,755 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-23 10:14:24,759 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=3, ppid=2, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,45649,1690107262489}] 2023-07-23 10:14:24,771 INFO [RS:1;jenkins-hbase4:46561] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C46561%2C1690107262307, suffix=, 
logDir=hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/WALs/jenkins-hbase4.apache.org,46561,1690107262307, archiveDir=hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/oldWALs, maxLogs=32 2023-07-23 10:14:24,771 INFO [RS:0;jenkins-hbase4:46313] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C46313%2C1690107262103, suffix=, logDir=hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/WALs/jenkins-hbase4.apache.org,46313,1690107262103, archiveDir=hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/oldWALs, maxLogs=32 2023-07-23 10:14:24,774 INFO [RS:2;jenkins-hbase4:45649] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C45649%2C1690107262489, suffix=, logDir=hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/WALs/jenkins-hbase4.apache.org,45649,1690107262489, archiveDir=hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/oldWALs, maxLogs=32 2023-07-23 10:14:24,814 DEBUG [RS-EventLoopGroup-5-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:41931,DS-0d69eb77-4df2-40f8-9738-806cd12383c6,DISK] 2023-07-23 10:14:24,814 DEBUG [RS-EventLoopGroup-5-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:41931,DS-0d69eb77-4df2-40f8-9738-806cd12383c6,DISK] 2023-07-23 10:14:24,815 DEBUG [RS-EventLoopGroup-5-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = 
DatanodeInfoWithStorage[127.0.0.1:39323,DS-da856993-98d3-420b-99fe-91d4a0069f30,DISK] 2023-07-23 10:14:24,816 DEBUG [RS-EventLoopGroup-5-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:41931,DS-0d69eb77-4df2-40f8-9738-806cd12383c6,DISK] 2023-07-23 10:14:24,819 DEBUG [RS-EventLoopGroup-5-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:39323,DS-da856993-98d3-420b-99fe-91d4a0069f30,DISK] 2023-07-23 10:14:24,820 DEBUG [RS-EventLoopGroup-5-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:42717,DS-13c40b39-86d5-4d57-984c-52cd2311c37b,DISK] 2023-07-23 10:14:24,823 DEBUG [RS-EventLoopGroup-5-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:42717,DS-13c40b39-86d5-4d57-984c-52cd2311c37b,DISK] 2023-07-23 10:14:24,822 DEBUG [RS-EventLoopGroup-5-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:42717,DS-13c40b39-86d5-4d57-984c-52cd2311c37b,DISK] 2023-07-23 10:14:24,825 DEBUG [RS-EventLoopGroup-5-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:39323,DS-da856993-98d3-420b-99fe-91d4a0069f30,DISK] 2023-07-23 10:14:24,833 INFO [RS:2;jenkins-hbase4:45649] wal.AbstractFSWAL(806): New WAL 
/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/WALs/jenkins-hbase4.apache.org,45649,1690107262489/jenkins-hbase4.apache.org%2C45649%2C1690107262489.1690107264777 2023-07-23 10:14:24,833 INFO [RS:1;jenkins-hbase4:46561] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/WALs/jenkins-hbase4.apache.org,46561,1690107262307/jenkins-hbase4.apache.org%2C46561%2C1690107262307.1690107264775 2023-07-23 10:14:24,833 DEBUG [RS:2;jenkins-hbase4:45649] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:39323,DS-da856993-98d3-420b-99fe-91d4a0069f30,DISK], DatanodeInfoWithStorage[127.0.0.1:41931,DS-0d69eb77-4df2-40f8-9738-806cd12383c6,DISK], DatanodeInfoWithStorage[127.0.0.1:42717,DS-13c40b39-86d5-4d57-984c-52cd2311c37b,DISK]] 2023-07-23 10:14:24,834 INFO [RS:0;jenkins-hbase4:46313] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/WALs/jenkins-hbase4.apache.org,46313,1690107262103/jenkins-hbase4.apache.org%2C46313%2C1690107262103.1690107264775 2023-07-23 10:14:24,837 DEBUG [RS:1;jenkins-hbase4:46561] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:42717,DS-13c40b39-86d5-4d57-984c-52cd2311c37b,DISK], DatanodeInfoWithStorage[127.0.0.1:39323,DS-da856993-98d3-420b-99fe-91d4a0069f30,DISK], DatanodeInfoWithStorage[127.0.0.1:41931,DS-0d69eb77-4df2-40f8-9738-806cd12383c6,DISK]] 2023-07-23 10:14:24,837 DEBUG [RS:0;jenkins-hbase4:46313] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:39323,DS-da856993-98d3-420b-99fe-91d4a0069f30,DISK], DatanodeInfoWithStorage[127.0.0.1:42717,DS-13c40b39-86d5-4d57-984c-52cd2311c37b,DISK], DatanodeInfoWithStorage[127.0.0.1:41931,DS-0d69eb77-4df2-40f8-9738-806cd12383c6,DISK]] 2023-07-23 10:14:24,952 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to 
jenkins-hbase4.apache.org,45649,1690107262489 2023-07-23 10:14:24,956 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-23 10:14:24,961 INFO [RS-EventLoopGroup-5-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:49438, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-23 10:14:24,975 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740 2023-07-23 10:14:24,976 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-23 10:14:24,979 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C45649%2C1690107262489.meta, suffix=.meta, logDir=hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/WALs/jenkins-hbase4.apache.org,45649,1690107262489, archiveDir=hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/oldWALs, maxLogs=32 2023-07-23 10:14:25,003 DEBUG [RS-EventLoopGroup-5-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:41931,DS-0d69eb77-4df2-40f8-9738-806cd12383c6,DISK] 2023-07-23 10:14:25,004 DEBUG [RS-EventLoopGroup-5-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:42717,DS-13c40b39-86d5-4d57-984c-52cd2311c37b,DISK] 2023-07-23 10:14:25,007 DEBUG [RS-EventLoopGroup-5-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 
127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:39323,DS-da856993-98d3-420b-99fe-91d4a0069f30,DISK] 2023-07-23 10:14:25,023 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/WALs/jenkins-hbase4.apache.org,45649,1690107262489/jenkins-hbase4.apache.org%2C45649%2C1690107262489.meta.1690107264981.meta 2023-07-23 10:14:25,027 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:41931,DS-0d69eb77-4df2-40f8-9738-806cd12383c6,DISK], DatanodeInfoWithStorage[127.0.0.1:42717,DS-13c40b39-86d5-4d57-984c-52cd2311c37b,DISK], DatanodeInfoWithStorage[127.0.0.1:39323,DS-da856993-98d3-420b-99fe-91d4a0069f30,DISK]] 2023-07-23 10:14:25,027 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-07-23 10:14:25,029 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-23 10:14:25,048 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2023-07-23 10:14:25,053 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 
2023-07-23 10:14:25,060 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-07-23 10:14:25,061 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-23 10:14:25,061 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 1588230740 2023-07-23 10:14:25,061 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 1588230740 2023-07-23 10:14:25,067 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-23 10:14:25,069 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/hbase/meta/1588230740/info 2023-07-23 10:14:25,069 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/hbase/meta/1588230740/info 2023-07-23 10:14:25,070 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window 
org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-23 10:14:25,071 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-23 10:14:25,071 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-23 10:14:25,072 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/hbase/meta/1588230740/rep_barrier 2023-07-23 10:14:25,073 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/hbase/meta/1588230740/rep_barrier 2023-07-23 10:14:25,073 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory 
org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-23 10:14:25,074 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-23 10:14:25,074 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-23 10:14:25,076 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/hbase/meta/1588230740/table 2023-07-23 10:14:25,076 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/hbase/meta/1588230740/table 2023-07-23 10:14:25,076 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-23 10:14:25,077 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): 
Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-23 10:14:25,078 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/hbase/meta/1588230740 2023-07-23 10:14:25,081 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/hbase/meta/1588230740 2023-07-23 10:14:25,085 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (16.0 M)) instead. 2023-07-23 10:14:25,087 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-23 10:14:25,089 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=131072, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11674224480, jitterRate=0.08724687993526459}}}, FlushLargeStoresPolicy{flushSizeLowerBound=16777216} 2023-07-23 10:14:25,089 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-23 10:14:25,100 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:meta,,1.1588230740, pid=3, masterSystemTime=1690107264942 2023-07-23 10:14:25,120 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:meta,,1.1588230740 2023-07-23 10:14:25,121 
INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 2023-07-23 10:14:25,122 INFO [PEWorker-5] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,45649,1690107262489, state=OPEN 2023-07-23 10:14:25,125 DEBUG [Listener at localhost/34007-EventThread] zookeeper.ZKWatcher(600): master:34669-0x10191acac940000, quorum=127.0.0.1:60205, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-23 10:14:25,125 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-23 10:14:25,133 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=3, resume processing ppid=2 2023-07-23 10:14:25,133 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=3, ppid=2, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,45649,1690107262489 in 366 msec 2023-07-23 10:14:25,139 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=2, resume processing ppid=1 2023-07-23 10:14:25,140 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=2, ppid=1, state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN in 587 msec 2023-07-23 10:14:25,146 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=1, state=SUCCESS; InitMetaProcedure table=hbase:meta in 1.0580 sec 2023-07-23 10:14:25,147 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(953): Wait for region servers to report in: status=null, state=RUNNING, startTime=1690107265146, completionTime=-1 2023-07-23 10:14:25,147 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ServerManager(821): Finished waiting on RegionServer count=3; waited=0ms, expected min=3 server(s), max=3 server(s), master is running 
2023-07-23 10:14:25,147 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1517): Joining cluster... 2023-07-23 10:14:25,222 DEBUG [hconnection-0x6be00614-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-23 10:14:25,225 INFO [RS-EventLoopGroup-5-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:49440, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-23 10:14:25,245 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1529): Number of RegionServers=3 2023-07-23 10:14:25,245 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$RegionInTransitionChore; timeout=60000, timestamp=1690107325245 2023-07-23 10:14:25,245 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$DeadServerMetricRegionChore; timeout=120000, timestamp=1690107385245 2023-07-23 10:14:25,245 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1536): Joined the cluster in 98 msec 2023-07-23 10:14:25,270 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,34669,1690107260184-ClusterStatusChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-23 10:14:25,271 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,34669,1690107260184-BalancerChore, period=300000, unit=MILLISECONDS is enabled. 
2023-07-23 10:14:25,271 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,34669,1690107260184-RegionNormalizerChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-23 10:14:25,273 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=CatalogJanitor-jenkins-hbase4:34669, period=300000, unit=MILLISECONDS is enabled. 2023-07-23 10:14:25,274 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HbckChore-, period=3600000, unit=MILLISECONDS is enabled. 2023-07-23 10:14:25,280 DEBUG [master/jenkins-hbase4:0.Chore.1] janitor.CatalogJanitor(175): 2023-07-23 10:14:25,291 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.TableNamespaceManager(92): Namespace table not found. Creating... 2023-07-23 10:14:25,329 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2148): Client=null/null create 'hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-07-23 10:14:25,341 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:namespace 2023-07-23 10:14:25,344 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_PRE_OPERATION 2023-07-23 10:14:25,346 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-23 10:14:25,369 DEBUG [HFileArchiver-1] 
backup.HFileArchiver(131): ARCHIVING hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/.tmp/data/hbase/namespace/063141635a1fa2d615b283545d656db0 2023-07-23 10:14:25,373 DEBUG [HFileArchiver-1] backup.HFileArchiver(153): Directory hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/.tmp/data/hbase/namespace/063141635a1fa2d615b283545d656db0 empty. 2023-07-23 10:14:25,374 DEBUG [HFileArchiver-1] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/.tmp/data/hbase/namespace/063141635a1fa2d615b283545d656db0 2023-07-23 10:14:25,374 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived hbase:namespace regions 2023-07-23 10:14:25,416 DEBUG [PEWorker-2] util.FSTableDescriptors(570): Wrote into hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/.tmp/data/hbase/namespace/.tabledesc/.tableinfo.0000000001 2023-07-23 10:14:25,418 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(7675): creating {ENCODED => 063141635a1fa2d615b283545d656db0, NAME => 'hbase:namespace,,1690107265326.063141635a1fa2d615b283545d656db0.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/.tmp 2023-07-23 10:14:25,437 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1690107265326.063141635a1fa2d615b283545d656db0.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-23 10:14:25,437 DEBUG 
[RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1604): Closing 063141635a1fa2d615b283545d656db0, disabling compactions & flushes 2023-07-23 10:14:25,437 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1690107265326.063141635a1fa2d615b283545d656db0. 2023-07-23 10:14:25,438 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1690107265326.063141635a1fa2d615b283545d656db0. 2023-07-23 10:14:25,438 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1690107265326.063141635a1fa2d615b283545d656db0. after waiting 0 ms 2023-07-23 10:14:25,438 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1690107265326.063141635a1fa2d615b283545d656db0. 2023-07-23 10:14:25,438 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1838): Closed hbase:namespace,,1690107265326.063141635a1fa2d615b283545d656db0. 2023-07-23 10:14:25,438 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1558): Region close journal for 063141635a1fa2d615b283545d656db0: 2023-07-23 10:14:25,443 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ADD_TO_META 2023-07-23 10:14:25,462 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:namespace,,1690107265326.063141635a1fa2d615b283545d656db0.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1690107265446"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690107265446"}]},"ts":"1690107265446"} 2023-07-23 10:14:25,503 INFO [PEWorker-2] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 
2023-07-23 10:14:25,506 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-23 10:14:25,512 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690107265506"}]},"ts":"1690107265506"} 2023-07-23 10:14:25,518 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLING in hbase:meta 2023-07-23 10:14:25,525 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-23 10:14:25,526 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-23 10:14:25,526 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-23 10:14:25,526 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-23 10:14:25,527 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-23 10:14:25,529 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=063141635a1fa2d615b283545d656db0, ASSIGN}] 2023-07-23 10:14:25,533 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=063141635a1fa2d615b283545d656db0, ASSIGN 2023-07-23 10:14:25,536 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:namespace, 
region=063141635a1fa2d615b283545d656db0, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,46561,1690107262307; forceNewPlan=false, retain=false 2023-07-23 10:14:25,688 INFO [jenkins-hbase4:34669] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 2023-07-23 10:14:25,689 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=063141635a1fa2d615b283545d656db0, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,46561,1690107262307 2023-07-23 10:14:25,690 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:namespace,,1690107265326.063141635a1fa2d615b283545d656db0.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1690107265689"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690107265689"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690107265689"}]},"ts":"1690107265689"} 2023-07-23 10:14:25,693 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=6, ppid=5, state=RUNNABLE; OpenRegionProcedure 063141635a1fa2d615b283545d656db0, server=jenkins-hbase4.apache.org,46561,1690107262307}] 2023-07-23 10:14:25,850 DEBUG [RSProcedureDispatcher-pool-1] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,46561,1690107262307 2023-07-23 10:14:25,851 DEBUG [RSProcedureDispatcher-pool-1] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-23 10:14:25,855 INFO [RS-EventLoopGroup-4-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:59234, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-23 10:14:25,865 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1690107265326.063141635a1fa2d615b283545d656db0. 
2023-07-23 10:14:25,865 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 063141635a1fa2d615b283545d656db0, NAME => 'hbase:namespace,,1690107265326.063141635a1fa2d615b283545d656db0.', STARTKEY => '', ENDKEY => ''} 2023-07-23 10:14:25,866 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace 063141635a1fa2d615b283545d656db0 2023-07-23 10:14:25,867 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1690107265326.063141635a1fa2d615b283545d656db0.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-23 10:14:25,867 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 063141635a1fa2d615b283545d656db0 2023-07-23 10:14:25,867 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 063141635a1fa2d615b283545d656db0 2023-07-23 10:14:25,870 INFO [StoreOpener-063141635a1fa2d615b283545d656db0-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 063141635a1fa2d615b283545d656db0 2023-07-23 10:14:25,873 DEBUG [StoreOpener-063141635a1fa2d615b283545d656db0-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/hbase/namespace/063141635a1fa2d615b283545d656db0/info 2023-07-23 10:14:25,873 DEBUG [StoreOpener-063141635a1fa2d615b283545d656db0-1] util.CommonFSUtils(522): Set storagePolicy=HOT for 
path=hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/hbase/namespace/063141635a1fa2d615b283545d656db0/info 2023-07-23 10:14:25,873 INFO [StoreOpener-063141635a1fa2d615b283545d656db0-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 063141635a1fa2d615b283545d656db0 columnFamilyName info 2023-07-23 10:14:25,874 INFO [StoreOpener-063141635a1fa2d615b283545d656db0-1] regionserver.HStore(310): Store=063141635a1fa2d615b283545d656db0/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-23 10:14:25,876 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/hbase/namespace/063141635a1fa2d615b283545d656db0 2023-07-23 10:14:25,881 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/hbase/namespace/063141635a1fa2d615b283545d656db0 2023-07-23 10:14:25,891 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 063141635a1fa2d615b283545d656db0 2023-07-23 
10:14:25,901 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/hbase/namespace/063141635a1fa2d615b283545d656db0/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-23 10:14:25,902 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 063141635a1fa2d615b283545d656db0; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=131072, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9554205120, jitterRate=-0.11019530892372131}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-23 10:14:25,902 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 063141635a1fa2d615b283545d656db0: 2023-07-23 10:14:25,906 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:namespace,,1690107265326.063141635a1fa2d615b283545d656db0., pid=6, masterSystemTime=1690107265850 2023-07-23 10:14:25,916 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:namespace,,1690107265326.063141635a1fa2d615b283545d656db0. 2023-07-23 10:14:25,917 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1690107265326.063141635a1fa2d615b283545d656db0. 
2023-07-23 10:14:25,922 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=063141635a1fa2d615b283545d656db0, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,46561,1690107262307 2023-07-23 10:14:25,922 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1690107265326.063141635a1fa2d615b283545d656db0.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1690107265921"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690107265921"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690107265921"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690107265921"}]},"ts":"1690107265921"} 2023-07-23 10:14:25,930 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=6, resume processing ppid=5 2023-07-23 10:14:25,930 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=6, ppid=5, state=SUCCESS; OpenRegionProcedure 063141635a1fa2d615b283545d656db0, server=jenkins-hbase4.apache.org,46561,1690107262307 in 233 msec 2023-07-23 10:14:25,937 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=5, resume processing ppid=4 2023-07-23 10:14:25,938 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=5, ppid=4, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=063141635a1fa2d615b283545d656db0, ASSIGN in 401 msec 2023-07-23 10:14:25,941 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-23 10:14:25,941 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690107265941"}]},"ts":"1690107265941"} 2023-07-23 10:14:25,946 INFO [PEWorker-4] 
hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLED in hbase:meta 2023-07-23 10:14:25,950 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:34669-0x10191acac940000, quorum=127.0.0.1:60205, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/namespace 2023-07-23 10:14:25,951 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_POST_OPERATION 2023-07-23 10:14:25,953 DEBUG [Listener at localhost/34007-EventThread] zookeeper.ZKWatcher(600): master:34669-0x10191acac940000, quorum=127.0.0.1:60205, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/namespace 2023-07-23 10:14:25,953 DEBUG [Listener at localhost/34007-EventThread] zookeeper.ZKWatcher(600): master:34669-0x10191acac940000, quorum=127.0.0.1:60205, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-23 10:14:25,955 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=4, state=SUCCESS; CreateTableProcedure table=hbase:namespace in 620 msec 2023-07-23 10:14:25,988 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-23 10:14:25,996 INFO [RS-EventLoopGroup-4-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:59242, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-23 10:14:26,015 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=7, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=default 2023-07-23 10:14:26,036 DEBUG [Listener at localhost/34007-EventThread] zookeeper.ZKWatcher(600): master:34669-0x10191acac940000, quorum=127.0.0.1:60205, baseZNode=/hbase Received 
ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-23 10:14:26,044 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=7, state=SUCCESS; CreateNamespaceProcedure, namespace=default in 38 msec 2023-07-23 10:14:26,050 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=8, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=hbase 2023-07-23 10:14:26,064 DEBUG [Listener at localhost/34007-EventThread] zookeeper.ZKWatcher(600): master:34669-0x10191acac940000, quorum=127.0.0.1:60205, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-23 10:14:26,070 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=8, state=SUCCESS; CreateNamespaceProcedure, namespace=hbase in 20 msec 2023-07-23 10:14:26,083 DEBUG [Listener at localhost/34007-EventThread] zookeeper.ZKWatcher(600): master:34669-0x10191acac940000, quorum=127.0.0.1:60205, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/default 2023-07-23 10:14:26,085 DEBUG [Listener at localhost/34007-EventThread] zookeeper.ZKWatcher(600): master:34669-0x10191acac940000, quorum=127.0.0.1:60205, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/hbase 2023-07-23 10:14:26,085 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1083): Master has completed initialization 3.356sec 2023-07-23 10:14:26,088 INFO [master/jenkins-hbase4:0:becomeActiveMaster] quotas.MasterQuotaManager(97): Quota support disabled 2023-07-23 10:14:26,089 INFO [master/jenkins-hbase4:0:becomeActiveMaster] slowlog.SlowLogMasterService(57): Slow/Large requests logging to system table hbase:slowlog is disabled. Quitting. 
2023-07-23 10:14:26,090 INFO [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKWatcher(269): not a secure deployment, proceeding 2023-07-23 10:14:26,091 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,34669,1690107260184-ExpiredMobFileCleanerChore, period=86400, unit=SECONDS is enabled. 2023-07-23 10:14:26,092 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,34669,1690107260184-MobCompactionChore, period=604800, unit=SECONDS is enabled. 2023-07-23 10:14:26,100 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1175): Balancer post startup initialization complete, took 0 seconds 2023-07-23 10:14:26,177 DEBUG [Listener at localhost/34007] zookeeper.ReadOnlyZKClient(139): Connect 0x3208489b to 127.0.0.1:60205 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-23 10:14:26,183 DEBUG [Listener at localhost/34007] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@31811c8e, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-23 10:14:26,199 DEBUG [hconnection-0x6c5c034d-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-23 10:14:26,210 INFO [RS-EventLoopGroup-5-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:49450, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-23 10:14:26,220 INFO [Listener at localhost/34007] hbase.HBaseTestingUtility(1145): Minicluster is up; activeMaster=jenkins-hbase4.apache.org,34669,1690107260184 2023-07-23 10:14:26,230 DEBUG [Listener at localhost/34007] ipc.RpcConnection(124): Using SIMPLE authentication for service=MasterService, sasl=false 
2023-07-23 10:14:26,234 INFO [RS-EventLoopGroup-1-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:57512, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=MasterService 2023-07-23 10:14:26,248 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34669] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'TestCP', {TABLE_ATTRIBUTES => {coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.example.WriteHeavyIncrementObserver|1073741823|'}}, {NAME => 'cf', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-23 10:14:26,251 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34669] procedure2.ProcedureExecutor(1029): Stored pid=9, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=TestCP 2023-07-23 10:14:26,253 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=TestCP execute state=CREATE_TABLE_PRE_OPERATION 2023-07-23 10:14:26,256 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=TestCP execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-23 10:14:26,258 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34669] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "TestCP" procId is: 9 2023-07-23 10:14:26,259 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/.tmp/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210 2023-07-23 10:14:26,259 DEBUG [HFileArchiver-2] backup.HFileArchiver(153): Directory 
hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/.tmp/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210 empty. 2023-07-23 10:14:26,262 DEBUG [HFileArchiver-2] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/.tmp/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210 2023-07-23 10:14:26,262 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived TestCP regions 2023-07-23 10:14:26,269 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34669] master.MasterRpcServices(1230): Checking to see if procedure is done pid=9 2023-07-23 10:14:26,286 DEBUG [PEWorker-3] util.FSTableDescriptors(570): Wrote into hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/.tmp/data/default/TestCP/.tabledesc/.tableinfo.0000000001 2023-07-23 10:14:26,288 INFO [RegionOpenAndInit-TestCP-pool-0] regionserver.HRegion(7675): creating {ENCODED => f1d952fb54c89ff06ad39296e8b9a210, NAME => 'TestCP,,1690107266243.f1d952fb54c89ff06ad39296e8b9a210.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='TestCP', {TABLE_ATTRIBUTES => {coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.example.WriteHeavyIncrementObserver|1073741823|'}}, {NAME => 'cf', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/.tmp 2023-07-23 10:14:26,317 DEBUG [RegionOpenAndInit-TestCP-pool-0] regionserver.HRegion(866): Instantiated TestCP,,1690107266243.f1d952fb54c89ff06ad39296e8b9a210.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-23 10:14:26,317 DEBUG 
[RegionOpenAndInit-TestCP-pool-0] regionserver.HRegion(1604): Closing f1d952fb54c89ff06ad39296e8b9a210, disabling compactions & flushes 2023-07-23 10:14:26,317 INFO [RegionOpenAndInit-TestCP-pool-0] regionserver.HRegion(1626): Closing region TestCP,,1690107266243.f1d952fb54c89ff06ad39296e8b9a210. 2023-07-23 10:14:26,317 DEBUG [RegionOpenAndInit-TestCP-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on TestCP,,1690107266243.f1d952fb54c89ff06ad39296e8b9a210. 2023-07-23 10:14:26,317 DEBUG [RegionOpenAndInit-TestCP-pool-0] regionserver.HRegion(1714): Acquired close lock on TestCP,,1690107266243.f1d952fb54c89ff06ad39296e8b9a210. after waiting 0 ms 2023-07-23 10:14:26,317 DEBUG [RegionOpenAndInit-TestCP-pool-0] regionserver.HRegion(1724): Updates disabled for region TestCP,,1690107266243.f1d952fb54c89ff06ad39296e8b9a210. 2023-07-23 10:14:26,317 INFO [RegionOpenAndInit-TestCP-pool-0] regionserver.HRegion(1838): Closed TestCP,,1690107266243.f1d952fb54c89ff06ad39296e8b9a210. 2023-07-23 10:14:26,317 DEBUG [RegionOpenAndInit-TestCP-pool-0] regionserver.HRegion(1558): Region close journal for f1d952fb54c89ff06ad39296e8b9a210: 2023-07-23 10:14:26,321 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=TestCP execute state=CREATE_TABLE_ADD_TO_META 2023-07-23 10:14:26,323 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"TestCP,,1690107266243.f1d952fb54c89ff06ad39296e8b9a210.","families":{"info":[{"qualifier":"regioninfo","vlen":40,"tag":[],"timestamp":"1690107266323"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690107266323"}]},"ts":"1690107266323"} 2023-07-23 10:14:26,325 INFO [PEWorker-3] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 
2023-07-23 10:14:26,327 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=TestCP execute state=CREATE_TABLE_ASSIGN_REGIONS
2023-07-23 10:14:26,327 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"TestCP","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690107266327"}]},"ts":"1690107266327"}
2023-07-23 10:14:26,330 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=TestCP, state=ENABLING in hbase:meta
2023-07-23 10:14:26,334 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0}
2023-07-23 10:14:26,335 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0
2023-07-23 10:14:26,335 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0
2023-07-23 10:14:26,335 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0
2023-07-23 10:14:26,335 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1
2023-07-23 10:14:26,335 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=10, ppid=9, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=TestCP, region=f1d952fb54c89ff06ad39296e8b9a210, ASSIGN}]
2023-07-23 10:14:26,337 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=10, ppid=9, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=TestCP, region=f1d952fb54c89ff06ad39296e8b9a210, ASSIGN
2023-07-23 10:14:26,338 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=10, ppid=9, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=TestCP, region=f1d952fb54c89ff06ad39296e8b9a210, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,46313,1690107262103; forceNewPlan=false, retain=false
2023-07-23 10:14:26,374 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34669] master.MasterRpcServices(1230): Checking to see if procedure is done pid=9
2023-07-23 10:14:26,489 INFO [jenkins-hbase4:34669] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment.
2023-07-23 10:14:26,490 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=10 updating hbase:meta row=f1d952fb54c89ff06ad39296e8b9a210, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,46313,1690107262103
2023-07-23 10:14:26,490 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"TestCP,,1690107266243.f1d952fb54c89ff06ad39296e8b9a210.","families":{"info":[{"qualifier":"regioninfo","vlen":40,"tag":[],"timestamp":"1690107266490"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690107266490"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690107266490"}]},"ts":"1690107266490"}
2023-07-23 10:14:26,494 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=11, ppid=10, state=RUNNABLE; OpenRegionProcedure f1d952fb54c89ff06ad39296e8b9a210, server=jenkins-hbase4.apache.org,46313,1690107262103}]
2023-07-23 10:14:26,576 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34669] master.MasterRpcServices(1230): Checking to see if procedure is done pid=9
2023-07-23 10:14:26,648 DEBUG [RSProcedureDispatcher-pool-2] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,46313,1690107262103
2023-07-23 10:14:26,649 DEBUG [RSProcedureDispatcher-pool-2] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false
2023-07-23 10:14:26,652 INFO [RS-EventLoopGroup-3-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:36988, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService
2023-07-23 10:14:26,657 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open TestCP,,1690107266243.f1d952fb54c89ff06ad39296e8b9a210.
2023-07-23 10:14:26,657 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => f1d952fb54c89ff06ad39296e8b9a210, NAME => 'TestCP,,1690107266243.f1d952fb54c89ff06ad39296e8b9a210.', STARTKEY => '', ENDKEY => ''}
2023-07-23 10:14:26,658 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.example.WriteHeavyIncrementObserver with path null and priority 1073741823
2023-07-23 10:14:26,662 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.example.WriteHeavyIncrementObserver from HTD of TestCP successfully.
2023-07-23 10:14:26,663 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table TestCP f1d952fb54c89ff06ad39296e8b9a210
2023-07-23 10:14:26,663 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated TestCP,,1690107266243.f1d952fb54c89ff06ad39296e8b9a210.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable
2023-07-23 10:14:26,663 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for f1d952fb54c89ff06ad39296e8b9a210
2023-07-23 10:14:26,663 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for f1d952fb54c89ff06ad39296e8b9a210
2023-07-23 10:14:26,666 INFO [StoreOpener-f1d952fb54c89ff06ad39296e8b9a210-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family cf of region f1d952fb54c89ff06ad39296e8b9a210
2023-07-23 10:14:26,668 DEBUG [StoreOpener-f1d952fb54c89ff06ad39296e8b9a210-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/cf
2023-07-23 10:14:26,668 DEBUG [StoreOpener-f1d952fb54c89ff06ad39296e8b9a210-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/cf
2023-07-23 10:14:26,669 INFO [StoreOpener-f1d952fb54c89ff06ad39296e8b9a210-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region f1d952fb54c89ff06ad39296e8b9a210 columnFamilyName cf
2023-07-23 10:14:26,670 INFO [StoreOpener-f1d952fb54c89ff06ad39296e8b9a210-1] regionserver.HStore(310): Store=f1d952fb54c89ff06ad39296e8b9a210/cf, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE
2023-07-23 10:14:26,671 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210
2023-07-23 10:14:26,673 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210
2023-07-23 10:14:26,678 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for f1d952fb54c89ff06ad39296e8b9a210
2023-07-23 10:14:26,682 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1
2023-07-23 10:14:26,683 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened f1d952fb54c89ff06ad39296e8b9a210; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=131072, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11715494080, jitterRate=0.09109041094779968}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1}
2023-07-23 10:14:26,683 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for f1d952fb54c89ff06ad39296e8b9a210:
2023-07-23 10:14:26,685 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for TestCP,,1690107266243.f1d952fb54c89ff06ad39296e8b9a210., pid=11, masterSystemTime=1690107266648
2023-07-23 10:14:26,689 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for TestCP,,1690107266243.f1d952fb54c89ff06ad39296e8b9a210.
2023-07-23 10:14:26,690 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened TestCP,,1690107266243.f1d952fb54c89ff06ad39296e8b9a210.
2023-07-23 10:14:26,691 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=10 updating hbase:meta row=f1d952fb54c89ff06ad39296e8b9a210, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,46313,1690107262103
2023-07-23 10:14:26,691 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"TestCP,,1690107266243.f1d952fb54c89ff06ad39296e8b9a210.","families":{"info":[{"qualifier":"regioninfo","vlen":40,"tag":[],"timestamp":"1690107266691"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690107266691"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690107266691"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690107266691"}]},"ts":"1690107266691"}
2023-07-23 10:14:26,698 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=11, resume processing ppid=10
2023-07-23 10:14:26,698 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=11, ppid=10, state=SUCCESS; OpenRegionProcedure f1d952fb54c89ff06ad39296e8b9a210, server=jenkins-hbase4.apache.org,46313,1690107262103 in 200 msec
2023-07-23 10:14:26,702 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=10, resume processing ppid=9
2023-07-23 10:14:26,702 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=10, ppid=9, state=SUCCESS; TransitRegionStateProcedure table=TestCP, region=f1d952fb54c89ff06ad39296e8b9a210, ASSIGN in 363 msec
2023-07-23 10:14:26,704 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=TestCP execute state=CREATE_TABLE_UPDATE_DESC_CACHE
2023-07-23 10:14:26,704 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"TestCP","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690107266704"}]},"ts":"1690107266704"}
2023-07-23 10:14:26,706 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=TestCP, state=ENABLED in hbase:meta
2023-07-23 10:14:26,710 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=TestCP execute state=CREATE_TABLE_POST_OPERATION
2023-07-23 10:14:26,715 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=9, state=SUCCESS; CreateTableProcedure table=TestCP in 461 msec
2023-07-23 10:14:26,878 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34669] master.MasterRpcServices(1230): Checking to see if procedure is done pid=9
2023-07-23 10:14:26,878 INFO [Listener at localhost/34007] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:TestCP, procId: 9 completed
2023-07-23 10:14:26,912 INFO [Listener at localhost/34007] hbase.ResourceChecker(147): before: coprocessor.example.TestWriteHeavyIncrementObserver#test Thread=413, OpenFileDescriptor=731, MaxFileDescriptor=60000, SystemLoadAverage=395, ProcessCount=178, AvailableMemoryMB=5984
2023-07-23 10:14:26,922 DEBUG [increment-8] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false
2023-07-23 10:14:26,928 INFO [RS-EventLoopGroup-3-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:36996, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService
2023-07-23 10:14:27,164 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=46313] regionserver.HRegion(9158): Flush requested on f1d952fb54c89ff06ad39296e8b9a210
2023-07-23 10:14:27,166 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing f1d952fb54c89ff06ad39296e8b9a210 1/1 column families, dataSize=20.53 KB heapSize=64.13 KB
2023-07-23 10:14:27,394 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=20.74 KB at sequenceid=299 (bloomFilter=true), to=hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/.tmp/cf/c1c31871ba994c09a0966780e156215e
2023-07-23 10:14:27,472 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/.tmp/cf/c1c31871ba994c09a0966780e156215e as hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/cf/c1c31871ba994c09a0966780e156215e
2023-07-23 10:14:27,491 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/cf/c1c31871ba994c09a0966780e156215e, entries=2, sequenceid=299, filesize=4.8 K
2023-07-23 10:14:27,501 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~20.74 KB/21240, heapSize ~64.77 KB/66320, currentSize=19.83 KB/20304 for f1d952fb54c89ff06ad39296e8b9a210 in 335ms, sequenceid=299, compaction requested=false
2023-07-23 10:14:27,503 DEBUG [MemStoreFlusher.0] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'TestCP'
2023-07-23 10:14:27,506 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for f1d952fb54c89ff06ad39296e8b9a210:
2023-07-23 10:14:27,511 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=46313] regionserver.HRegion(9158): Flush requested on f1d952fb54c89ff06ad39296e8b9a210
2023-07-23 10:14:27,512 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing f1d952fb54c89ff06ad39296e8b9a210 1/1 column families, dataSize=20.53 KB heapSize=64.13 KB
2023-07-23 10:14:27,609 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=20.74 KB at sequenceid=597 (bloomFilter=true), to=hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/.tmp/cf/f4b91b9b5b584b2483ecfc3363c03172
2023-07-23 10:14:27,638 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/.tmp/cf/f4b91b9b5b584b2483ecfc3363c03172 as hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/cf/f4b91b9b5b584b2483ecfc3363c03172
2023-07-23 10:14:27,656 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/cf/f4b91b9b5b584b2483ecfc3363c03172, entries=2, sequenceid=597, filesize=4.8 K
2023-07-23 10:14:27,657 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~20.74 KB/21240, heapSize ~64.77 KB/66320, currentSize=9.70 KB/9936 for f1d952fb54c89ff06ad39296e8b9a210 in 145ms, sequenceid=597, compaction requested=false
2023-07-23 10:14:27,658 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for f1d952fb54c89ff06ad39296e8b9a210:
2023-07-23 10:14:27,730 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=46313] regionserver.HRegion(9158): Flush requested on f1d952fb54c89ff06ad39296e8b9a210
2023-07-23 10:14:27,730 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing f1d952fb54c89ff06ad39296e8b9a210 1/1 column families, dataSize=20.53 KB heapSize=64.13 KB
2023-07-23 10:14:27,821 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=20.67 KB at sequenceid=894 (bloomFilter=true), to=hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/.tmp/cf/0f7f5cbabe3d45a3a018215acfa9b882
2023-07-23 10:14:27,834 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/.tmp/cf/0f7f5cbabe3d45a3a018215acfa9b882 as hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/cf/0f7f5cbabe3d45a3a018215acfa9b882
2023-07-23 10:14:27,854 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/cf/0f7f5cbabe3d45a3a018215acfa9b882, entries=2, sequenceid=894, filesize=4.8 K
2023-07-23 10:14:27,856 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~20.67 KB/21168, heapSize ~64.55 KB/66096, currentSize=10.20 KB/10440 for f1d952fb54c89ff06ad39296e8b9a210 in 126ms, sequenceid=894, compaction requested=true
2023-07-23 10:14:27,856 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for f1d952fb54c89ff06ad39296e8b9a210:
2023-07-23 10:14:27,883 DEBUG [MemStoreFlusher.0] regionserver.CompactSplit(385): Small Compaction requested: system; Because: MemStoreFlusher.0; compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0
2023-07-23 10:14:27,883 DEBUG [RS:0;jenkins-hbase4:46313-shortCompactions-0] compactions.SortedCompactionPolicy(75): Selecting compaction from 3 store files, 0 compacting, 3 eligible, 16 blocking
2023-07-23 10:14:27,888 DEBUG [RS:0;jenkins-hbase4:46313-shortCompactions-0] compactions.ExploringCompactionPolicy(116): Exploring compaction algorithm has selected 3 files of size 14718 starting at candidate #0 after considering 1 permutations with 1 in ratio
2023-07-23 10:14:27,891 DEBUG [RS:0;jenkins-hbase4:46313-shortCompactions-0] regionserver.HStore(1912): f1d952fb54c89ff06ad39296e8b9a210/cf is initiating minor compaction (all files)
2023-07-23 10:14:27,891 INFO [RS:0;jenkins-hbase4:46313-shortCompactions-0] regionserver.HRegion(2259): Starting compaction of f1d952fb54c89ff06ad39296e8b9a210/cf in TestCP,,1690107266243.f1d952fb54c89ff06ad39296e8b9a210.
2023-07-23 10:14:27,892 INFO [RS:0;jenkins-hbase4:46313-shortCompactions-0] regionserver.HStore(1468): Starting compaction of [hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/cf/c1c31871ba994c09a0966780e156215e, hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/cf/f4b91b9b5b584b2483ecfc3363c03172, hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/cf/0f7f5cbabe3d45a3a018215acfa9b882] into tmpdir=hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/.tmp, totalSize=14.4 K
2023-07-23 10:14:27,894 DEBUG [RS:0;jenkins-hbase4:46313-shortCompactions-0] compactions.Compactor(207): Compacting c1c31871ba994c09a0966780e156215e, keycount=2, bloomtype=ROW, size=4.8 K, encoding=NONE, compression=NONE, seqNum=299, earliestPutTs=1730669841336320
2023-07-23 10:14:27,894 DEBUG [RS:0;jenkins-hbase4:46313-shortCompactions-0] compactions.Compactor(207): Compacting f4b91b9b5b584b2483ecfc3363c03172, keycount=2, bloomtype=ROW, size=4.8 K, encoding=NONE, compression=NONE, seqNum=597, earliestPutTs=1730669841580032
2023-07-23 10:14:27,895 DEBUG [RS:0;jenkins-hbase4:46313-shortCompactions-0] compactions.Compactor(207): Compacting 0f7f5cbabe3d45a3a018215acfa9b882, keycount=2, bloomtype=ROW, size=4.8 K, encoding=NONE, compression=NONE, seqNum=894, earliestPutTs=1730669841933312
2023-07-23 10:14:27,944 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46313] regionserver.HRegion(9158): Flush requested on f1d952fb54c89ff06ad39296e8b9a210
2023-07-23 10:14:27,944 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing f1d952fb54c89ff06ad39296e8b9a210 1/1 column families, dataSize=20.53 KB heapSize=64.13 KB
2023-07-23 10:14:27,961 INFO [RS:0;jenkins-hbase4:46313-shortCompactions-0] throttle.PressureAwareThroughputController(145): f1d952fb54c89ff06ad39296e8b9a210#cf#compaction#3 average throughput is 0.07 MB/second, slept 0 time(s) and total slept time is 0 ms. 0 active operations remaining, total limit is 50.00 MB/second
2023-07-23 10:14:28,057 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=21.30 KB at sequenceid=1200 (bloomFilter=true), to=hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/.tmp/cf/0bee2f956adc4238af9bace17b44e5a9
2023-07-23 10:14:28,081 DEBUG [RS:0;jenkins-hbase4:46313-shortCompactions-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/.tmp/cf/81080cdcfcfe453c974dbe963920e3ee as hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/cf/81080cdcfcfe453c974dbe963920e3ee
2023-07-23 10:14:28,083 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/.tmp/cf/0bee2f956adc4238af9bace17b44e5a9 as hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/cf/0bee2f956adc4238af9bace17b44e5a9
2023-07-23 10:14:28,099 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/cf/0bee2f956adc4238af9bace17b44e5a9, entries=2, sequenceid=1200, filesize=4.8 K
2023-07-23 10:14:28,103 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~21.30 KB/21816, heapSize ~66.52 KB/68112, currentSize=7.10 KB/7272 for f1d952fb54c89ff06ad39296e8b9a210 in 159ms, sequenceid=1200, compaction requested=false
2023-07-23 10:14:28,104 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for f1d952fb54c89ff06ad39296e8b9a210:
2023-07-23 10:14:28,116 INFO [RS:0;jenkins-hbase4:46313-shortCompactions-0] regionserver.HStore(1652): Completed compaction of 3 (all) file(s) in f1d952fb54c89ff06ad39296e8b9a210/cf of f1d952fb54c89ff06ad39296e8b9a210 into 81080cdcfcfe453c974dbe963920e3ee(size=4.8 K), total size for store is 9.6 K. This selection was in queue for 0sec, and took 0sec to execute.
2023-07-23 10:14:28,117 DEBUG [RS:0;jenkins-hbase4:46313-shortCompactions-0] regionserver.HRegion(2289): Compaction status journal for f1d952fb54c89ff06ad39296e8b9a210:
2023-07-23 10:14:28,117 INFO [RS:0;jenkins-hbase4:46313-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(627): Completed compaction region=TestCP,,1690107266243.f1d952fb54c89ff06ad39296e8b9a210., storeName=f1d952fb54c89ff06ad39296e8b9a210/cf, priority=13, startTime=1690107267860; duration=0sec
2023-07-23 10:14:28,118 DEBUG [RS:0;jenkins-hbase4:46313-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(659): Status compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0
2023-07-23 10:14:28,211 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=46313] regionserver.HRegion(9158): Flush requested on f1d952fb54c89ff06ad39296e8b9a210
2023-07-23 10:14:28,211 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing f1d952fb54c89ff06ad39296e8b9a210 1/1 column families, dataSize=20.53 KB heapSize=64.13 KB
2023-07-23 10:14:28,241 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=20.67 KB at sequenceid=1498 (bloomFilter=true), to=hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/.tmp/cf/ee945ca2eb4a4bee9dd12e5a6bae05f3
2023-07-23 10:14:28,253 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/.tmp/cf/ee945ca2eb4a4bee9dd12e5a6bae05f3 as hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/cf/ee945ca2eb4a4bee9dd12e5a6bae05f3
2023-07-23 10:14:28,265 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/cf/ee945ca2eb4a4bee9dd12e5a6bae05f3, entries=2, sequenceid=1498, filesize=4.8 K
2023-07-23 10:14:28,268 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~20.67 KB/21168, heapSize ~64.55 KB/66096, currentSize=5.84 KB/5976 for f1d952fb54c89ff06ad39296e8b9a210 in 57ms, sequenceid=1498, compaction requested=true
2023-07-23 10:14:28,268 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for f1d952fb54c89ff06ad39296e8b9a210:
2023-07-23 10:14:28,268 DEBUG [RS:0;jenkins-hbase4:46313-shortCompactions-0] compactions.SortedCompactionPolicy(75): Selecting compaction from 3 store files, 0 compacting, 3 eligible, 16 blocking
2023-07-23 10:14:28,269 DEBUG [MemStoreFlusher.0] regionserver.CompactSplit(385): Small Compaction requested: system; Because: MemStoreFlusher.0; compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0
2023-07-23 10:14:28,271 DEBUG [RS:0;jenkins-hbase4:46313-shortCompactions-0] compactions.ExploringCompactionPolicy(116): Exploring compaction algorithm has selected 3 files of size 14759 starting at candidate #0 after considering 1 permutations with 1 in ratio
2023-07-23 10:14:28,271 DEBUG [RS:0;jenkins-hbase4:46313-shortCompactions-0] regionserver.HStore(1912): f1d952fb54c89ff06ad39296e8b9a210/cf is initiating minor compaction (all files)
2023-07-23 10:14:28,271 INFO [RS:0;jenkins-hbase4:46313-shortCompactions-0] regionserver.HRegion(2259): Starting compaction of f1d952fb54c89ff06ad39296e8b9a210/cf in TestCP,,1690107266243.f1d952fb54c89ff06ad39296e8b9a210.
2023-07-23 10:14:28,272 INFO [RS:0;jenkins-hbase4:46313-shortCompactions-0] regionserver.HStore(1468): Starting compaction of [hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/cf/81080cdcfcfe453c974dbe963920e3ee, hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/cf/0bee2f956adc4238af9bace17b44e5a9, hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/cf/ee945ca2eb4a4bee9dd12e5a6bae05f3] into tmpdir=hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/.tmp, totalSize=14.4 K
2023-07-23 10:14:28,272 DEBUG [RS:0;jenkins-hbase4:46313-shortCompactions-0] compactions.Compactor(207): Compacting 81080cdcfcfe453c974dbe963920e3ee, keycount=2, bloomtype=ROW, size=4.8 K, encoding=NONE, compression=NONE, seqNum=894, earliestPutTs=1730669841336320
2023-07-23 10:14:28,274 DEBUG [RS:0;jenkins-hbase4:46313-shortCompactions-0] compactions.Compactor(207): Compacting 0bee2f956adc4238af9bace17b44e5a9, keycount=2, bloomtype=ROW, size=4.8 K, encoding=NONE, compression=NONE, seqNum=1200, earliestPutTs=1730669842155522
2023-07-23 10:14:28,275 DEBUG [RS:0;jenkins-hbase4:46313-shortCompactions-0] compactions.Compactor(207): Compacting ee945ca2eb4a4bee9dd12e5a6bae05f3, keycount=2, bloomtype=ROW, size=4.8 K, encoding=NONE, compression=NONE, seqNum=1498, earliestPutTs=1730669842391040
2023-07-23 10:14:28,301 INFO [RS:0;jenkins-hbase4:46313-shortCompactions-0] throttle.PressureAwareThroughputController(145): f1d952fb54c89ff06ad39296e8b9a210#cf#compaction#6 average throughput is 0.07 MB/second, slept 0 time(s) and total slept time is 0 ms. 0 active operations remaining, total limit is 50.00 MB/second
2023-07-23 10:14:28,371 DEBUG [RS:0;jenkins-hbase4:46313-shortCompactions-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/.tmp/cf/568053a291c54cf4ae40b151ee7ae985 as hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/cf/568053a291c54cf4ae40b151ee7ae985
2023-07-23 10:14:28,399 INFO [RS:0;jenkins-hbase4:46313-shortCompactions-0] regionserver.HStore(1652): Completed compaction of 3 (all) file(s) in f1d952fb54c89ff06ad39296e8b9a210/cf of f1d952fb54c89ff06ad39296e8b9a210 into 568053a291c54cf4ae40b151ee7ae985(size=4.9 K), total size for store is 4.9 K. This selection was in queue for 0sec, and took 0sec to execute.
2023-07-23 10:14:28,400 DEBUG [RS:0;jenkins-hbase4:46313-shortCompactions-0] regionserver.HRegion(2289): Compaction status journal for f1d952fb54c89ff06ad39296e8b9a210:
2023-07-23 10:14:28,403 INFO [RS:0;jenkins-hbase4:46313-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(627): Completed compaction region=TestCP,,1690107266243.f1d952fb54c89ff06ad39296e8b9a210., storeName=f1d952fb54c89ff06ad39296e8b9a210/cf, priority=13, startTime=1690107268268; duration=0sec
2023-07-23 10:14:28,403 DEBUG [RS:0;jenkins-hbase4:46313-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(659): Status compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0
2023-07-23 10:14:28,409 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=46313] regionserver.HRegion(9158): Flush requested on f1d952fb54c89ff06ad39296e8b9a210
2023-07-23 10:14:28,410 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing f1d952fb54c89ff06ad39296e8b9a210 1/1 column families, dataSize=20.53 KB heapSize=64.13 KB
2023-07-23 10:14:28,460 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=20.67 KB at sequenceid=1796 (bloomFilter=true), to=hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/.tmp/cf/a650378be8fc446b83c9b856e80f788b
2023-07-23 10:14:28,473 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/.tmp/cf/a650378be8fc446b83c9b856e80f788b as hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/cf/a650378be8fc446b83c9b856e80f788b
2023-07-23 10:14:28,487 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/cf/a650378be8fc446b83c9b856e80f788b, entries=2, sequenceid=1796, filesize=4.8 K
2023-07-23 10:14:28,489 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~20.67 KB/21168, heapSize ~64.55 KB/66096, currentSize=9.98 KB/10224 for f1d952fb54c89ff06ad39296e8b9a210 in 79ms, sequenceid=1796, compaction requested=false
2023-07-23 10:14:28,489 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for f1d952fb54c89ff06ad39296e8b9a210:
2023-07-23 10:14:28,543 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=46313] regionserver.HRegion(9158): Flush requested on f1d952fb54c89ff06ad39296e8b9a210
2023-07-23 10:14:28,544 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing f1d952fb54c89ff06ad39296e8b9a210 1/1 column families, dataSize=20.53 KB heapSize=64.13 KB
2023-07-23 10:14:28,591 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=20.74 KB at sequenceid=2094 (bloomFilter=true), to=hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/.tmp/cf/a2cb58064b0e40098b487f9d5c594f04
2023-07-23 10:14:28,617 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/.tmp/cf/a2cb58064b0e40098b487f9d5c594f04 as hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/cf/a2cb58064b0e40098b487f9d5c594f04
2023-07-23 10:14:28,652 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/cf/a2cb58064b0e40098b487f9d5c594f04, entries=2, sequenceid=2094, filesize=4.8 K
2023-07-23 10:14:28,656 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~20.74 KB/21240, heapSize ~64.77 KB/66320, currentSize=12.52 KB/12816 for f1d952fb54c89ff06ad39296e8b9a210 in 113ms, sequenceid=2094, compaction requested=true
2023-07-23 10:14:28,656 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for f1d952fb54c89ff06ad39296e8b9a210:
2023-07-23 10:14:28,656 DEBUG [MemStoreFlusher.0] regionserver.CompactSplit(385): Small Compaction requested: system; Because: MemStoreFlusher.0; compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0
2023-07-23 10:14:28,656 DEBUG [RS:0;jenkins-hbase4:46313-shortCompactions-0] compactions.SortedCompactionPolicy(75): Selecting compaction from 3 store files, 0 compacting, 3 eligible, 16 blocking
2023-07-23 10:14:28,659 DEBUG [RS:0;jenkins-hbase4:46313-shortCompactions-0] compactions.ExploringCompactionPolicy(116): Exploring compaction algorithm has selected 3 files of size 14862 starting at candidate #0 after considering 1 permutations with 1 in ratio
2023-07-23 10:14:28,659 DEBUG [RS:0;jenkins-hbase4:46313-shortCompactions-0] regionserver.HStore(1912): f1d952fb54c89ff06ad39296e8b9a210/cf is initiating minor compaction (all files)
2023-07-23 10:14:28,659 INFO [RS:0;jenkins-hbase4:46313-shortCompactions-0] regionserver.HRegion(2259): Starting compaction of f1d952fb54c89ff06ad39296e8b9a210/cf in TestCP,,1690107266243.f1d952fb54c89ff06ad39296e8b9a210.
2023-07-23 10:14:28,659 INFO [RS:0;jenkins-hbase4:46313-shortCompactions-0] regionserver.HStore(1468): Starting compaction of [hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/cf/568053a291c54cf4ae40b151ee7ae985, hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/cf/a650378be8fc446b83c9b856e80f788b, hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/cf/a2cb58064b0e40098b487f9d5c594f04] into tmpdir=hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/.tmp, totalSize=14.5 K
2023-07-23 10:14:28,660 DEBUG [RS:0;jenkins-hbase4:46313-shortCompactions-0] compactions.Compactor(207): Compacting 568053a291c54cf4ae40b151ee7ae985, keycount=2, bloomtype=ROW, size=4.9 K, encoding=NONE, compression=NONE, seqNum=1498, earliestPutTs=1730669841336320
2023-07-23 10:14:28,661 DEBUG [RS:0;jenkins-hbase4:46313-shortCompactions-0] compactions.Compactor(207): Compacting a650378be8fc446b83c9b856e80f788b, keycount=2, bloomtype=ROW, size=4.8 K, encoding=NONE, compression=NONE, seqNum=1796, earliestPutTs=1730669842648065
2023-07-23 10:14:28,662 DEBUG [RS:0;jenkins-hbase4:46313-shortCompactions-0] compactions.Compactor(207): Compacting a2cb58064b0e40098b487f9d5c594f04, keycount=2, bloomtype=ROW, size=4.8 K, encoding=NONE, compression=NONE, seqNum=2094, earliestPutTs=1730669842852864
2023-07-23 10:14:28,691 INFO [RS:0;jenkins-hbase4:46313-shortCompactions-0] throttle.PressureAwareThroughputController(145): f1d952fb54c89ff06ad39296e8b9a210#cf#compaction#9 average throughput is unlimited, slept 0 time(s) and total slept time is 0 ms. 0 active operations remaining, total limit is 50.00 MB/second
2023-07-23 10:14:28,703 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=46313] regionserver.HRegion(9158): Flush requested on f1d952fb54c89ff06ad39296e8b9a210
2023-07-23 10:14:28,703 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing f1d952fb54c89ff06ad39296e8b9a210 1/1 column families, dataSize=20.53 KB heapSize=64.13 KB
2023-07-23 10:14:28,761 DEBUG [RS:0;jenkins-hbase4:46313-shortCompactions-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/.tmp/cf/bf729376275f44e1acd4d71997d3ede7 as hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/cf/bf729376275f44e1acd4d71997d3ede7
2023-07-23 10:14:28,789 INFO [RS:0;jenkins-hbase4:46313-shortCompactions-0] regionserver.HStore(1652): Completed compaction of 3 (all) file(s) in f1d952fb54c89ff06ad39296e8b9a210/cf of f1d952fb54c89ff06ad39296e8b9a210 into bf729376275f44e1acd4d71997d3ede7(size=5.0 K), total size for store is 5.0 K. This selection was in queue for 0sec, and took 0sec to execute.
2023-07-23 10:14:28,789 DEBUG [RS:0;jenkins-hbase4:46313-shortCompactions-0] regionserver.HRegion(2289): Compaction status journal for f1d952fb54c89ff06ad39296e8b9a210:
2023-07-23 10:14:28,790 INFO [RS:0;jenkins-hbase4:46313-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(627): Completed compaction region=TestCP,,1690107266243.f1d952fb54c89ff06ad39296e8b9a210., storeName=f1d952fb54c89ff06ad39296e8b9a210/cf, priority=13, startTime=1690107268656; duration=0sec
2023-07-23 10:14:28,790 DEBUG [RS:0;jenkins-hbase4:46313-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(659): Status compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0
2023-07-23 10:14:29,084 WARN [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=46313] regionserver.HRegion(4965): Region is too busy due to exceeding memstore size limit.
org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=256.0 K, regionName=f1d952fb54c89ff06ad39296e8b9a210, server=jenkins-hbase4.apache.org,46313,1690107262103
	at org.apache.hadoop.hbase.regionserver.HRegion.checkResources(HRegion.java:4963)
	at org.apache.hadoop.hbase.regionserver.HRegion.increment(HRegion.java:8417)
	at org.apache.hadoop.hbase.regionserver.RSRpcServices.increment(RSRpcServices.java:713)
	at org.apache.hadoop.hbase.regionserver.RSRpcServices.mutate(RSRpcServices.java:2966)
	at org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:44994)
	at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387)
	at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132)
	at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369)
	at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349)
2023-07-23 10:14:29,084 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46313] regionserver.HRegion(4965): Region is too busy due to exceeding memstore size limit.
org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=256.0 K, regionName=f1d952fb54c89ff06ad39296e8b9a210, server=jenkins-hbase4.apache.org,46313,1690107262103
	at org.apache.hadoop.hbase.regionserver.HRegion.checkResources(HRegion.java:4963)
	at org.apache.hadoop.hbase.regionserver.HRegion.increment(HRegion.java:8417)
	at org.apache.hadoop.hbase.regionserver.RSRpcServices.increment(RSRpcServices.java:713)
	at org.apache.hadoop.hbase.regionserver.RSRpcServices.mutate(RSRpcServices.java:2966)
	at org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:44994)
	at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387)
	at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132)
	at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369)
	at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349)
2023-07-23 10:14:29,085 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=46313] ipc.CallRunner(144): callId: 3253 service: ClientService methodName: Mutate size: 199 connection: 172.31.14.131:36996 deadline: 1690107329082, exception=org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=256.0 K, regionName=f1d952fb54c89ff06ad39296e8b9a210, server=jenkins-hbase4.apache.org,46313,1690107262103
2023-07-23 10:14:29,085 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46313] ipc.CallRunner(144): callId: 3255 service: ClientService methodName: Mutate size: 200 connection: 172.31.14.131:36996 deadline: 1690107329083, exception=org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=256.0 K, regionName=f1d952fb54c89ff06ad39296e8b9a210, server=jenkins-hbase4.apache.org,46313,1690107262103
2023-07-23 10:14:29,084 WARN [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=46313] regionserver.HRegion(4965): Region is too busy due to exceeding memstore size limit.
org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=256.0 K, regionName=f1d952fb54c89ff06ad39296e8b9a210, server=jenkins-hbase4.apache.org,46313,1690107262103
	at org.apache.hadoop.hbase.regionserver.HRegion.checkResources(HRegion.java:4963)
	at org.apache.hadoop.hbase.regionserver.HRegion.increment(HRegion.java:8417)
	at org.apache.hadoop.hbase.regionserver.RSRpcServices.increment(RSRpcServices.java:713)
	at org.apache.hadoop.hbase.regionserver.RSRpcServices.mutate(RSRpcServices.java:2966)
	at org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:44994)
	at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387)
	at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132)
	at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369)
	at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349)
2023-07-23 10:14:29,085 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=46313] ipc.CallRunner(144): callId: 3254 service: ClientService methodName: Mutate size: 199 connection: 172.31.14.131:36996 deadline: 1690107329082, exception=org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=256.0 K, regionName=f1d952fb54c89ff06ad39296e8b9a210, server=jenkins-hbase4.apache.org,46313,1690107262103
2023-07-23 10:14:29,085 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46313] regionserver.HRegion(4965): Region is too busy due to exceeding memstore size limit.
org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=256.0 K, regionName=f1d952fb54c89ff06ad39296e8b9a210, server=jenkins-hbase4.apache.org,46313,1690107262103
	at org.apache.hadoop.hbase.regionserver.HRegion.checkResources(HRegion.java:4963)
	at org.apache.hadoop.hbase.regionserver.HRegion.increment(HRegion.java:8417)
	at org.apache.hadoop.hbase.regionserver.RSRpcServices.increment(RSRpcServices.java:713)
	at org.apache.hadoop.hbase.regionserver.RSRpcServices.mutate(RSRpcServices.java:2966)
	at org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:44994)
	at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387)
	at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132)
	at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369)
	at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349)
2023-07-23 10:14:29,085 WARN [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=46313] regionserver.HRegion(4965): Region is too busy due to exceeding memstore size limit.
org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=256.0 K, regionName=f1d952fb54c89ff06ad39296e8b9a210, server=jenkins-hbase4.apache.org,46313,1690107262103
	at org.apache.hadoop.hbase.regionserver.HRegion.checkResources(HRegion.java:4963)
	at org.apache.hadoop.hbase.regionserver.HRegion.increment(HRegion.java:8417)
	at org.apache.hadoop.hbase.regionserver.RSRpcServices.increment(RSRpcServices.java:713)
	at org.apache.hadoop.hbase.regionserver.RSRpcServices.mutate(RSRpcServices.java:2966)
	at org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:44994)
	at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387)
	at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132)
	at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369)
	at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349)
2023-07-23 10:14:29,085 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46313] ipc.CallRunner(144): callId: 3257 service: ClientService methodName: Mutate size: 200 connection: 172.31.14.131:36996 deadline: 1690107329083, exception=org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=256.0 K, regionName=f1d952fb54c89ff06ad39296e8b9a210, server=jenkins-hbase4.apache.org,46313,1690107262103
2023-07-23 10:14:29,085 WARN [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=46313] regionserver.HRegion(4965): Region is too busy due to exceeding memstore size limit.
org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=256.0 K, regionName=f1d952fb54c89ff06ad39296e8b9a210, server=jenkins-hbase4.apache.org,46313,1690107262103
	at org.apache.hadoop.hbase.regionserver.HRegion.checkResources(HRegion.java:4963)
	at org.apache.hadoop.hbase.regionserver.HRegion.increment(HRegion.java:8417)
	at org.apache.hadoop.hbase.regionserver.RSRpcServices.increment(RSRpcServices.java:713)
	at org.apache.hadoop.hbase.regionserver.RSRpcServices.mutate(RSRpcServices.java:2966)
	at org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:44994)
	at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387)
	at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132)
	at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369)
	at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349)
2023-07-23 10:14:29,086 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=46313] ipc.CallRunner(144): callId: 3258 service: ClientService methodName: Mutate size: 199 connection: 172.31.14.131:36996 deadline: 1690107329084, exception=org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=256.0 K, regionName=f1d952fb54c89ff06ad39296e8b9a210, server=jenkins-hbase4.apache.org,46313,1690107262103
2023-07-23 10:14:29,086 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=46313] ipc.CallRunner(144): callId: 3256 service: ClientService methodName: Mutate size: 200 connection: 172.31.14.131:36996 deadline: 1690107329083, exception=org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=256.0 K, regionName=f1d952fb54c89ff06ad39296e8b9a210, server=jenkins-hbase4.apache.org,46313,1690107262103
2023-07-23 10:14:29,086 WARN [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=46313] regionserver.HRegion(4965): Region is too busy due to exceeding memstore size limit.
org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=256.0 K, regionName=f1d952fb54c89ff06ad39296e8b9a210, server=jenkins-hbase4.apache.org,46313,1690107262103
	at org.apache.hadoop.hbase.regionserver.HRegion.checkResources(HRegion.java:4963)
	at org.apache.hadoop.hbase.regionserver.HRegion.increment(HRegion.java:8417)
	at org.apache.hadoop.hbase.regionserver.RSRpcServices.increment(RSRpcServices.java:713)
	at org.apache.hadoop.hbase.regionserver.RSRpcServices.mutate(RSRpcServices.java:2966)
	at org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:44994)
	at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387)
	at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132)
	at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369)
	at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349)
2023-07-23 10:14:29,086 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=46313] ipc.CallRunner(144): callId: 3259 service: ClientService methodName: Mutate size: 199 connection: 172.31.14.131:36996 deadline: 1690107329085, exception=org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=256.0 K, regionName=f1d952fb54c89ff06ad39296e8b9a210, server=jenkins-hbase4.apache.org,46313,1690107262103
2023-07-23 10:14:29,087 WARN [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=46313] regionserver.HRegion(4965): Region is too busy due to exceeding memstore size limit.
org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=256.0 K, regionName=f1d952fb54c89ff06ad39296e8b9a210, server=jenkins-hbase4.apache.org,46313,1690107262103
	at org.apache.hadoop.hbase.regionserver.HRegion.checkResources(HRegion.java:4963)
	at org.apache.hadoop.hbase.regionserver.HRegion.increment(HRegion.java:8417)
	at org.apache.hadoop.hbase.regionserver.RSRpcServices.increment(RSRpcServices.java:713)
	at org.apache.hadoop.hbase.regionserver.RSRpcServices.mutate(RSRpcServices.java:2966)
	at org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:44994)
	at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387)
	at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132)
	at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369)
	at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349)
2023-07-23 10:14:29,087 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46313] regionserver.HRegion(4965): Region is too busy due to exceeding memstore size limit.
org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=256.0 K, regionName=f1d952fb54c89ff06ad39296e8b9a210, server=jenkins-hbase4.apache.org,46313,1690107262103
	at org.apache.hadoop.hbase.regionserver.HRegion.checkResources(HRegion.java:4963)
	at org.apache.hadoop.hbase.regionserver.HRegion.increment(HRegion.java:8417)
	at org.apache.hadoop.hbase.regionserver.RSRpcServices.increment(RSRpcServices.java:713)
	at org.apache.hadoop.hbase.regionserver.RSRpcServices.mutate(RSRpcServices.java:2966)
	at org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:44994)
	at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387)
	at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132)
	at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369)
	at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349)
2023-07-23 10:14:29,087 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=46313] ipc.CallRunner(144): callId: 3260 service: ClientService methodName: Mutate size: 200 connection: 172.31.14.131:36996 deadline: 1690107329085, exception=org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=256.0 K, regionName=f1d952fb54c89ff06ad39296e8b9a210, server=jenkins-hbase4.apache.org,46313,1690107262103
2023-07-23 10:14:29,087 WARN [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=46313] regionserver.HRegion(4965): Region is too busy due to exceeding memstore size limit.
org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=256.0 K, regionName=f1d952fb54c89ff06ad39296e8b9a210, server=jenkins-hbase4.apache.org,46313,1690107262103
	at org.apache.hadoop.hbase.regionserver.HRegion.checkResources(HRegion.java:4963)
	at org.apache.hadoop.hbase.regionserver.HRegion.increment(HRegion.java:8417)
	at org.apache.hadoop.hbase.regionserver.RSRpcServices.increment(RSRpcServices.java:713)
	at org.apache.hadoop.hbase.regionserver.RSRpcServices.mutate(RSRpcServices.java:2966)
	at org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:44994)
	at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387)
	at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132)
	at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369)
	at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349)
2023-07-23 10:14:29,089 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=46313] ipc.CallRunner(144): callId: 3262 service: ClientService methodName: Mutate size: 200 connection: 172.31.14.131:36996 deadline: 1690107329086, exception=org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=256.0 K, regionName=f1d952fb54c89ff06ad39296e8b9a210, server=jenkins-hbase4.apache.org,46313,1690107262103
2023-07-23 10:14:29,088 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46313] ipc.CallRunner(144): callId: 3261 service: ClientService methodName: Mutate size: 199 connection: 172.31.14.131:36996 deadline: 1690107329086, exception=org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=256.0 K, regionName=f1d952fb54c89ff06ad39296e8b9a210, server=jenkins-hbase4.apache.org,46313,1690107262103
2023-07-23 10:14:29,195 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=22.43 KB at sequenceid=2416 (bloomFilter=true), to=hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/.tmp/cf/31aa24dc5ab84ff99d3465b93d25ad3b
2023-07-23 10:14:29,202 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46313] regionserver.HRegion(4965): Region is too busy due to exceeding memstore size limit.
org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=256.0 K, regionName=f1d952fb54c89ff06ad39296e8b9a210, server=jenkins-hbase4.apache.org,46313,1690107262103
	at org.apache.hadoop.hbase.regionserver.HRegion.checkResources(HRegion.java:4963)
	at org.apache.hadoop.hbase.regionserver.HRegion.increment(HRegion.java:8417)
	at org.apache.hadoop.hbase.regionserver.RSRpcServices.increment(RSRpcServices.java:713)
	at org.apache.hadoop.hbase.regionserver.RSRpcServices.mutate(RSRpcServices.java:2966)
	at org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:44994)
	at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387)
	at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132)
	at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369)
	at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349)
2023-07-23 10:14:29,202 WARN [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=46313] regionserver.HRegion(4965): Region is too busy due to exceeding memstore size limit.
org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=256.0 K, regionName=f1d952fb54c89ff06ad39296e8b9a210, server=jenkins-hbase4.apache.org,46313,1690107262103
	at org.apache.hadoop.hbase.regionserver.HRegion.checkResources(HRegion.java:4963)
	at org.apache.hadoop.hbase.regionserver.HRegion.increment(HRegion.java:8417)
	at org.apache.hadoop.hbase.regionserver.RSRpcServices.increment(RSRpcServices.java:713)
	at org.apache.hadoop.hbase.regionserver.RSRpcServices.mutate(RSRpcServices.java:2966)
	at org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:44994)
	at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387)
	at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132)
	at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369)
	at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349)
2023-07-23 10:14:29,203 WARN [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=46313] regionserver.HRegion(4965): Region is too busy due to exceeding memstore size limit.
org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=256.0 K, regionName=f1d952fb54c89ff06ad39296e8b9a210, server=jenkins-hbase4.apache.org,46313,1690107262103
	at org.apache.hadoop.hbase.regionserver.HRegion.checkResources(HRegion.java:4963)
	at org.apache.hadoop.hbase.regionserver.HRegion.increment(HRegion.java:8417)
	at org.apache.hadoop.hbase.regionserver.RSRpcServices.increment(RSRpcServices.java:713)
	at org.apache.hadoop.hbase.regionserver.RSRpcServices.mutate(RSRpcServices.java:2966)
	at org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:44994)
	at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387)
	at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132)
	at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369)
	at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349)
2023-07-23 10:14:29,203 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=46313] ipc.CallRunner(144): callId: 3274 service: ClientService methodName: Mutate size: 200 connection: 172.31.14.131:36996 deadline: 1690107329202, exception=org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=256.0 K, regionName=f1d952fb54c89ff06ad39296e8b9a210, server=jenkins-hbase4.apache.org,46313,1690107262103
2023-07-23 10:14:29,203 WARN [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=46313] regionserver.HRegion(4965): Region is too busy due to exceeding memstore size limit.
org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=256.0 K, regionName=f1d952fb54c89ff06ad39296e8b9a210, server=jenkins-hbase4.apache.org,46313,1690107262103
	at org.apache.hadoop.hbase.regionserver.HRegion.checkResources(HRegion.java:4963)
	at org.apache.hadoop.hbase.regionserver.HRegion.increment(HRegion.java:8417)
	at org.apache.hadoop.hbase.regionserver.RSRpcServices.increment(RSRpcServices.java:713)
	at org.apache.hadoop.hbase.regionserver.RSRpcServices.mutate(RSRpcServices.java:2966)
	at org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:44994)
	at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387)
	at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132)
	at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369)
	at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349)
2023-07-23 10:14:29,203 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=46313] ipc.CallRunner(144): callId: 3276 service: ClientService methodName: Mutate size: 199 connection: 172.31.14.131:36996 deadline: 1690107329202, exception=org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=256.0 K, regionName=f1d952fb54c89ff06ad39296e8b9a210, server=jenkins-hbase4.apache.org,46313,1690107262103
2023-07-23 10:14:29,204 WARN [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=46313] regionserver.HRegion(4965): Region is too busy due to exceeding memstore size limit.
org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=256.0 K, regionName=f1d952fb54c89ff06ad39296e8b9a210, server=jenkins-hbase4.apache.org,46313,1690107262103
	at org.apache.hadoop.hbase.regionserver.HRegion.checkResources(HRegion.java:4963)
	at org.apache.hadoop.hbase.regionserver.HRegion.increment(HRegion.java:8417)
	at org.apache.hadoop.hbase.regionserver.RSRpcServices.increment(RSRpcServices.java:713)
	at org.apache.hadoop.hbase.regionserver.RSRpcServices.mutate(RSRpcServices.java:2966)
	at org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:44994)
	at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387)
	at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132)
	at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369)
	at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349)
2023-07-23 10:14:29,205 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=46313] ipc.CallRunner(144): callId: 3277 service: ClientService methodName: Mutate size: 200 connection: 172.31.14.131:36996 deadline: 1690107329202, exception=org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=256.0 K, regionName=f1d952fb54c89ff06ad39296e8b9a210, server=jenkins-hbase4.apache.org,46313,1690107262103
2023-07-23 10:14:29,203 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46313] ipc.CallRunner(144): callId: 3273 service: ClientService methodName: Mutate size: 199 connection: 172.31.14.131:36996 deadline: 1690107329202, exception=org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=256.0 K, regionName=f1d952fb54c89ff06ad39296e8b9a210, server=jenkins-hbase4.apache.org,46313,1690107262103
2023-07-23 10:14:29,205 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46313] regionserver.HRegion(4965): Region is too busy due to exceeding memstore size limit.
org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=256.0 K, regionName=f1d952fb54c89ff06ad39296e8b9a210, server=jenkins-hbase4.apache.org,46313,1690107262103
	at org.apache.hadoop.hbase.regionserver.HRegion.checkResources(HRegion.java:4963)
	at org.apache.hadoop.hbase.regionserver.HRegion.increment(HRegion.java:8417)
	at org.apache.hadoop.hbase.regionserver.RSRpcServices.increment(RSRpcServices.java:713)
	at org.apache.hadoop.hbase.regionserver.RSRpcServices.mutate(RSRpcServices.java:2966)
	at org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:44994)
	at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387)
	at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132)
	at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369)
	at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349)
2023-07-23 10:14:29,205 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46313] ipc.CallRunner(144): callId: 3280 service: ClientService methodName: Mutate size: 200 connection: 172.31.14.131:36996 deadline: 1690107329202, exception=org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=256.0 K, regionName=f1d952fb54c89ff06ad39296e8b9a210, server=jenkins-hbase4.apache.org,46313,1690107262103
2023-07-23 10:14:29,206 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46313] regionserver.HRegion(4965): Region is too busy due to exceeding memstore size limit.
org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=256.0 K, regionName=f1d952fb54c89ff06ad39296e8b9a210, server=jenkins-hbase4.apache.org,46313,1690107262103
	at org.apache.hadoop.hbase.regionserver.HRegion.checkResources(HRegion.java:4963)
	at org.apache.hadoop.hbase.regionserver.HRegion.increment(HRegion.java:8417)
	at org.apache.hadoop.hbase.regionserver.RSRpcServices.increment(RSRpcServices.java:713)
	at org.apache.hadoop.hbase.regionserver.RSRpcServices.mutate(RSRpcServices.java:2966)
	at org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:44994)
	at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387)
	at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132)
	at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369)
	at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349)
2023-07-23 10:14:29,206 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46313] ipc.CallRunner(144): callId: 3279 service: ClientService methodName: Mutate size: 200 connection: 172.31.14.131:36996 deadline: 1690107329202, exception=org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=256.0 K, regionName=f1d952fb54c89ff06ad39296e8b9a210, server=jenkins-hbase4.apache.org,46313,1690107262103
2023-07-23 10:14:29,206 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46313] regionserver.HRegion(4965): Region is too busy due to exceeding memstore size limit.
org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=256.0 K, regionName=f1d952fb54c89ff06ad39296e8b9a210, server=jenkins-hbase4.apache.org,46313,1690107262103
	at org.apache.hadoop.hbase.regionserver.HRegion.checkResources(HRegion.java:4963)
	at org.apache.hadoop.hbase.regionserver.HRegion.increment(HRegion.java:8417)
	at org.apache.hadoop.hbase.regionserver.RSRpcServices.increment(RSRpcServices.java:713)
	at org.apache.hadoop.hbase.regionserver.RSRpcServices.mutate(RSRpcServices.java:2966)
	at org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:44994)
	at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387)
	at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132)
	at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369)
	at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349)
2023-07-23 10:14:29,206 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46313] ipc.CallRunner(144): callId: 3281 service: ClientService methodName: Mutate size: 200 connection: 172.31.14.131:36996 deadline: 1690107329202, exception=org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=256.0 K, regionName=f1d952fb54c89ff06ad39296e8b9a210, server=jenkins-hbase4.apache.org,46313,1690107262103
2023-07-23 10:14:29,206 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46313] regionserver.HRegion(4965): Region is too busy due to exceeding memstore size limit.
org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=256.0 K, regionName=f1d952fb54c89ff06ad39296e8b9a210, server=jenkins-hbase4.apache.org,46313,1690107262103
	at org.apache.hadoop.hbase.regionserver.HRegion.checkResources(HRegion.java:4963)
	at org.apache.hadoop.hbase.regionserver.HRegion.increment(HRegion.java:8417)
	at org.apache.hadoop.hbase.regionserver.RSRpcServices.increment(RSRpcServices.java:713)
	at org.apache.hadoop.hbase.regionserver.RSRpcServices.mutate(RSRpcServices.java:2966)
	at org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:44994)
	at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387)
	at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132)
	at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369)
	at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349)
2023-07-23 10:14:29,207 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46313] ipc.CallRunner(144): callId: 3282 service: ClientService methodName: Mutate size: 199 connection: 172.31.14.131:36996 deadline: 1690107329202, exception=org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=256.0 K, regionName=f1d952fb54c89ff06ad39296e8b9a210, server=jenkins-hbase4.apache.org,46313,1690107262103
2023-07-23 10:14:29,203 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=46313] ipc.CallRunner(144): callId: 3275 service: ClientService methodName: Mutate size: 199 connection: 172.31.14.131:36996 deadline: 1690107329202, exception=org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=256.0 K, regionName=f1d952fb54c89ff06ad39296e8b9a210, server=jenkins-hbase4.apache.org,46313,1690107262103
2023-07-23 10:14:29,207 WARN [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=46313] regionserver.HRegion(4965): Region is too busy due to exceeding memstore size limit.
org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=256.0 K, regionName=f1d952fb54c89ff06ad39296e8b9a210, server=jenkins-hbase4.apache.org,46313,1690107262103
	at org.apache.hadoop.hbase.regionserver.HRegion.checkResources(HRegion.java:4963)
	at org.apache.hadoop.hbase.regionserver.HRegion.increment(HRegion.java:8417)
	at org.apache.hadoop.hbase.regionserver.RSRpcServices.increment(RSRpcServices.java:713)
	at org.apache.hadoop.hbase.regionserver.RSRpcServices.mutate(RSRpcServices.java:2966)
	at org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:44994)
	at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387)
	at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132)
	at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369)
	at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349)
2023-07-23 10:14:29,210 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=46313] ipc.CallRunner(144): callId: 3278 service: ClientService methodName: Mutate size: 199 connection: 172.31.14.131:36996 deadline: 1690107329202, exception=org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=256.0 K, regionName=f1d952fb54c89ff06ad39296e8b9a210, server=jenkins-hbase4.apache.org,46313,1690107262103
2023-07-23 10:14:29,219 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/.tmp/cf/31aa24dc5ab84ff99d3465b93d25ad3b as hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/cf/31aa24dc5ab84ff99d3465b93d25ad3b
2023-07-23 10:14:29,230 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/cf/31aa24dc5ab84ff99d3465b93d25ad3b, entries=2, sequenceid=2416, filesize=4.8 K
2023-07-23 10:14:29,231 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~22.43 KB/22968, heapSize ~70.02 KB/71696, currentSize=59.91 KB/61344 for f1d952fb54c89ff06ad39296e8b9a210 in 528ms, sequenceid=2416, compaction requested=false
2023-07-23 10:14:29,231 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for f1d952fb54c89ff06ad39296e8b9a210:
2023-07-23 10:14:29,409 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=46313] regionserver.HRegion(9158): Flush requested on f1d952fb54c89ff06ad39296e8b9a210
2023-07-23 10:14:29,409 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing f1d952fb54c89ff06ad39296e8b9a210 1/1 column families, dataSize=60.05 KB heapSize=187.06 KB
2023-07-23 10:14:29,471 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=60.26 KB at sequenceid=3277 (bloomFilter=true), to=hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/.tmp/cf/f6139c15e3984249996cebc5b93ab027
2023-07-23 10:14:29,483 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/.tmp/cf/f6139c15e3984249996cebc5b93ab027 as hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/cf/f6139c15e3984249996cebc5b93ab027
2023-07-23 10:14:29,506 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added
hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/cf/f6139c15e3984249996cebc5b93ab027, entries=2, sequenceid=3277, filesize=4.8 K 2023-07-23 10:14:29,507 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~60.26 KB/61704, heapSize ~187.70 KB/192208, currentSize=11.81 KB/12096 for f1d952fb54c89ff06ad39296e8b9a210 in 98ms, sequenceid=3277, compaction requested=true 2023-07-23 10:14:29,507 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for f1d952fb54c89ff06ad39296e8b9a210: 2023-07-23 10:14:29,507 DEBUG [MemStoreFlusher.0] regionserver.CompactSplit(385): Small Compaction requested: system; Because: MemStoreFlusher.0; compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-07-23 10:14:29,507 DEBUG [RS:0;jenkins-hbase4:46313-shortCompactions-0] compactions.SortedCompactionPolicy(75): Selecting compaction from 3 store files, 0 compacting, 3 eligible, 16 blocking 2023-07-23 10:14:29,509 DEBUG [RS:0;jenkins-hbase4:46313-shortCompactions-0] compactions.ExploringCompactionPolicy(116): Exploring compaction algorithm has selected 3 files of size 14964 starting at candidate #0 after considering 1 permutations with 1 in ratio 2023-07-23 10:14:29,509 DEBUG [RS:0;jenkins-hbase4:46313-shortCompactions-0] regionserver.HStore(1912): f1d952fb54c89ff06ad39296e8b9a210/cf is initiating minor compaction (all files) 2023-07-23 10:14:29,509 INFO [RS:0;jenkins-hbase4:46313-shortCompactions-0] regionserver.HRegion(2259): Starting compaction of f1d952fb54c89ff06ad39296e8b9a210/cf in TestCP,,1690107266243.f1d952fb54c89ff06ad39296e8b9a210. 
2023-07-23 10:14:29,510 INFO [RS:0;jenkins-hbase4:46313-shortCompactions-0] regionserver.HStore(1468): Starting compaction of [hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/cf/bf729376275f44e1acd4d71997d3ede7, hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/cf/31aa24dc5ab84ff99d3465b93d25ad3b, hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/cf/f6139c15e3984249996cebc5b93ab027] into tmpdir=hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/.tmp, totalSize=14.6 K 2023-07-23 10:14:29,510 DEBUG [RS:0;jenkins-hbase4:46313-shortCompactions-0] compactions.Compactor(207): Compacting bf729376275f44e1acd4d71997d3ede7, keycount=2, bloomtype=ROW, size=5.0 K, encoding=NONE, compression=NONE, seqNum=2094, earliestPutTs=1730669841336320 2023-07-23 10:14:29,511 DEBUG [RS:0;jenkins-hbase4:46313-shortCompactions-0] compactions.Compactor(207): Compacting 31aa24dc5ab84ff99d3465b93d25ad3b, keycount=2, bloomtype=ROW, size=4.8 K, encoding=NONE, compression=NONE, seqNum=2416, earliestPutTs=1730669842989056 2023-07-23 10:14:29,512 DEBUG [RS:0;jenkins-hbase4:46313-shortCompactions-0] compactions.Compactor(207): Compacting f6139c15e3984249996cebc5b93ab027, keycount=2, bloomtype=ROW, size=4.8 K, encoding=NONE, compression=NONE, seqNum=3277, earliestPutTs=1730669843161088 2023-07-23 10:14:29,555 INFO [RS:0;jenkins-hbase4:46313-shortCompactions-0] throttle.PressureAwareThroughputController(145): f1d952fb54c89ff06ad39296e8b9a210#cf#compaction#12 average throughput is unlimited, slept 0 time(s) and total slept time is 0 ms. 
0 active operations remaining, total limit is 50.00 MB/second 2023-07-23 10:14:29,597 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46313] regionserver.HRegion(9158): Flush requested on f1d952fb54c89ff06ad39296e8b9a210 2023-07-23 10:14:29,597 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing f1d952fb54c89ff06ad39296e8b9a210 1/1 column families, dataSize=20.60 KB heapSize=64.34 KB 2023-07-23 10:14:29,650 DEBUG [RS:0;jenkins-hbase4:46313-shortCompactions-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/.tmp/cf/8f667d7e2428400b9fe5a88c49c07162 as hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/cf/8f667d7e2428400b9fe5a88c49c07162 2023-07-23 10:14:29,675 INFO [RS:0;jenkins-hbase4:46313-shortCompactions-0] regionserver.HStore(1652): Completed compaction of 3 (all) file(s) in f1d952fb54c89ff06ad39296e8b9a210/cf of f1d952fb54c89ff06ad39296e8b9a210 into 8f667d7e2428400b9fe5a88c49c07162(size=5.1 K), total size for store is 5.1 K. This selection was in queue for 0sec, and took 0sec to execute. 
2023-07-23 10:14:29,675 DEBUG [RS:0;jenkins-hbase4:46313-shortCompactions-0] regionserver.HRegion(2289): Compaction status journal for f1d952fb54c89ff06ad39296e8b9a210: 2023-07-23 10:14:29,675 INFO [RS:0;jenkins-hbase4:46313-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(627): Completed compaction region=TestCP,,1690107266243.f1d952fb54c89ff06ad39296e8b9a210., storeName=f1d952fb54c89ff06ad39296e8b9a210/cf, priority=13, startTime=1690107269507; duration=0sec 2023-07-23 10:14:29,675 DEBUG [RS:0;jenkins-hbase4:46313-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(659): Status compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-07-23 10:14:29,678 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=21.52 KB at sequenceid=3586 (bloomFilter=true), to=hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/.tmp/cf/078135ca10a14589be9c8575b59d3a92 2023-07-23 10:14:29,692 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/.tmp/cf/078135ca10a14589be9c8575b59d3a92 as hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/cf/078135ca10a14589be9c8575b59d3a92 2023-07-23 10:14:29,705 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/cf/078135ca10a14589be9c8575b59d3a92, entries=2, sequenceid=3586, filesize=4.8 K 2023-07-23 10:14:29,712 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~21.52 KB/22032, heapSize ~67.17 KB/68784, currentSize=9.35 KB/9576 for f1d952fb54c89ff06ad39296e8b9a210 in 115ms, 
sequenceid=3586, compaction requested=false 2023-07-23 10:14:29,712 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for f1d952fb54c89ff06ad39296e8b9a210: 2023-07-23 10:14:29,766 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46313] regionserver.HRegion(9158): Flush requested on f1d952fb54c89ff06ad39296e8b9a210 2023-07-23 10:14:29,766 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing f1d952fb54c89ff06ad39296e8b9a210 1/1 column families, dataSize=20.60 KB heapSize=64.34 KB 2023-07-23 10:14:29,861 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=20.81 KB at sequenceid=3886 (bloomFilter=true), to=hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/.tmp/cf/2b8f03c3a649439bafb7a61e3c7f9133 2023-07-23 10:14:29,872 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/.tmp/cf/2b8f03c3a649439bafb7a61e3c7f9133 as hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/cf/2b8f03c3a649439bafb7a61e3c7f9133 2023-07-23 10:14:29,886 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/cf/2b8f03c3a649439bafb7a61e3c7f9133, entries=2, sequenceid=3886, filesize=4.8 K 2023-07-23 10:14:29,888 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~20.81 KB/21312, heapSize ~64.98 KB/66544, currentSize=14.48 KB/14832 for f1d952fb54c89ff06ad39296e8b9a210 in 122ms, sequenceid=3886, compaction requested=true 2023-07-23 10:14:29,889 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 
f1d952fb54c89ff06ad39296e8b9a210: 2023-07-23 10:14:29,890 DEBUG [RS:0;jenkins-hbase4:46313-shortCompactions-0] compactions.SortedCompactionPolicy(75): Selecting compaction from 3 store files, 0 compacting, 3 eligible, 16 blocking 2023-07-23 10:14:29,890 DEBUG [MemStoreFlusher.0] regionserver.CompactSplit(385): Small Compaction requested: system; Because: MemStoreFlusher.0; compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-07-23 10:14:29,892 DEBUG [RS:0;jenkins-hbase4:46313-shortCompactions-0] compactions.ExploringCompactionPolicy(116): Exploring compaction algorithm has selected 3 files of size 15066 starting at candidate #0 after considering 1 permutations with 1 in ratio 2023-07-23 10:14:29,892 DEBUG [RS:0;jenkins-hbase4:46313-shortCompactions-0] regionserver.HStore(1912): f1d952fb54c89ff06ad39296e8b9a210/cf is initiating minor compaction (all files) 2023-07-23 10:14:29,892 INFO [RS:0;jenkins-hbase4:46313-shortCompactions-0] regionserver.HRegion(2259): Starting compaction of f1d952fb54c89ff06ad39296e8b9a210/cf in TestCP,,1690107266243.f1d952fb54c89ff06ad39296e8b9a210. 
2023-07-23 10:14:29,893 INFO [RS:0;jenkins-hbase4:46313-shortCompactions-0] regionserver.HStore(1468): Starting compaction of [hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/cf/8f667d7e2428400b9fe5a88c49c07162, hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/cf/078135ca10a14589be9c8575b59d3a92, hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/cf/2b8f03c3a649439bafb7a61e3c7f9133] into tmpdir=hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/.tmp, totalSize=14.7 K 2023-07-23 10:14:29,893 DEBUG [RS:0;jenkins-hbase4:46313-shortCompactions-0] compactions.Compactor(207): Compacting 8f667d7e2428400b9fe5a88c49c07162, keycount=2, bloomtype=ROW, size=5.1 K, encoding=NONE, compression=NONE, seqNum=3277, earliestPutTs=1730669841336320 2023-07-23 10:14:29,894 DEBUG [RS:0;jenkins-hbase4:46313-shortCompactions-0] compactions.Compactor(207): Compacting 078135ca10a14589be9c8575b59d3a92, keycount=2, bloomtype=ROW, size=4.8 K, encoding=NONE, compression=NONE, seqNum=3586, earliestPutTs=1730669843875840 2023-07-23 10:14:29,895 DEBUG [RS:0;jenkins-hbase4:46313-shortCompactions-0] compactions.Compactor(207): Compacting 2b8f03c3a649439bafb7a61e3c7f9133, keycount=2, bloomtype=ROW, size=4.8 K, encoding=NONE, compression=NONE, seqNum=3886, earliestPutTs=1730669844080640 2023-07-23 10:14:29,915 INFO [RS:0;jenkins-hbase4:46313-shortCompactions-0] throttle.PressureAwareThroughputController(145): f1d952fb54c89ff06ad39296e8b9a210#cf#compaction#15 average throughput is unlimited, slept 0 time(s) and total slept time is 0 ms. 
0 active operations remaining, total limit is 50.00 MB/second 2023-07-23 10:14:29,947 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=46313] regionserver.HRegion(9158): Flush requested on f1d952fb54c89ff06ad39296e8b9a210 2023-07-23 10:14:29,951 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing f1d952fb54c89ff06ad39296e8b9a210 1/1 column families, dataSize=20.60 KB heapSize=64.34 KB 2023-07-23 10:14:30,022 DEBUG [RS:0;jenkins-hbase4:46313-shortCompactions-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/.tmp/cf/6458127e259d4c0daabda1c1c69cc9b2 as hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/cf/6458127e259d4c0daabda1c1c69cc9b2 2023-07-23 10:14:30,034 INFO [RS:0;jenkins-hbase4:46313-shortCompactions-0] regionserver.HStore(1652): Completed compaction of 3 (all) file(s) in f1d952fb54c89ff06ad39296e8b9a210/cf of f1d952fb54c89ff06ad39296e8b9a210 into 6458127e259d4c0daabda1c1c69cc9b2(size=5.2 K), total size for store is 5.2 K. This selection was in queue for 0sec, and took 0sec to execute. 
2023-07-23 10:14:30,034 DEBUG [RS:0;jenkins-hbase4:46313-shortCompactions-0] regionserver.HRegion(2289): Compaction status journal for f1d952fb54c89ff06ad39296e8b9a210: 2023-07-23 10:14:30,035 INFO [RS:0;jenkins-hbase4:46313-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(627): Completed compaction region=TestCP,,1690107266243.f1d952fb54c89ff06ad39296e8b9a210., storeName=f1d952fb54c89ff06ad39296e8b9a210/cf, priority=13, startTime=1690107269889; duration=0sec 2023-07-23 10:14:30,036 DEBUG [RS:0;jenkins-hbase4:46313-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(659): Status compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-07-23 10:14:30,093 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=20.81 KB at sequenceid=4185 (bloomFilter=true), to=hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/.tmp/cf/add55c59333847efb169a6a2ff688c30 2023-07-23 10:14:30,132 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/.tmp/cf/add55c59333847efb169a6a2ff688c30 as hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/cf/add55c59333847efb169a6a2ff688c30 2023-07-23 10:14:30,149 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/cf/add55c59333847efb169a6a2ff688c30, entries=2, sequenceid=4185, filesize=4.8 K 2023-07-23 10:14:30,156 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~20.81 KB/21312, heapSize ~64.98 KB/66544, currentSize=18.98 KB/19440 for f1d952fb54c89ff06ad39296e8b9a210 in 205ms, 
sequenceid=4185, compaction requested=false 2023-07-23 10:14:30,156 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for f1d952fb54c89ff06ad39296e8b9a210: 2023-07-23 10:14:30,247 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=46313] regionserver.HRegion(9158): Flush requested on f1d952fb54c89ff06ad39296e8b9a210 2023-07-23 10:14:30,247 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing f1d952fb54c89ff06ad39296e8b9a210 1/1 column families, dataSize=20.53 KB heapSize=64.13 KB 2023-07-23 10:14:30,332 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=20.74 KB at sequenceid=4484 (bloomFilter=true), to=hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/.tmp/cf/349cc493049442288ee858b6afb542e9 2023-07-23 10:14:30,367 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/.tmp/cf/349cc493049442288ee858b6afb542e9 as hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/cf/349cc493049442288ee858b6afb542e9 2023-07-23 10:14:30,383 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/cf/349cc493049442288ee858b6afb542e9, entries=2, sequenceid=4484, filesize=4.8 K 2023-07-23 10:14:30,385 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~20.74 KB/21240, heapSize ~64.77 KB/66320, currentSize=15.12 KB/15480 for f1d952fb54c89ff06ad39296e8b9a210 in 138ms, sequenceid=4484, compaction requested=true 2023-07-23 10:14:30,386 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 
f1d952fb54c89ff06ad39296e8b9a210: 2023-07-23 10:14:30,386 DEBUG [MemStoreFlusher.0] regionserver.CompactSplit(385): Small Compaction requested: system; Because: MemStoreFlusher.0; compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-07-23 10:14:30,386 DEBUG [RS:0;jenkins-hbase4:46313-shortCompactions-0] compactions.SortedCompactionPolicy(75): Selecting compaction from 3 store files, 0 compacting, 3 eligible, 16 blocking 2023-07-23 10:14:30,388 DEBUG [RS:0;jenkins-hbase4:46313-shortCompactions-0] compactions.ExploringCompactionPolicy(116): Exploring compaction algorithm has selected 3 files of size 15168 starting at candidate #0 after considering 1 permutations with 1 in ratio 2023-07-23 10:14:30,388 DEBUG [RS:0;jenkins-hbase4:46313-shortCompactions-0] regionserver.HStore(1912): f1d952fb54c89ff06ad39296e8b9a210/cf is initiating minor compaction (all files) 2023-07-23 10:14:30,388 INFO [RS:0;jenkins-hbase4:46313-shortCompactions-0] regionserver.HRegion(2259): Starting compaction of f1d952fb54c89ff06ad39296e8b9a210/cf in TestCP,,1690107266243.f1d952fb54c89ff06ad39296e8b9a210. 
2023-07-23 10:14:30,388 INFO [RS:0;jenkins-hbase4:46313-shortCompactions-0] regionserver.HStore(1468): Starting compaction of [hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/cf/6458127e259d4c0daabda1c1c69cc9b2, hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/cf/add55c59333847efb169a6a2ff688c30, hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/cf/349cc493049442288ee858b6afb542e9] into tmpdir=hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/.tmp, totalSize=14.8 K 2023-07-23 10:14:30,389 DEBUG [RS:0;jenkins-hbase4:46313-shortCompactions-0] compactions.Compactor(207): Compacting 6458127e259d4c0daabda1c1c69cc9b2, keycount=2, bloomtype=ROW, size=5.2 K, encoding=NONE, compression=NONE, seqNum=3886, earliestPutTs=1730669841336320 2023-07-23 10:14:30,390 DEBUG [RS:0;jenkins-hbase4:46313-shortCompactions-0] compactions.Compactor(207): Compacting add55c59333847efb169a6a2ff688c30, keycount=2, bloomtype=ROW, size=4.8 K, encoding=NONE, compression=NONE, seqNum=4185, earliestPutTs=1730669844240386 2023-07-23 10:14:30,393 DEBUG [RS:0;jenkins-hbase4:46313-shortCompactions-0] compactions.Compactor(207): Compacting 349cc493049442288ee858b6afb542e9, keycount=2, bloomtype=ROW, size=4.8 K, encoding=NONE, compression=NONE, seqNum=4484, earliestPutTs=1730669844430848 2023-07-23 10:14:30,423 INFO [RS:0;jenkins-hbase4:46313-shortCompactions-0] throttle.PressureAwareThroughputController(145): f1d952fb54c89ff06ad39296e8b9a210#cf#compaction#18 average throughput is 0.07 MB/second, slept 0 time(s) and total slept time is 0 ms. 
0 active operations remaining, total limit is 50.00 MB/second 2023-07-23 10:14:30,423 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=46313] regionserver.HRegion(9158): Flush requested on f1d952fb54c89ff06ad39296e8b9a210 2023-07-23 10:14:30,425 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing f1d952fb54c89ff06ad39296e8b9a210 1/1 column families, dataSize=20.60 KB heapSize=64.34 KB 2023-07-23 10:14:30,559 DEBUG [RS:0;jenkins-hbase4:46313-shortCompactions-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/.tmp/cf/7df8ee60b926462ab77378f458dcc518 as hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/cf/7df8ee60b926462ab77378f458dcc518 2023-07-23 10:14:30,563 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=20.81 KB at sequenceid=4783 (bloomFilter=true), to=hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/.tmp/cf/7b4cae49d85d4f3391df25646eb4bfb6 2023-07-23 10:14:30,575 INFO [RS:0;jenkins-hbase4:46313-shortCompactions-0] regionserver.HStore(1652): Completed compaction of 3 (all) file(s) in f1d952fb54c89ff06ad39296e8b9a210/cf of f1d952fb54c89ff06ad39296e8b9a210 into 7df8ee60b926462ab77378f458dcc518(size=5.3 K), total size for store is 5.3 K. This selection was in queue for 0sec, and took 0sec to execute. 
2023-07-23 10:14:30,575 DEBUG [RS:0;jenkins-hbase4:46313-shortCompactions-0] regionserver.HRegion(2289): Compaction status journal for f1d952fb54c89ff06ad39296e8b9a210: 2023-07-23 10:14:30,575 INFO [RS:0;jenkins-hbase4:46313-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(627): Completed compaction region=TestCP,,1690107266243.f1d952fb54c89ff06ad39296e8b9a210., storeName=f1d952fb54c89ff06ad39296e8b9a210/cf, priority=13, startTime=1690107270386; duration=0sec 2023-07-23 10:14:30,575 DEBUG [RS:0;jenkins-hbase4:46313-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(659): Status compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-07-23 10:14:30,600 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/.tmp/cf/7b4cae49d85d4f3391df25646eb4bfb6 as hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/cf/7b4cae49d85d4f3391df25646eb4bfb6 2023-07-23 10:14:30,609 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/cf/7b4cae49d85d4f3391df25646eb4bfb6, entries=2, sequenceid=4783, filesize=4.8 K 2023-07-23 10:14:30,610 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~20.81 KB/21312, heapSize ~64.98 KB/66544, currentSize=18.21 KB/18648 for f1d952fb54c89ff06ad39296e8b9a210 in 186ms, sequenceid=4783, compaction requested=false 2023-07-23 10:14:30,610 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for f1d952fb54c89ff06ad39296e8b9a210: 2023-07-23 10:14:30,622 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=46313] regionserver.HRegion(9158): Flush requested on f1d952fb54c89ff06ad39296e8b9a210 
2023-07-23 10:14:30,622 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing f1d952fb54c89ff06ad39296e8b9a210 1/1 column families, dataSize=20.60 KB heapSize=64.34 KB 2023-07-23 10:14:30,684 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=20.81 KB at sequenceid=5083 (bloomFilter=true), to=hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/.tmp/cf/2a4062cbb1c7484da2bf5ea3eeeeb5e7 2023-07-23 10:14:30,694 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/.tmp/cf/2a4062cbb1c7484da2bf5ea3eeeeb5e7 as hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/cf/2a4062cbb1c7484da2bf5ea3eeeeb5e7 2023-07-23 10:14:30,707 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/cf/2a4062cbb1c7484da2bf5ea3eeeeb5e7, entries=2, sequenceid=5083, filesize=4.8 K 2023-07-23 10:14:30,708 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~20.81 KB/21312, heapSize ~64.98 KB/66544, currentSize=8.09 KB/8280 for f1d952fb54c89ff06ad39296e8b9a210 in 86ms, sequenceid=5083, compaction requested=true 2023-07-23 10:14:30,708 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for f1d952fb54c89ff06ad39296e8b9a210: 2023-07-23 10:14:30,709 DEBUG [MemStoreFlusher.0] regionserver.CompactSplit(385): Small Compaction requested: system; Because: MemStoreFlusher.0; compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-07-23 10:14:30,709 DEBUG [RS:0;jenkins-hbase4:46313-shortCompactions-0] compactions.SortedCompactionPolicy(75): 
Selecting compaction from 3 store files, 0 compacting, 3 eligible, 16 blocking 2023-07-23 10:14:30,711 DEBUG [RS:0;jenkins-hbase4:46313-shortCompactions-0] compactions.ExploringCompactionPolicy(116): Exploring compaction algorithm has selected 3 files of size 15270 starting at candidate #0 after considering 1 permutations with 1 in ratio 2023-07-23 10:14:30,711 DEBUG [RS:0;jenkins-hbase4:46313-shortCompactions-0] regionserver.HStore(1912): f1d952fb54c89ff06ad39296e8b9a210/cf is initiating minor compaction (all files) 2023-07-23 10:14:30,711 INFO [RS:0;jenkins-hbase4:46313-shortCompactions-0] regionserver.HRegion(2259): Starting compaction of f1d952fb54c89ff06ad39296e8b9a210/cf in TestCP,,1690107266243.f1d952fb54c89ff06ad39296e8b9a210. 2023-07-23 10:14:30,711 INFO [RS:0;jenkins-hbase4:46313-shortCompactions-0] regionserver.HStore(1468): Starting compaction of [hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/cf/7df8ee60b926462ab77378f458dcc518, hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/cf/7b4cae49d85d4f3391df25646eb4bfb6, hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/cf/2a4062cbb1c7484da2bf5ea3eeeeb5e7] into tmpdir=hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/.tmp, totalSize=14.9 K 2023-07-23 10:14:30,712 DEBUG [RS:0;jenkins-hbase4:46313-shortCompactions-0] compactions.Compactor(207): Compacting 7df8ee60b926462ab77378f458dcc518, keycount=2, bloomtype=ROW, size=5.3 K, encoding=NONE, compression=NONE, seqNum=4484, earliestPutTs=1730669841336320 2023-07-23 10:14:30,712 DEBUG [RS:0;jenkins-hbase4:46313-shortCompactions-0] compactions.Compactor(207): Compacting 7b4cae49d85d4f3391df25646eb4bfb6, keycount=2, 
bloomtype=ROW, size=4.8 K, encoding=NONE, compression=NONE, seqNum=4783, earliestPutTs=1730669844733952 2023-07-23 10:14:30,713 DEBUG [RS:0;jenkins-hbase4:46313-shortCompactions-0] compactions.Compactor(207): Compacting 2a4062cbb1c7484da2bf5ea3eeeeb5e7, keycount=2, bloomtype=ROW, size=4.8 K, encoding=NONE, compression=NONE, seqNum=5083, earliestPutTs=1730669844915200 2023-07-23 10:14:30,734 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-hbase.properties,hadoop-metrics2.properties 2023-07-23 10:14:30,760 INFO [RS:0;jenkins-hbase4:46313-shortCompactions-0] throttle.PressureAwareThroughputController(145): f1d952fb54c89ff06ad39296e8b9a210#cf#compaction#21 average throughput is unlimited, slept 0 time(s) and total slept time is 0 ms. 0 active operations remaining, total limit is 50.00 MB/second 2023-07-23 10:14:30,810 DEBUG [RS:0;jenkins-hbase4:46313-shortCompactions-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/.tmp/cf/b69c7ca42b2a4d9ba5dce9a3c25b1473 as hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/cf/b69c7ca42b2a4d9ba5dce9a3c25b1473 2023-07-23 10:14:30,824 INFO [RS:0;jenkins-hbase4:46313-shortCompactions-0] regionserver.HStore(1652): Completed compaction of 3 (all) file(s) in f1d952fb54c89ff06ad39296e8b9a210/cf of f1d952fb54c89ff06ad39296e8b9a210 into b69c7ca42b2a4d9ba5dce9a3c25b1473(size=5.4 K), total size for store is 5.4 K. This selection was in queue for 0sec, and took 0sec to execute. 
2023-07-23 10:14:30,824 DEBUG [RS:0;jenkins-hbase4:46313-shortCompactions-0] regionserver.HRegion(2289): Compaction status journal for f1d952fb54c89ff06ad39296e8b9a210: 2023-07-23 10:14:30,824 INFO [RS:0;jenkins-hbase4:46313-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(627): Completed compaction region=TestCP,,1690107266243.f1d952fb54c89ff06ad39296e8b9a210., storeName=f1d952fb54c89ff06ad39296e8b9a210/cf, priority=13, startTime=1690107270709; duration=0sec 2023-07-23 10:14:30,825 DEBUG [RS:0;jenkins-hbase4:46313-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(659): Status compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-07-23 10:14:30,827 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46313] regionserver.HRegion(9158): Flush requested on f1d952fb54c89ff06ad39296e8b9a210 2023-07-23 10:14:30,828 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing f1d952fb54c89ff06ad39296e8b9a210 1/1 column families, dataSize=20.53 KB heapSize=64.13 KB 2023-07-23 10:14:30,869 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(130): Registering adapter for the MetricRegistry: RegionServer,sub=Coprocessor.Region.CP_org.apache.hadoop.hbase.coprocessor.example.WriteHeavyIncrementObserver 2023-07-23 10:14:30,870 INFO [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(134): Registering RegionServer,sub=Coprocessor.Region.CP_org.apache.hadoop.hbase.coprocessor.example.WriteHeavyIncrementObserver Metrics about HBase RegionObservers 2023-07-23 10:14:30,871 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(130): Registering adapter for the MetricRegistry: RegionServer,sub=Coprocessor.Region.CP_org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-23 10:14:30,871 INFO [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(134): Registering RegionServer,sub=Coprocessor.Region.CP_org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint Metrics about HBase RegionObservers 2023-07-23 
10:14:30,880 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:meta' 2023-07-23 10:14:30,881 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:namespace' 2023-07-23 10:14:30,891 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=20.74 KB at sequenceid=5382 (bloomFilter=true), to=hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/.tmp/cf/7ace072f2f584801beba33c8c34735e3 2023-07-23 10:14:30,901 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/.tmp/cf/7ace072f2f584801beba33c8c34735e3 as hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/cf/7ace072f2f584801beba33c8c34735e3 2023-07-23 10:14:30,910 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/cf/7ace072f2f584801beba33c8c34735e3, entries=2, sequenceid=5382, filesize=4.8 K 2023-07-23 10:14:30,911 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~20.74 KB/21240, heapSize ~64.77 KB/66320, currentSize=14.41 KB/14760 for f1d952fb54c89ff06ad39296e8b9a210 in 84ms, sequenceid=5382, compaction requested=false 2023-07-23 10:14:30,911 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for f1d952fb54c89ff06ad39296e8b9a210: 2023-07-23 10:14:30,946 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing f1d952fb54c89ff06ad39296e8b9a210 1/1 column families, dataSize=20.60 KB heapSize=64.34 KB 2023-07-23 10:14:30,946 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=46313] regionserver.HRegion(9158): Flush requested on f1d952fb54c89ff06ad39296e8b9a210 2023-07-23 10:14:31,018 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=20.74 KB at sequenceid=5680 (bloomFilter=true), to=hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/.tmp/cf/e027356d026445a2bdc1f1d57d92318d 2023-07-23 10:14:31,033 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/.tmp/cf/e027356d026445a2bdc1f1d57d92318d as hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/cf/e027356d026445a2bdc1f1d57d92318d 2023-07-23 10:14:31,044 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/cf/e027356d026445a2bdc1f1d57d92318d, entries=2, sequenceid=5680, filesize=4.8 K 2023-07-23 10:14:31,045 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~20.74 KB/21240, heapSize ~64.77 KB/66320, currentSize=9.77 KB/10008 for f1d952fb54c89ff06ad39296e8b9a210 in 99ms, sequenceid=5680, compaction requested=true 2023-07-23 10:14:31,045 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for f1d952fb54c89ff06ad39296e8b9a210: 2023-07-23 10:14:31,045 DEBUG [MemStoreFlusher.0] regionserver.CompactSplit(385): Small Compaction requested: system; Because: MemStoreFlusher.0; compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-07-23 10:14:31,045 DEBUG [RS:0;jenkins-hbase4:46313-shortCompactions-0] compactions.SortedCompactionPolicy(75): Selecting compaction from 3 store 
files, 0 compacting, 3 eligible, 16 blocking 2023-07-23 10:14:31,049 DEBUG [RS:0;jenkins-hbase4:46313-shortCompactions-0] compactions.ExploringCompactionPolicy(116): Exploring compaction algorithm has selected 3 files of size 15372 starting at candidate #0 after considering 1 permutations with 1 in ratio 2023-07-23 10:14:31,049 DEBUG [RS:0;jenkins-hbase4:46313-shortCompactions-0] regionserver.HStore(1912): f1d952fb54c89ff06ad39296e8b9a210/cf is initiating minor compaction (all files) 2023-07-23 10:14:31,049 INFO [RS:0;jenkins-hbase4:46313-shortCompactions-0] regionserver.HRegion(2259): Starting compaction of f1d952fb54c89ff06ad39296e8b9a210/cf in TestCP,,1690107266243.f1d952fb54c89ff06ad39296e8b9a210. 2023-07-23 10:14:31,049 INFO [RS:0;jenkins-hbase4:46313-shortCompactions-0] regionserver.HStore(1468): Starting compaction of [hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/cf/b69c7ca42b2a4d9ba5dce9a3c25b1473, hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/cf/7ace072f2f584801beba33c8c34735e3, hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/cf/e027356d026445a2bdc1f1d57d92318d] into tmpdir=hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/.tmp, totalSize=15.0 K 2023-07-23 10:14:31,050 DEBUG [RS:0;jenkins-hbase4:46313-shortCompactions-0] compactions.Compactor(207): Compacting b69c7ca42b2a4d9ba5dce9a3c25b1473, keycount=2, bloomtype=ROW, size=5.4 K, encoding=NONE, compression=NONE, seqNum=5083, earliestPutTs=1730669841336320 2023-07-23 10:14:31,051 DEBUG [RS:0;jenkins-hbase4:46313-shortCompactions-0] compactions.Compactor(207): Compacting 7ace072f2f584801beba33c8c34735e3, keycount=2, bloomtype=ROW, size=4.8 K, encoding=NONE, 
compression=NONE, seqNum=5382, earliestPutTs=1730669845121024 2023-07-23 10:14:31,052 DEBUG [RS:0;jenkins-hbase4:46313-shortCompactions-0] compactions.Compactor(207): Compacting e027356d026445a2bdc1f1d57d92318d, keycount=2, bloomtype=ROW, size=4.8 K, encoding=NONE, compression=NONE, seqNum=5680, earliestPutTs=1730669845327873 2023-07-23 10:14:31,100 INFO [RS:0;jenkins-hbase4:46313-shortCompactions-0] throttle.PressureAwareThroughputController(145): f1d952fb54c89ff06ad39296e8b9a210#cf#compaction#24 average throughput is unlimited, slept 0 time(s) and total slept time is 0 ms. 0 active operations remaining, total limit is 50.00 MB/second 2023-07-23 10:14:31,122 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46313] regionserver.HRegion(9158): Flush requested on f1d952fb54c89ff06ad39296e8b9a210 2023-07-23 10:14:31,123 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing f1d952fb54c89ff06ad39296e8b9a210 1/1 column families, dataSize=20.53 KB heapSize=64.13 KB 2023-07-23 10:14:31,245 DEBUG [RS:0;jenkins-hbase4:46313-shortCompactions-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/.tmp/cf/5907376b87fc4f7c8995a5fbfaacb2c7 as hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/cf/5907376b87fc4f7c8995a5fbfaacb2c7 2023-07-23 10:14:31,252 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=20.74 KB at sequenceid=5978 (bloomFilter=true), to=hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/.tmp/cf/ed11ea2c5d6b466cbb553558a8a09563 2023-07-23 10:14:31,261 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing 
hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/.tmp/cf/ed11ea2c5d6b466cbb553558a8a09563 as hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/cf/ed11ea2c5d6b466cbb553558a8a09563 2023-07-23 10:14:31,265 INFO [RS:0;jenkins-hbase4:46313-shortCompactions-0] regionserver.HStore(1652): Completed compaction of 3 (all) file(s) in f1d952fb54c89ff06ad39296e8b9a210/cf of f1d952fb54c89ff06ad39296e8b9a210 into 5907376b87fc4f7c8995a5fbfaacb2c7(size=5.5 K), total size for store is 5.5 K. This selection was in queue for 0sec, and took 0sec to execute. 2023-07-23 10:14:31,265 DEBUG [RS:0;jenkins-hbase4:46313-shortCompactions-0] regionserver.HRegion(2289): Compaction status journal for f1d952fb54c89ff06ad39296e8b9a210: 2023-07-23 10:14:31,265 INFO [RS:0;jenkins-hbase4:46313-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(627): Completed compaction region=TestCP,,1690107266243.f1d952fb54c89ff06ad39296e8b9a210., storeName=f1d952fb54c89ff06ad39296e8b9a210/cf, priority=13, startTime=1690107271045; duration=0sec 2023-07-23 10:14:31,265 DEBUG [RS:0;jenkins-hbase4:46313-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(659): Status compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-07-23 10:14:31,277 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/cf/ed11ea2c5d6b466cbb553558a8a09563, entries=2, sequenceid=5978, filesize=4.8 K 2023-07-23 10:14:31,278 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~20.74 KB/21240, heapSize ~64.77 KB/66320, currentSize=15.68 KB/16056 for f1d952fb54c89ff06ad39296e8b9a210 in 155ms, sequenceid=5978, compaction requested=false 2023-07-23 10:14:31,278 DEBUG 
[MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for f1d952fb54c89ff06ad39296e8b9a210: 2023-07-23 10:14:31,311 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46313] regionserver.HRegion(9158): Flush requested on f1d952fb54c89ff06ad39296e8b9a210 2023-07-23 10:14:31,311 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing f1d952fb54c89ff06ad39296e8b9a210 1/1 column families, dataSize=20.53 KB heapSize=64.13 KB 2023-07-23 10:14:31,769 WARN [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=46313] regionserver.HRegion(4965): Region is too busy due to exceeding memstore size limit. org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=256.0 K, regionName=f1d952fb54c89ff06ad39296e8b9a210, server=jenkins-hbase4.apache.org,46313,1690107262103 at org.apache.hadoop.hbase.regionserver.HRegion.checkResources(HRegion.java:4963) at org.apache.hadoop.hbase.regionserver.HRegion.increment(HRegion.java:8417) at org.apache.hadoop.hbase.regionserver.RSRpcServices.increment(RSRpcServices.java:713) at org.apache.hadoop.hbase.regionserver.RSRpcServices.mutate(RSRpcServices.java:2966) at org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:44994) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-23 10:14:31,769 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46313] regionserver.HRegion(4965): Region is too busy due to exceeding memstore size limit. 
org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=256.0 K, regionName=f1d952fb54c89ff06ad39296e8b9a210, server=jenkins-hbase4.apache.org,46313,1690107262103 at org.apache.hadoop.hbase.regionserver.HRegion.checkResources(HRegion.java:4963) at org.apache.hadoop.hbase.regionserver.HRegion.increment(HRegion.java:8417) at org.apache.hadoop.hbase.regionserver.RSRpcServices.increment(RSRpcServices.java:713) at org.apache.hadoop.hbase.regionserver.RSRpcServices.mutate(RSRpcServices.java:2966) at org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:44994) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-23 10:14:31,769 WARN [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=46313] regionserver.HRegion(4965): Region is too busy due to exceeding memstore size limit. 
org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=256.0 K, regionName=f1d952fb54c89ff06ad39296e8b9a210, server=jenkins-hbase4.apache.org,46313,1690107262103 at org.apache.hadoop.hbase.regionserver.HRegion.checkResources(HRegion.java:4963) at org.apache.hadoop.hbase.regionserver.HRegion.increment(HRegion.java:8417) at org.apache.hadoop.hbase.regionserver.RSRpcServices.increment(RSRpcServices.java:713) at org.apache.hadoop.hbase.regionserver.RSRpcServices.mutate(RSRpcServices.java:2966) at org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:44994) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-23 10:14:31,769 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46313] ipc.CallRunner(144): callId: 7140 service: ClientService methodName: Mutate size: 199 connection: 172.31.14.131:36996 deadline: 1690107331764, exception=org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=256.0 K, regionName=f1d952fb54c89ff06ad39296e8b9a210, server=jenkins-hbase4.apache.org,46313,1690107262103 2023-07-23 10:14:31,769 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=46313] ipc.CallRunner(144): callId: 7139 service: ClientService methodName: Mutate size: 200 connection: 172.31.14.131:36996 deadline: 1690107331764, exception=org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=256.0 K, regionName=f1d952fb54c89ff06ad39296e8b9a210, server=jenkins-hbase4.apache.org,46313,1690107262103 2023-07-23 10:14:31,770 WARN [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=46313] regionserver.HRegion(4965): Region is too busy due to exceeding memstore size limit. 
org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=256.0 K, regionName=f1d952fb54c89ff06ad39296e8b9a210, server=jenkins-hbase4.apache.org,46313,1690107262103 at org.apache.hadoop.hbase.regionserver.HRegion.checkResources(HRegion.java:4963) at org.apache.hadoop.hbase.regionserver.HRegion.increment(HRegion.java:8417) at org.apache.hadoop.hbase.regionserver.RSRpcServices.increment(RSRpcServices.java:713) at org.apache.hadoop.hbase.regionserver.RSRpcServices.mutate(RSRpcServices.java:2966) at org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:44994) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-23 10:14:31,770 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=46313] ipc.CallRunner(144): callId: 7142 service: ClientService methodName: Mutate size: 200 connection: 172.31.14.131:36996 deadline: 1690107331769, exception=org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=256.0 K, regionName=f1d952fb54c89ff06ad39296e8b9a210, server=jenkins-hbase4.apache.org,46313,1690107262103 2023-07-23 10:14:31,770 WARN [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=46313] regionserver.HRegion(4965): Region is too busy due to exceeding memstore size limit. 
org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=256.0 K, regionName=f1d952fb54c89ff06ad39296e8b9a210, server=jenkins-hbase4.apache.org,46313,1690107262103 at org.apache.hadoop.hbase.regionserver.HRegion.checkResources(HRegion.java:4963) at org.apache.hadoop.hbase.regionserver.HRegion.increment(HRegion.java:8417) at org.apache.hadoop.hbase.regionserver.RSRpcServices.increment(RSRpcServices.java:713) at org.apache.hadoop.hbase.regionserver.RSRpcServices.mutate(RSRpcServices.java:2966) at org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:44994) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-23 10:14:31,770 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=46313] ipc.CallRunner(144): callId: 7144 service: ClientService methodName: Mutate size: 199 connection: 172.31.14.131:36996 deadline: 1690107331769, exception=org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=256.0 K, regionName=f1d952fb54c89ff06ad39296e8b9a210, server=jenkins-hbase4.apache.org,46313,1690107262103 2023-07-23 10:14:31,770 WARN [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=46313] regionserver.HRegion(4965): Region is too busy due to exceeding memstore size limit. 
org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=256.0 K, regionName=f1d952fb54c89ff06ad39296e8b9a210, server=jenkins-hbase4.apache.org,46313,1690107262103 at org.apache.hadoop.hbase.regionserver.HRegion.checkResources(HRegion.java:4963) at org.apache.hadoop.hbase.regionserver.HRegion.increment(HRegion.java:8417) at org.apache.hadoop.hbase.regionserver.RSRpcServices.increment(RSRpcServices.java:713) at org.apache.hadoop.hbase.regionserver.RSRpcServices.mutate(RSRpcServices.java:2966) at org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:44994) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-23 10:14:31,770 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=46313] ipc.CallRunner(144): callId: 7145 service: ClientService methodName: Mutate size: 199 connection: 172.31.14.131:36996 deadline: 1690107331769, exception=org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=256.0 K, regionName=f1d952fb54c89ff06ad39296e8b9a210, server=jenkins-hbase4.apache.org,46313,1690107262103 2023-07-23 10:14:31,771 WARN [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=46313] regionserver.HRegion(4965): Region is too busy due to exceeding memstore size limit. 
org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=256.0 K, regionName=f1d952fb54c89ff06ad39296e8b9a210, server=jenkins-hbase4.apache.org,46313,1690107262103 at org.apache.hadoop.hbase.regionserver.HRegion.checkResources(HRegion.java:4963) at org.apache.hadoop.hbase.regionserver.HRegion.increment(HRegion.java:8417) at org.apache.hadoop.hbase.regionserver.RSRpcServices.increment(RSRpcServices.java:713) at org.apache.hadoop.hbase.regionserver.RSRpcServices.mutate(RSRpcServices.java:2966) at org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:44994) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-23 10:14:31,771 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=46313] ipc.CallRunner(144): callId: 7146 service: ClientService methodName: Mutate size: 200 connection: 172.31.14.131:36996 deadline: 1690107331769, exception=org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=256.0 K, regionName=f1d952fb54c89ff06ad39296e8b9a210, server=jenkins-hbase4.apache.org,46313,1690107262103 2023-07-23 10:14:31,771 WARN [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=46313] regionserver.HRegion(4965): Region is too busy due to exceeding memstore size limit. 
org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=256.0 K, regionName=f1d952fb54c89ff06ad39296e8b9a210, server=jenkins-hbase4.apache.org,46313,1690107262103 at org.apache.hadoop.hbase.regionserver.HRegion.checkResources(HRegion.java:4963) at org.apache.hadoop.hbase.regionserver.HRegion.increment(HRegion.java:8417) at org.apache.hadoop.hbase.regionserver.RSRpcServices.increment(RSRpcServices.java:713) at org.apache.hadoop.hbase.regionserver.RSRpcServices.mutate(RSRpcServices.java:2966) at org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:44994) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-23 10:14:31,771 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=46313] ipc.CallRunner(144): callId: 7147 service: ClientService methodName: Mutate size: 199 connection: 172.31.14.131:36996 deadline: 1690107331769, exception=org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=256.0 K, regionName=f1d952fb54c89ff06ad39296e8b9a210, server=jenkins-hbase4.apache.org,46313,1690107262103 2023-07-23 10:14:31,771 WARN [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=46313] regionserver.HRegion(4965): Region is too busy due to exceeding memstore size limit. 
org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=256.0 K, regionName=f1d952fb54c89ff06ad39296e8b9a210, server=jenkins-hbase4.apache.org,46313,1690107262103 at org.apache.hadoop.hbase.regionserver.HRegion.checkResources(HRegion.java:4963) at org.apache.hadoop.hbase.regionserver.HRegion.increment(HRegion.java:8417) at org.apache.hadoop.hbase.regionserver.RSRpcServices.increment(RSRpcServices.java:713) at org.apache.hadoop.hbase.regionserver.RSRpcServices.mutate(RSRpcServices.java:2966) at org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:44994) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-23 10:14:31,772 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=46313] ipc.CallRunner(144): callId: 7148 service: ClientService methodName: Mutate size: 200 connection: 172.31.14.131:36996 deadline: 1690107331769, exception=org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=256.0 K, regionName=f1d952fb54c89ff06ad39296e8b9a210, server=jenkins-hbase4.apache.org,46313,1690107262103 2023-07-23 10:14:31,769 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=46313] ipc.CallRunner(144): callId: 7141 service: ClientService methodName: Mutate size: 200 connection: 172.31.14.131:36996 deadline: 1690107331768, exception=org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=256.0 K, regionName=f1d952fb54c89ff06ad39296e8b9a210, server=jenkins-hbase4.apache.org,46313,1690107262103 2023-07-23 10:14:31,770 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46313] regionserver.HRegion(4965): Region is too busy due to exceeding memstore size limit. 
org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=256.0 K, regionName=f1d952fb54c89ff06ad39296e8b9a210, server=jenkins-hbase4.apache.org,46313,1690107262103 at org.apache.hadoop.hbase.regionserver.HRegion.checkResources(HRegion.java:4963) at org.apache.hadoop.hbase.regionserver.HRegion.increment(HRegion.java:8417) at org.apache.hadoop.hbase.regionserver.RSRpcServices.increment(RSRpcServices.java:713) at org.apache.hadoop.hbase.regionserver.RSRpcServices.mutate(RSRpcServices.java:2966) at org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:44994) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-23 10:14:31,772 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46313] ipc.CallRunner(144): callId: 7143 service: ClientService methodName: Mutate size: 200 connection: 172.31.14.131:36996 deadline: 1690107331769, exception=org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=256.0 K, regionName=f1d952fb54c89ff06ad39296e8b9a210, server=jenkins-hbase4.apache.org,46313,1690107262103 2023-07-23 10:14:31,810 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=20.60 KB at sequenceid=6275 (bloomFilter=true), to=hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/.tmp/cf/19faabe2809f4b2c81004775e2d64865 2023-07-23 10:14:31,823 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/.tmp/cf/19faabe2809f4b2c81004775e2d64865 as 
hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/cf/19faabe2809f4b2c81004775e2d64865 2023-07-23 10:14:31,837 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/cf/19faabe2809f4b2c81004775e2d64865, entries=2, sequenceid=6275, filesize=4.8 K 2023-07-23 10:14:31,838 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~20.60 KB/21096, heapSize ~64.33 KB/65872, currentSize=61.73 KB/63216 for f1d952fb54c89ff06ad39296e8b9a210 in 527ms, sequenceid=6275, compaction requested=true 2023-07-23 10:14:31,838 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for f1d952fb54c89ff06ad39296e8b9a210: 2023-07-23 10:14:31,838 DEBUG [MemStoreFlusher.0] regionserver.CompactSplit(385): Small Compaction requested: system; Because: MemStoreFlusher.0; compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-07-23 10:14:31,838 DEBUG [RS:0;jenkins-hbase4:46313-shortCompactions-0] compactions.SortedCompactionPolicy(75): Selecting compaction from 3 store files, 0 compacting, 3 eligible, 16 blocking 2023-07-23 10:14:31,840 DEBUG [RS:0;jenkins-hbase4:46313-shortCompactions-0] compactions.ExploringCompactionPolicy(116): Exploring compaction algorithm has selected 3 files of size 15474 starting at candidate #0 after considering 1 permutations with 1 in ratio 2023-07-23 10:14:31,840 DEBUG [RS:0;jenkins-hbase4:46313-shortCompactions-0] regionserver.HStore(1912): f1d952fb54c89ff06ad39296e8b9a210/cf is initiating minor compaction (all files) 2023-07-23 10:14:31,840 INFO [RS:0;jenkins-hbase4:46313-shortCompactions-0] regionserver.HRegion(2259): Starting compaction of f1d952fb54c89ff06ad39296e8b9a210/cf in TestCP,,1690107266243.f1d952fb54c89ff06ad39296e8b9a210. 
2023-07-23 10:14:31,840 INFO [RS:0;jenkins-hbase4:46313-shortCompactions-0] regionserver.HStore(1468): Starting compaction of [hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/cf/5907376b87fc4f7c8995a5fbfaacb2c7, hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/cf/ed11ea2c5d6b466cbb553558a8a09563, hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/cf/19faabe2809f4b2c81004775e2d64865] into tmpdir=hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/.tmp, totalSize=15.1 K 2023-07-23 10:14:31,841 DEBUG [RS:0;jenkins-hbase4:46313-shortCompactions-0] compactions.Compactor(207): Compacting 5907376b87fc4f7c8995a5fbfaacb2c7, keycount=2, bloomtype=ROW, size=5.5 K, encoding=NONE, compression=NONE, seqNum=5680, earliestPutTs=1730669841336320 2023-07-23 10:14:31,841 DEBUG [RS:0;jenkins-hbase4:46313-shortCompactions-0] compactions.Compactor(207): Compacting ed11ea2c5d6b466cbb553558a8a09563, keycount=2, bloomtype=ROW, size=4.8 K, encoding=NONE, compression=NONE, seqNum=5978, earliestPutTs=1730669845448705 2023-07-23 10:14:31,842 DEBUG [RS:0;jenkins-hbase4:46313-shortCompactions-0] compactions.Compactor(207): Compacting 19faabe2809f4b2c81004775e2d64865, keycount=2, bloomtype=ROW, size=4.8 K, encoding=NONE, compression=NONE, seqNum=6275, earliestPutTs=1730669845635072 2023-07-23 10:14:31,871 INFO [RS:0;jenkins-hbase4:46313-shortCompactions-0] throttle.PressureAwareThroughputController(145): f1d952fb54c89ff06ad39296e8b9a210#cf#compaction#27 average throughput is unlimited, slept 0 time(s) and total slept time is 0 ms. 
0 active operations remaining, total limit is 50.00 MB/second 2023-07-23 10:14:31,878 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=46313] regionserver.HRegion(9158): Flush requested on f1d952fb54c89ff06ad39296e8b9a210 2023-07-23 10:14:31,879 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing f1d952fb54c89ff06ad39296e8b9a210 1/1 column families, dataSize=61.88 KB heapSize=192.75 KB 2023-07-23 10:14:31,999 DEBUG [RS:0;jenkins-hbase4:46313-shortCompactions-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/.tmp/cf/02573aab11aa413ebe60b8746b415589 as hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/cf/02573aab11aa413ebe60b8746b415589 2023-07-23 10:14:32,030 INFO [RS:0;jenkins-hbase4:46313-shortCompactions-0] regionserver.HStore(1652): Completed compaction of 3 (all) file(s) in f1d952fb54c89ff06ad39296e8b9a210/cf of f1d952fb54c89ff06ad39296e8b9a210 into 02573aab11aa413ebe60b8746b415589(size=5.6 K), total size for store is 5.6 K. This selection was in queue for 0sec, and took 0sec to execute. 
2023-07-23 10:14:32,030 DEBUG [RS:0;jenkins-hbase4:46313-shortCompactions-0] regionserver.HRegion(2289): Compaction status journal for f1d952fb54c89ff06ad39296e8b9a210:
2023-07-23 10:14:32,030 INFO [RS:0;jenkins-hbase4:46313-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(627): Completed compaction region=TestCP,,1690107266243.f1d952fb54c89ff06ad39296e8b9a210., storeName=f1d952fb54c89ff06ad39296e8b9a210/cf, priority=13, startTime=1690107271838; duration=0sec
2023-07-23 10:14:32,031 DEBUG [RS:0;jenkins-hbase4:46313-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(659): Status compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0
2023-07-23 10:14:32,032 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=61.95 KB at sequenceid=7159 (bloomFilter=true), to=hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/.tmp/cf/a28a3e07d93f472bb2dd0a8294e40979
2023-07-23 10:14:32,042 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/.tmp/cf/a28a3e07d93f472bb2dd0a8294e40979 as hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/cf/a28a3e07d93f472bb2dd0a8294e40979
2023-07-23 10:14:32,047 WARN [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=46313] regionserver.HRegion(4965): Region is too busy due to exceeding memstore size limit.
org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=256.0 K, regionName=f1d952fb54c89ff06ad39296e8b9a210, server=jenkins-hbase4.apache.org,46313,1690107262103
	at org.apache.hadoop.hbase.regionserver.HRegion.checkResources(HRegion.java:4963)
	at org.apache.hadoop.hbase.regionserver.HRegion.increment(HRegion.java:8417)
	at org.apache.hadoop.hbase.regionserver.RSRpcServices.increment(RSRpcServices.java:713)
	at org.apache.hadoop.hbase.regionserver.RSRpcServices.mutate(RSRpcServices.java:2966)
	at org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:44994)
	at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387)
	at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132)
	at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369)
	at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349)
2023-07-23 10:14:32,047 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=46313] ipc.CallRunner(144): callId: 7452 service: ClientService methodName: Mutate size: 199 connection: 172.31.14.131:36996 deadline: 1690107332045, exception=org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=256.0 K, regionName=f1d952fb54c89ff06ad39296e8b9a210, server=jenkins-hbase4.apache.org,46313,1690107262103
2023-07-23 10:14:32,048 WARN [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=46313] regionserver.HRegion(4965): Region is too busy due to exceeding memstore size limit.
org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=256.0 K, regionName=f1d952fb54c89ff06ad39296e8b9a210, server=jenkins-hbase4.apache.org,46313,1690107262103
	at org.apache.hadoop.hbase.regionserver.HRegion.checkResources(HRegion.java:4963)
	at org.apache.hadoop.hbase.regionserver.HRegion.increment(HRegion.java:8417)
	at org.apache.hadoop.hbase.regionserver.RSRpcServices.increment(RSRpcServices.java:713)
	at org.apache.hadoop.hbase.regionserver.RSRpcServices.mutate(RSRpcServices.java:2966)
	at org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:44994)
	at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387)
	at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132)
	at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369)
	at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349)
2023-07-23 10:14:32,048 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=46313] ipc.CallRunner(144): callId: 7454 service: ClientService methodName: Mutate size: 199 connection: 172.31.14.131:36996 deadline: 1690107332048, exception=org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=256.0 K, regionName=f1d952fb54c89ff06ad39296e8b9a210, server=jenkins-hbase4.apache.org,46313,1690107262103
2023-07-23 10:14:32,048 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46313] regionserver.HRegion(4965): Region is too busy due to exceeding memstore size limit.
org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=256.0 K, regionName=f1d952fb54c89ff06ad39296e8b9a210, server=jenkins-hbase4.apache.org,46313,1690107262103
	at org.apache.hadoop.hbase.regionserver.HRegion.checkResources(HRegion.java:4963)
	at org.apache.hadoop.hbase.regionserver.HRegion.increment(HRegion.java:8417)
	at org.apache.hadoop.hbase.regionserver.RSRpcServices.increment(RSRpcServices.java:713)
	at org.apache.hadoop.hbase.regionserver.RSRpcServices.mutate(RSRpcServices.java:2966)
	at org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:44994)
	at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387)
	at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132)
	at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369)
	at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349)
2023-07-23 10:14:32,049 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46313] ipc.CallRunner(144): callId: 7453 service: ClientService methodName: Mutate size: 199 connection: 172.31.14.131:36996 deadline: 1690107332048, exception=org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=256.0 K, regionName=f1d952fb54c89ff06ad39296e8b9a210, server=jenkins-hbase4.apache.org,46313,1690107262103
2023-07-23 10:14:32,049 WARN [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=46313] regionserver.HRegion(4965): Region is too busy due to exceeding memstore size limit.
org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=256.0 K, regionName=f1d952fb54c89ff06ad39296e8b9a210, server=jenkins-hbase4.apache.org,46313,1690107262103
	at org.apache.hadoop.hbase.regionserver.HRegion.checkResources(HRegion.java:4963)
	at org.apache.hadoop.hbase.regionserver.HRegion.increment(HRegion.java:8417)
	at org.apache.hadoop.hbase.regionserver.RSRpcServices.increment(RSRpcServices.java:713)
	at org.apache.hadoop.hbase.regionserver.RSRpcServices.mutate(RSRpcServices.java:2966)
	at org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:44994)
	at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387)
	at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132)
	at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369)
	at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349)
2023-07-23 10:14:32,049 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=46313] ipc.CallRunner(144): callId: 7457 service: ClientService methodName: Mutate size: 199 connection: 172.31.14.131:36996 deadline: 1690107332049, exception=org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=256.0 K, regionName=f1d952fb54c89ff06ad39296e8b9a210, server=jenkins-hbase4.apache.org,46313,1690107262103
2023-07-23 10:14:32,050 WARN [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=46313] regionserver.HRegion(4965): Region is too busy due to exceeding memstore size limit.
org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=256.0 K, regionName=f1d952fb54c89ff06ad39296e8b9a210, server=jenkins-hbase4.apache.org,46313,1690107262103
	at org.apache.hadoop.hbase.regionserver.HRegion.checkResources(HRegion.java:4963)
	at org.apache.hadoop.hbase.regionserver.HRegion.increment(HRegion.java:8417)
	at org.apache.hadoop.hbase.regionserver.RSRpcServices.increment(RSRpcServices.java:713)
	at org.apache.hadoop.hbase.regionserver.RSRpcServices.mutate(RSRpcServices.java:2966)
	at org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:44994)
	at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387)
	at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132)
	at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369)
	at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349)
2023-07-23 10:14:32,050 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=46313] ipc.CallRunner(144): callId: 7458 service: ClientService methodName: Mutate size: 199 connection: 172.31.14.131:36996 deadline: 1690107332049, exception=org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=256.0 K, regionName=f1d952fb54c89ff06ad39296e8b9a210, server=jenkins-hbase4.apache.org,46313,1690107262103
2023-07-23 10:14:32,050 WARN [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=46313] regionserver.HRegion(4965): Region is too busy due to exceeding memstore size limit.
org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=256.0 K, regionName=f1d952fb54c89ff06ad39296e8b9a210, server=jenkins-hbase4.apache.org,46313,1690107262103
	at org.apache.hadoop.hbase.regionserver.HRegion.checkResources(HRegion.java:4963)
	at org.apache.hadoop.hbase.regionserver.HRegion.increment(HRegion.java:8417)
	at org.apache.hadoop.hbase.regionserver.RSRpcServices.increment(RSRpcServices.java:713)
	at org.apache.hadoop.hbase.regionserver.RSRpcServices.mutate(RSRpcServices.java:2966)
	at org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:44994)
	at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387)
	at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132)
	at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369)
	at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349)
2023-07-23 10:14:32,050 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=46313] ipc.CallRunner(144): callId: 7459 service: ClientService methodName: Mutate size: 200 connection: 172.31.14.131:36996 deadline: 1690107332049, exception=org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=256.0 K, regionName=f1d952fb54c89ff06ad39296e8b9a210, server=jenkins-hbase4.apache.org,46313,1690107262103
2023-07-23 10:14:32,051 WARN [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=46313] regionserver.HRegion(4965): Region is too busy due to exceeding memstore size limit.
org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=256.0 K, regionName=f1d952fb54c89ff06ad39296e8b9a210, server=jenkins-hbase4.apache.org,46313,1690107262103
	at org.apache.hadoop.hbase.regionserver.HRegion.checkResources(HRegion.java:4963)
	at org.apache.hadoop.hbase.regionserver.HRegion.increment(HRegion.java:8417)
	at org.apache.hadoop.hbase.regionserver.RSRpcServices.increment(RSRpcServices.java:713)
	at org.apache.hadoop.hbase.regionserver.RSRpcServices.mutate(RSRpcServices.java:2966)
	at org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:44994)
	at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387)
	at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132)
	at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369)
	at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349)
2023-07-23 10:14:32,051 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/cf/a28a3e07d93f472bb2dd0a8294e40979, entries=2, sequenceid=7159, filesize=4.8 K
2023-07-23 10:14:32,051 WARN [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=46313] regionserver.HRegion(4965): Region is too busy due to exceeding memstore size limit.
org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=256.0 K, regionName=f1d952fb54c89ff06ad39296e8b9a210, server=jenkins-hbase4.apache.org,46313,1690107262103
	at org.apache.hadoop.hbase.regionserver.HRegion.checkResources(HRegion.java:4963)
	at org.apache.hadoop.hbase.regionserver.HRegion.increment(HRegion.java:8417)
	at org.apache.hadoop.hbase.regionserver.RSRpcServices.increment(RSRpcServices.java:713)
	at org.apache.hadoop.hbase.regionserver.RSRpcServices.mutate(RSRpcServices.java:2966)
	at org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:44994)
	at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387)
	at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132)
	at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369)
	at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349)
2023-07-23 10:14:32,051 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=46313] ipc.CallRunner(144): callId: 7456 service: ClientService methodName: Mutate size: 200 connection: 172.31.14.131:36996 deadline: 1690107332049, exception=org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=256.0 K, regionName=f1d952fb54c89ff06ad39296e8b9a210, server=jenkins-hbase4.apache.org,46313,1690107262103
2023-07-23 10:14:32,051 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=46313] ipc.CallRunner(144): callId: 7460 service: ClientService methodName: Mutate size: 200 connection: 172.31.14.131:36996 deadline: 1690107332049, exception=org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=256.0 K, regionName=f1d952fb54c89ff06ad39296e8b9a210, server=jenkins-hbase4.apache.org,46313,1690107262103
2023-07-23 10:14:32,052 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~61.95 KB/63432, heapSize ~192.95 KB/197584, currentSize=20.39 KB/20880 for f1d952fb54c89ff06ad39296e8b9a210 in 174ms, sequenceid=7159, compaction requested=false
2023-07-23 10:14:32,052 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for f1d952fb54c89ff06ad39296e8b9a210:
2023-07-23 10:14:32,054 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46313] regionserver.HRegion(9158): Flush requested on f1d952fb54c89ff06ad39296e8b9a210
2023-07-23 10:14:32,054 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing f1d952fb54c89ff06ad39296e8b9a210 1/1 column families, dataSize=20.67 KB heapSize=64.56 KB
2023-07-23 10:14:32,112 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=20.67 KB at sequenceid=7457 (bloomFilter=true), to=hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/.tmp/cf/8bad4357f3564f9e94febcb32e52f610
2023-07-23 10:14:32,128 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/.tmp/cf/8bad4357f3564f9e94febcb32e52f610 as hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/cf/8bad4357f3564f9e94febcb32e52f610
2023-07-23 10:14:32,136 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/cf/8bad4357f3564f9e94febcb32e52f610, entries=2, sequenceid=7457, filesize=4.8 K
2023-07-23 10:14:32,137 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~20.67 KB/21168, heapSize ~64.55 KB/66096, currentSize=5.98 KB/6120 for f1d952fb54c89ff06ad39296e8b9a210 in 83ms, sequenceid=7457, compaction requested=true
2023-07-23 10:14:32,137 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for f1d952fb54c89ff06ad39296e8b9a210:
2023-07-23 10:14:32,137 DEBUG [RS:0;jenkins-hbase4:46313-shortCompactions-0] compactions.SortedCompactionPolicy(75): Selecting compaction from 3 store files, 0 compacting, 3 eligible, 16 blocking
2023-07-23 10:14:32,137 DEBUG [MemStoreFlusher.0] regionserver.CompactSplit(385): Small Compaction requested: system; Because: MemStoreFlusher.0; compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0
2023-07-23 10:14:32,139 DEBUG [RS:0;jenkins-hbase4:46313-shortCompactions-0] compactions.ExploringCompactionPolicy(116): Exploring compaction algorithm has selected 3 files of size 15576 starting at candidate #0 after considering 1 permutations with 1 in ratio
2023-07-23 10:14:32,139 DEBUG [RS:0;jenkins-hbase4:46313-shortCompactions-0] regionserver.HStore(1912): f1d952fb54c89ff06ad39296e8b9a210/cf is initiating minor compaction (all files)
2023-07-23 10:14:32,139 INFO [RS:0;jenkins-hbase4:46313-shortCompactions-0] regionserver.HRegion(2259): Starting compaction of f1d952fb54c89ff06ad39296e8b9a210/cf in TestCP,,1690107266243.f1d952fb54c89ff06ad39296e8b9a210.
2023-07-23 10:14:32,139 INFO [RS:0;jenkins-hbase4:46313-shortCompactions-0] regionserver.HStore(1468): Starting compaction of [hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/cf/02573aab11aa413ebe60b8746b415589, hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/cf/a28a3e07d93f472bb2dd0a8294e40979, hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/cf/8bad4357f3564f9e94febcb32e52f610] into tmpdir=hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/.tmp, totalSize=15.2 K
2023-07-23 10:14:32,140 DEBUG [RS:0;jenkins-hbase4:46313-shortCompactions-0] compactions.Compactor(207): Compacting 02573aab11aa413ebe60b8746b415589, keycount=2, bloomtype=ROW, size=5.6 K, encoding=NONE, compression=NONE, seqNum=6275, earliestPutTs=1730669841336320
2023-07-23 10:14:32,140 DEBUG [RS:0;jenkins-hbase4:46313-shortCompactions-0] compactions.Compactor(207): Compacting a28a3e07d93f472bb2dd0a8294e40979, keycount=2, bloomtype=ROW, size=4.8 K, encoding=NONE, compression=NONE, seqNum=7159, earliestPutTs=1730669845826560
2023-07-23 10:14:32,141 DEBUG [RS:0;jenkins-hbase4:46313-shortCompactions-0] compactions.Compactor(207): Compacting 8bad4357f3564f9e94febcb32e52f610, keycount=2, bloomtype=ROW, size=4.8 K, encoding=NONE, compression=NONE, seqNum=7457, earliestPutTs=1730669846406144
2023-07-23 10:14:32,195 INFO [RS:0;jenkins-hbase4:46313-shortCompactions-0] throttle.PressureAwareThroughputController(145): f1d952fb54c89ff06ad39296e8b9a210#cf#compaction#30 average throughput is unlimited, slept 0 time(s) and total slept time is 0 ms. 0 active operations remaining, total limit is 50.00 MB/second
2023-07-23 10:14:32,246 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=46313] regionserver.HRegion(9158): Flush requested on f1d952fb54c89ff06ad39296e8b9a210
2023-07-23 10:14:32,246 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing f1d952fb54c89ff06ad39296e8b9a210 1/1 column families, dataSize=20.53 KB heapSize=64.13 KB
2023-07-23 10:14:32,331 DEBUG [RS:0;jenkins-hbase4:46313-shortCompactions-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/.tmp/cf/7d29403460974928bf121ae00ee92d72 as hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/cf/7d29403460974928bf121ae00ee92d72
2023-07-23 10:14:32,359 INFO [RS:0;jenkins-hbase4:46313-shortCompactions-0] regionserver.HStore(1652): Completed compaction of 3 (all) file(s) in f1d952fb54c89ff06ad39296e8b9a210/cf of f1d952fb54c89ff06ad39296e8b9a210 into 7d29403460974928bf121ae00ee92d72(size=5.7 K), total size for store is 5.7 K. This selection was in queue for 0sec, and took 0sec to execute.
2023-07-23 10:14:32,359 DEBUG [RS:0;jenkins-hbase4:46313-shortCompactions-0] regionserver.HRegion(2289): Compaction status journal for f1d952fb54c89ff06ad39296e8b9a210:
2023-07-23 10:14:32,360 INFO [RS:0;jenkins-hbase4:46313-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(627): Completed compaction region=TestCP,,1690107266243.f1d952fb54c89ff06ad39296e8b9a210., storeName=f1d952fb54c89ff06ad39296e8b9a210/cf, priority=13, startTime=1690107272137; duration=0sec
2023-07-23 10:14:32,360 DEBUG [RS:0;jenkins-hbase4:46313-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(659): Status compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0
2023-07-23 10:14:32,381 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=20.81 KB at sequenceid=7756 (bloomFilter=true), to=hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/.tmp/cf/40fbdb49b2c04dccb344da4b1c8b2b89
2023-07-23 10:14:32,391 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/.tmp/cf/40fbdb49b2c04dccb344da4b1c8b2b89 as hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/cf/40fbdb49b2c04dccb344da4b1c8b2b89
2023-07-23 10:14:32,401 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/cf/40fbdb49b2c04dccb344da4b1c8b2b89, entries=2, sequenceid=7756, filesize=4.8 K
2023-07-23 10:14:32,403 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~20.81 KB/21312, heapSize ~64.98 KB/66544, currentSize=11.04 KB/11304 for f1d952fb54c89ff06ad39296e8b9a210 in 157ms, sequenceid=7756, compaction requested=false
2023-07-23 10:14:32,403 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for f1d952fb54c89ff06ad39296e8b9a210:
2023-07-23 10:14:32,489 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=46313] regionserver.HRegion(9158): Flush requested on f1d952fb54c89ff06ad39296e8b9a210
2023-07-23 10:14:32,489 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing f1d952fb54c89ff06ad39296e8b9a210 1/1 column families, dataSize=20.60 KB heapSize=64.34 KB
2023-07-23 10:14:32,564 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=20.81 KB at sequenceid=8056 (bloomFilter=true), to=hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/.tmp/cf/9966ca9d07d84001b7315bdd59b99935
2023-07-23 10:14:32,573 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/.tmp/cf/9966ca9d07d84001b7315bdd59b99935 as hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/cf/9966ca9d07d84001b7315bdd59b99935
2023-07-23 10:14:32,596 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/cf/9966ca9d07d84001b7315bdd59b99935, entries=2, sequenceid=8056, filesize=4.8 K
2023-07-23 10:14:32,597 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~20.81 KB/21312, heapSize ~64.98 KB/66544, currentSize=16.95 KB/17352 for f1d952fb54c89ff06ad39296e8b9a210 in 108ms, sequenceid=8056, compaction requested=true
2023-07-23 10:14:32,597 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for f1d952fb54c89ff06ad39296e8b9a210:
2023-07-23 10:14:32,597 DEBUG [MemStoreFlusher.0] regionserver.CompactSplit(385): Small Compaction requested: system; Because: MemStoreFlusher.0; compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0
2023-07-23 10:14:32,597 DEBUG [RS:0;jenkins-hbase4:46313-shortCompactions-0] compactions.SortedCompactionPolicy(75): Selecting compaction from 3 store files, 0 compacting, 3 eligible, 16 blocking
2023-07-23 10:14:32,600 DEBUG [RS:0;jenkins-hbase4:46313-shortCompactions-0] compactions.ExploringCompactionPolicy(116): Exploring compaction algorithm has selected 3 files of size 15678 starting at candidate #0 after considering 1 permutations with 1 in ratio
2023-07-23 10:14:32,600 DEBUG [RS:0;jenkins-hbase4:46313-shortCompactions-0] regionserver.HStore(1912): f1d952fb54c89ff06ad39296e8b9a210/cf is initiating minor compaction (all files)
2023-07-23 10:14:32,600 INFO [RS:0;jenkins-hbase4:46313-shortCompactions-0] regionserver.HRegion(2259): Starting compaction of f1d952fb54c89ff06ad39296e8b9a210/cf in TestCP,,1690107266243.f1d952fb54c89ff06ad39296e8b9a210.
2023-07-23 10:14:32,600 INFO [RS:0;jenkins-hbase4:46313-shortCompactions-0] regionserver.HStore(1468): Starting compaction of [hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/cf/7d29403460974928bf121ae00ee92d72, hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/cf/40fbdb49b2c04dccb344da4b1c8b2b89, hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/cf/9966ca9d07d84001b7315bdd59b99935] into tmpdir=hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/.tmp, totalSize=15.3 K
2023-07-23 10:14:32,601 DEBUG [RS:0;jenkins-hbase4:46313-shortCompactions-0] compactions.Compactor(207): Compacting 7d29403460974928bf121ae00ee92d72, keycount=2, bloomtype=ROW, size=5.7 K, encoding=NONE, compression=NONE, seqNum=7457, earliestPutTs=1730669841336320
2023-07-23 10:14:32,601 DEBUG [RS:0;jenkins-hbase4:46313-shortCompactions-0] compactions.Compactor(207): Compacting 40fbdb49b2c04dccb344da4b1c8b2b89, keycount=2, bloomtype=ROW, size=4.8 K, encoding=NONE, compression=NONE, seqNum=7756, earliestPutTs=1730669846583296
2023-07-23 10:14:32,602 DEBUG [RS:0;jenkins-hbase4:46313-shortCompactions-0] compactions.Compactor(207): Compacting 9966ca9d07d84001b7315bdd59b99935, keycount=2, bloomtype=ROW, size=4.8 K, encoding=NONE, compression=NONE, seqNum=8056, earliestPutTs=1730669846800384
2023-07-23 10:14:32,621 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46313] regionserver.HRegion(9158): Flush requested on f1d952fb54c89ff06ad39296e8b9a210
2023-07-23 10:14:32,623 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing f1d952fb54c89ff06ad39296e8b9a210 1/1 column families, dataSize=20.67 KB heapSize=64.56 KB
2023-07-23 10:14:32,624 INFO [RS:0;jenkins-hbase4:46313-shortCompactions-0] throttle.PressureAwareThroughputController(145): f1d952fb54c89ff06ad39296e8b9a210#cf#compaction#33 average throughput is 0.07 MB/second, slept 0 time(s) and total slept time is 0 ms. 0 active operations remaining, total limit is 50.00 MB/second
2023-07-23 10:14:32,650 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=20.88 KB at sequenceid=8356 (bloomFilter=true), to=hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/.tmp/cf/64a317c44fb0405fbdaae0606d517dad
2023-07-23 10:14:32,658 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/.tmp/cf/64a317c44fb0405fbdaae0606d517dad as hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/cf/64a317c44fb0405fbdaae0606d517dad
2023-07-23 10:14:32,661 DEBUG [RS:0;jenkins-hbase4:46313-shortCompactions-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/.tmp/cf/845e90f096bc4c90b3999922d137fa30 as hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/cf/845e90f096bc4c90b3999922d137fa30
2023-07-23 10:14:32,666 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/cf/64a317c44fb0405fbdaae0606d517dad, entries=2, sequenceid=8356, filesize=4.8 K
2023-07-23 10:14:32,667 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~20.88 KB/21384, heapSize ~65.20 KB/66768, currentSize=7.66 KB/7848 for f1d952fb54c89ff06ad39296e8b9a210 in 46ms, sequenceid=8356, compaction requested=false
2023-07-23 10:14:32,667 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for f1d952fb54c89ff06ad39296e8b9a210:
2023-07-23 10:14:32,670 INFO [RS:0;jenkins-hbase4:46313-shortCompactions-0] regionserver.HStore(1652): Completed compaction of 3 (all) file(s) in f1d952fb54c89ff06ad39296e8b9a210/cf of f1d952fb54c89ff06ad39296e8b9a210 into 845e90f096bc4c90b3999922d137fa30(size=5.8 K), total size for store is 10.6 K. This selection was in queue for 0sec, and took 0sec to execute.
2023-07-23 10:14:32,670 DEBUG [RS:0;jenkins-hbase4:46313-shortCompactions-0] regionserver.HRegion(2289): Compaction status journal for f1d952fb54c89ff06ad39296e8b9a210:
2023-07-23 10:14:32,670 INFO [RS:0;jenkins-hbase4:46313-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(627): Completed compaction region=TestCP,,1690107266243.f1d952fb54c89ff06ad39296e8b9a210., storeName=f1d952fb54c89ff06ad39296e8b9a210/cf, priority=13, startTime=1690107272597; duration=0sec
2023-07-23 10:14:32,671 DEBUG [RS:0;jenkins-hbase4:46313-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(659): Status compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0
2023-07-23 10:14:32,708 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=46313] regionserver.HRegion(9158): Flush requested on f1d952fb54c89ff06ad39296e8b9a210
2023-07-23 10:14:32,708 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing f1d952fb54c89ff06ad39296e8b9a210 1/1 column families, dataSize=20.67 KB heapSize=64.56 KB
2023-07-23 10:14:32,818 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=20.88 KB at sequenceid=8657 (bloomFilter=true), to=hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/.tmp/cf/384947ae9a95497fa0ee02862d58519e
2023-07-23 10:14:32,837 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/.tmp/cf/384947ae9a95497fa0ee02862d58519e as hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/cf/384947ae9a95497fa0ee02862d58519e
2023-07-23 10:14:32,846 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/cf/384947ae9a95497fa0ee02862d58519e, entries=2, sequenceid=8657, filesize=4.8 K
2023-07-23 10:14:32,850 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~20.88 KB/21384, heapSize ~65.20 KB/66768, currentSize=32.27 KB/33048 for f1d952fb54c89ff06ad39296e8b9a210 in 142ms, sequenceid=8657, compaction requested=true
2023-07-23 10:14:32,850 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for f1d952fb54c89ff06ad39296e8b9a210:
2023-07-23 10:14:32,850 DEBUG [MemStoreFlusher.0] regionserver.CompactSplit(385): Small Compaction requested: system; Because: MemStoreFlusher.0; compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0
2023-07-23 10:14:32,850 DEBUG [RS:0;jenkins-hbase4:46313-shortCompactions-0] compactions.SortedCompactionPolicy(75): Selecting compaction from 3 store files, 0 compacting, 3 eligible, 16 blocking
2023-07-23 10:14:32,850 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=46313] regionserver.HRegion(9158): Flush requested on f1d952fb54c89ff06ad39296e8b9a210
2023-07-23 10:14:32,850 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing f1d952fb54c89ff06ad39296e8b9a210 1/1 column families, dataSize=32.48 KB heapSize=101.31 KB
2023-07-23 10:14:32,853 DEBUG [RS:0;jenkins-hbase4:46313-shortCompactions-0] compactions.ExploringCompactionPolicy(116): Exploring compaction algorithm has selected 3 files of size 15780 starting at candidate #0 after considering 1 permutations with 1 in ratio
2023-07-23 10:14:32,853 DEBUG [RS:0;jenkins-hbase4:46313-shortCompactions-0] regionserver.HStore(1912): f1d952fb54c89ff06ad39296e8b9a210/cf is initiating minor compaction (all files)
2023-07-23 10:14:32,854 INFO [RS:0;jenkins-hbase4:46313-shortCompactions-0] regionserver.HRegion(2259): Starting compaction of f1d952fb54c89ff06ad39296e8b9a210/cf in TestCP,,1690107266243.f1d952fb54c89ff06ad39296e8b9a210.
2023-07-23 10:14:32,854 INFO [RS:0;jenkins-hbase4:46313-shortCompactions-0] regionserver.HStore(1468): Starting compaction of [hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/cf/845e90f096bc4c90b3999922d137fa30, hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/cf/64a317c44fb0405fbdaae0606d517dad, hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/cf/384947ae9a95497fa0ee02862d58519e] into tmpdir=hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/.tmp, totalSize=15.4 K
2023-07-23 10:14:32,854 DEBUG [RS:0;jenkins-hbase4:46313-shortCompactions-0] compactions.Compactor(207): Compacting 845e90f096bc4c90b3999922d137fa30, keycount=2, bloomtype=ROW, size=5.8 K, encoding=NONE, compression=NONE, seqNum=8056, earliestPutTs=1730669841336320
2023-07-23 10:14:32,855 DEBUG [RS:0;jenkins-hbase4:46313-shortCompactions-0] compactions.Compactor(207): Compacting 64a317c44fb0405fbdaae0606d517dad, keycount=2, bloomtype=ROW, size=4.8 K, encoding=NONE, compression=NONE, seqNum=8356, earliestPutTs=1730669847029760
2023-07-23 10:14:32,856 DEBUG [RS:0;jenkins-hbase4:46313-shortCompactions-0] compactions.Compactor(207): Compacting 384947ae9a95497fa0ee02862d58519e, keycount=2, bloomtype=ROW, size=4.8 K, encoding=NONE, compression=NONE, seqNum=8657, earliestPutTs=1730669847166977
2023-07-23 10:14:32,887 INFO [RS:0;jenkins-hbase4:46313-shortCompactions-0] throttle.PressureAwareThroughputController(145): f1d952fb54c89ff06ad39296e8b9a210#cf#compaction#37 average throughput is unlimited, slept 0 time(s) and total slept time is 0 ms. 0 active operations remaining, total limit is 50.00 MB/second
2023-07-23 10:14:33,054 DEBUG [RS:0;jenkins-hbase4:46313-shortCompactions-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/.tmp/cf/cd4bf55986da412281a69c2b80e6a488 as hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/cf/cd4bf55986da412281a69c2b80e6a488
2023-07-23 10:14:33,066 INFO [RS:0;jenkins-hbase4:46313-shortCompactions-0] regionserver.HStore(1652): Completed compaction of 3 (all) file(s) in f1d952fb54c89ff06ad39296e8b9a210/cf of f1d952fb54c89ff06ad39296e8b9a210 into cd4bf55986da412281a69c2b80e6a488(size=5.9 K), total size for store is 5.9 K. This selection was in queue for 0sec, and took 0sec to execute.
2023-07-23 10:14:33,066 DEBUG [RS:0;jenkins-hbase4:46313-shortCompactions-0] regionserver.HRegion(2289): Compaction status journal for f1d952fb54c89ff06ad39296e8b9a210:
2023-07-23 10:14:33,067 INFO [RS:0;jenkins-hbase4:46313-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(627): Completed compaction region=TestCP,,1690107266243.f1d952fb54c89ff06ad39296e8b9a210., storeName=f1d952fb54c89ff06ad39296e8b9a210/cf, priority=13, startTime=1690107272850; duration=0sec
2023-07-23 10:14:33,067 DEBUG [RS:0;jenkins-hbase4:46313-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(659): Status compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0
2023-07-23 10:14:33,203 WARN [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=46313] regionserver.HRegion(4965): Region is too busy due to exceeding memstore size limit.
org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=256.0 K, regionName=f1d952fb54c89ff06ad39296e8b9a210, server=jenkins-hbase4.apache.org,46313,1690107262103
	at org.apache.hadoop.hbase.regionserver.HRegion.checkResources(HRegion.java:4963)
	at org.apache.hadoop.hbase.regionserver.HRegion.increment(HRegion.java:8417)
	at org.apache.hadoop.hbase.regionserver.RSRpcServices.increment(RSRpcServices.java:713)
	at org.apache.hadoop.hbase.regionserver.RSRpcServices.mutate(RSRpcServices.java:2966)
	at org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:44994)
	at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387)
	at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132)
	at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369)
	at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349)
2023-07-23 10:14:33,204 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=46313] ipc.CallRunner(144): callId: 9829 service: ClientService methodName: Mutate size: 199 connection: 172.31.14.131:36996 deadline: 1690107333200, exception=org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=256.0 K, regionName=f1d952fb54c89ff06ad39296e8b9a210, server=jenkins-hbase4.apache.org,46313,1690107262103
2023-07-23 10:14:33,205 WARN [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=46313] regionserver.HRegion(4965): Region is too busy due to exceeding memstore size limit.
org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=256.0 K, regionName=f1d952fb54c89ff06ad39296e8b9a210, server=jenkins-hbase4.apache.org,46313,1690107262103
	at org.apache.hadoop.hbase.regionserver.HRegion.checkResources(HRegion.java:4963)
	at org.apache.hadoop.hbase.regionserver.HRegion.increment(HRegion.java:8417)
	at org.apache.hadoop.hbase.regionserver.RSRpcServices.increment(RSRpcServices.java:713)
	at org.apache.hadoop.hbase.regionserver.RSRpcServices.mutate(RSRpcServices.java:2966)
	at org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:44994)
	at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387)
	at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132)
	at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369)
	at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349)
2023-07-23 10:14:33,208 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=46313] ipc.CallRunner(144): callId: 9830 service: ClientService methodName: Mutate size: 199 connection: 172.31.14.131:36996 deadline: 1690107333205, exception=org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=256.0 K, regionName=f1d952fb54c89ff06ad39296e8b9a210, server=jenkins-hbase4.apache.org,46313,1690107262103
2023-07-23 10:14:33,208 WARN [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=46313] regionserver.HRegion(4965): Region is too busy due to exceeding memstore size limit.
org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=256.0 K, regionName=f1d952fb54c89ff06ad39296e8b9a210, server=jenkins-hbase4.apache.org,46313,1690107262103
	at org.apache.hadoop.hbase.regionserver.HRegion.checkResources(HRegion.java:4963)
	at org.apache.hadoop.hbase.regionserver.HRegion.increment(HRegion.java:8417)
	at org.apache.hadoop.hbase.regionserver.RSRpcServices.increment(RSRpcServices.java:713)
	at org.apache.hadoop.hbase.regionserver.RSRpcServices.mutate(RSRpcServices.java:2966)
	at org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:44994)
	at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387)
	at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132)
	at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369)
	at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349)
2023-07-23 10:14:33,208 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=46313] ipc.CallRunner(144): callId: 9831 service: ClientService methodName: Mutate size: 199 connection: 172.31.14.131:36996 deadline: 1690107333208, exception=org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=256.0 K, regionName=f1d952fb54c89ff06ad39296e8b9a210, server=jenkins-hbase4.apache.org,46313,1690107262103
2023-07-23 10:14:33,209 WARN [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=46313] regionserver.HRegion(4965): Region is too busy due to exceeding memstore size limit.
org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=256.0 K, regionName=f1d952fb54c89ff06ad39296e8b9a210, server=jenkins-hbase4.apache.org,46313,1690107262103
	at org.apache.hadoop.hbase.regionserver.HRegion.checkResources(HRegion.java:4963)
	at org.apache.hadoop.hbase.regionserver.HRegion.increment(HRegion.java:8417)
	at org.apache.hadoop.hbase.regionserver.RSRpcServices.increment(RSRpcServices.java:713)
	at org.apache.hadoop.hbase.regionserver.RSRpcServices.mutate(RSRpcServices.java:2966)
	at org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:44994)
	at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387)
	at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132)
	at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369)
	at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349)
2023-07-23 10:14:33,209 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=46313] ipc.CallRunner(144): callId: 9832 service: ClientService methodName: Mutate size: 199 connection: 172.31.14.131:36996 deadline: 1690107333209, exception=org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=256.0 K, regionName=f1d952fb54c89ff06ad39296e8b9a210, server=jenkins-hbase4.apache.org,46313,1690107262103
2023-07-23 10:14:33,209 WARN [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=46313] regionserver.HRegion(4965): Region is too busy due to exceeding memstore size limit.
org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=256.0 K, regionName=f1d952fb54c89ff06ad39296e8b9a210, server=jenkins-hbase4.apache.org,46313,1690107262103
	at org.apache.hadoop.hbase.regionserver.HRegion.checkResources(HRegion.java:4963)
	at org.apache.hadoop.hbase.regionserver.HRegion.increment(HRegion.java:8417)
	at org.apache.hadoop.hbase.regionserver.RSRpcServices.increment(RSRpcServices.java:713)
	at org.apache.hadoop.hbase.regionserver.RSRpcServices.mutate(RSRpcServices.java:2966)
	at org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:44994)
	at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387)
	at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132)
	at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369)
	at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349)
2023-07-23 10:14:33,209 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=46313] ipc.CallRunner(144): callId: 9833 service: ClientService methodName: Mutate size: 199 connection: 172.31.14.131:36996 deadline: 1690107333209, exception=org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=256.0 K, regionName=f1d952fb54c89ff06ad39296e8b9a210, server=jenkins-hbase4.apache.org,46313,1690107262103
2023-07-23 10:14:33,215 WARN [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=46313] regionserver.HRegion(4965): Region is too busy due to exceeding memstore size limit.
org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=256.0 K, regionName=f1d952fb54c89ff06ad39296e8b9a210, server=jenkins-hbase4.apache.org,46313,1690107262103
	at org.apache.hadoop.hbase.regionserver.HRegion.checkResources(HRegion.java:4963)
	at org.apache.hadoop.hbase.regionserver.HRegion.increment(HRegion.java:8417)
	at org.apache.hadoop.hbase.regionserver.RSRpcServices.increment(RSRpcServices.java:713)
	at org.apache.hadoop.hbase.regionserver.RSRpcServices.mutate(RSRpcServices.java:2966)
	at org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:44994)
	at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387)
	at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132)
	at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369)
	at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349)
2023-07-23 10:14:33,215 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=46313] ipc.CallRunner(144): callId: 9834 service: ClientService methodName: Mutate size: 199 connection: 172.31.14.131:36996 deadline: 1690107333215, exception=org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=256.0 K, regionName=f1d952fb54c89ff06ad39296e8b9a210, server=jenkins-hbase4.apache.org,46313,1690107262103
2023-07-23 10:14:33,215 WARN [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=46313] regionserver.HRegion(4965): Region is too busy due to exceeding memstore size limit.
org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=256.0 K, regionName=f1d952fb54c89ff06ad39296e8b9a210, server=jenkins-hbase4.apache.org,46313,1690107262103
	at org.apache.hadoop.hbase.regionserver.HRegion.checkResources(HRegion.java:4963)
	at org.apache.hadoop.hbase.regionserver.HRegion.increment(HRegion.java:8417)
	at org.apache.hadoop.hbase.regionserver.RSRpcServices.increment(RSRpcServices.java:713)
	at org.apache.hadoop.hbase.regionserver.RSRpcServices.mutate(RSRpcServices.java:2966)
	at org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:44994)
	at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387)
	at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132)
	at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369)
	at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349)
2023-07-23 10:14:33,215 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=46313] ipc.CallRunner(144): callId: 9835 service: ClientService methodName: Mutate size: 200 connection: 172.31.14.131:36996 deadline: 1690107333215, exception=org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=256.0 K, regionName=f1d952fb54c89ff06ad39296e8b9a210, server=jenkins-hbase4.apache.org,46313,1690107262103
2023-07-23 10:14:33,223 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46313] regionserver.HRegion(4965): Region is too busy due to exceeding memstore size limit.
org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=256.0 K, regionName=f1d952fb54c89ff06ad39296e8b9a210, server=jenkins-hbase4.apache.org,46313,1690107262103
	at org.apache.hadoop.hbase.regionserver.HRegion.checkResources(HRegion.java:4963)
	at org.apache.hadoop.hbase.regionserver.HRegion.increment(HRegion.java:8417)
	at org.apache.hadoop.hbase.regionserver.RSRpcServices.increment(RSRpcServices.java:713)
	at org.apache.hadoop.hbase.regionserver.RSRpcServices.mutate(RSRpcServices.java:2966)
	at org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:44994)
	at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387)
	at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132)
	at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369)
	at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349)
2023-07-23 10:14:33,224 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46313] ipc.CallRunner(144): callId: 9836 service: ClientService methodName: Mutate size: 199 connection: 172.31.14.131:36996 deadline: 1690107333223, exception=org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=256.0 K, regionName=f1d952fb54c89ff06ad39296e8b9a210, server=jenkins-hbase4.apache.org,46313,1690107262103
2023-07-23 10:14:33,319 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46313] regionserver.HRegion(4965): Region is too busy due to exceeding memstore size limit.
org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=256.0 K, regionName=f1d952fb54c89ff06ad39296e8b9a210, server=jenkins-hbase4.apache.org,46313,1690107262103
	at org.apache.hadoop.hbase.regionserver.HRegion.checkResources(HRegion.java:4963)
	at org.apache.hadoop.hbase.regionserver.HRegion.increment(HRegion.java:8417)
	at org.apache.hadoop.hbase.regionserver.RSRpcServices.increment(RSRpcServices.java:713)
	at org.apache.hadoop.hbase.regionserver.RSRpcServices.mutate(RSRpcServices.java:2966)
	at org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:44994)
	at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387)
	at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132)
	at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369)
	at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349)
2023-07-23 10:14:33,319 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46313] ipc.CallRunner(144): callId: 9839 service: ClientService methodName: Mutate size: 199 connection: 172.31.14.131:36996 deadline: 1690107333319, exception=org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=256.0 K, regionName=f1d952fb54c89ff06ad39296e8b9a210, server=jenkins-hbase4.apache.org,46313,1690107262103
2023-07-23 10:14:33,319 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46313] regionserver.HRegion(4965): Region is too busy due to exceeding memstore size limit.
org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=256.0 K, regionName=f1d952fb54c89ff06ad39296e8b9a210, server=jenkins-hbase4.apache.org,46313,1690107262103
	at org.apache.hadoop.hbase.regionserver.HRegion.checkResources(HRegion.java:4963)
	at org.apache.hadoop.hbase.regionserver.HRegion.increment(HRegion.java:8417)
	at org.apache.hadoop.hbase.regionserver.RSRpcServices.increment(RSRpcServices.java:713)
	at org.apache.hadoop.hbase.regionserver.RSRpcServices.mutate(RSRpcServices.java:2966)
	at org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:44994)
	at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387)
	at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132)
	at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369)
	at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349)
2023-07-23 10:14:33,319 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46313] ipc.CallRunner(144): callId: 9844 service: ClientService methodName: Mutate size: 199 connection: 172.31.14.131:36996 deadline: 1690107333319, exception=org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=256.0 K, regionName=f1d952fb54c89ff06ad39296e8b9a210, server=jenkins-hbase4.apache.org,46313,1690107262103
2023-07-23 10:14:33,320 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46313] regionserver.HRegion(4965): Region is too busy due to exceeding memstore size limit.
org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=256.0 K, regionName=f1d952fb54c89ff06ad39296e8b9a210, server=jenkins-hbase4.apache.org,46313,1690107262103
	at org.apache.hadoop.hbase.regionserver.HRegion.checkResources(HRegion.java:4963)
	at org.apache.hadoop.hbase.regionserver.HRegion.increment(HRegion.java:8417)
	at org.apache.hadoop.hbase.regionserver.RSRpcServices.increment(RSRpcServices.java:713)
	at org.apache.hadoop.hbase.regionserver.RSRpcServices.mutate(RSRpcServices.java:2966)
	at org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:44994)
	at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387)
	at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132)
	at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369)
	at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349)
2023-07-23 10:14:33,320 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46313] ipc.CallRunner(144): callId: 9847 service: ClientService methodName: Mutate size: 200 connection: 172.31.14.131:36996 deadline: 1690107333319, exception=org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=256.0 K, regionName=f1d952fb54c89ff06ad39296e8b9a210, server=jenkins-hbase4.apache.org,46313,1690107262103
2023-07-23 10:14:33,320 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46313] regionserver.HRegion(4965): Region is too busy due to exceeding memstore size limit.
org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=256.0 K, regionName=f1d952fb54c89ff06ad39296e8b9a210, server=jenkins-hbase4.apache.org,46313,1690107262103
	at org.apache.hadoop.hbase.regionserver.HRegion.checkResources(HRegion.java:4963)
	at org.apache.hadoop.hbase.regionserver.HRegion.increment(HRegion.java:8417)
	at org.apache.hadoop.hbase.regionserver.RSRpcServices.increment(RSRpcServices.java:713)
	at org.apache.hadoop.hbase.regionserver.RSRpcServices.mutate(RSRpcServices.java:2966)
	at org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:44994)
	at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387)
	at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132)
	at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369)
	at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349)
2023-07-23 10:14:33,320 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46313] ipc.CallRunner(144): callId: 9848 service: ClientService methodName: Mutate size: 199 connection: 172.31.14.131:36996 deadline: 1690107333319, exception=org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=256.0 K, regionName=f1d952fb54c89ff06ad39296e8b9a210, server=jenkins-hbase4.apache.org,46313,1690107262103
2023-07-23 10:14:33,319 WARN [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=46313] regionserver.HRegion(4965): Region is too busy due to exceeding memstore size limit.
org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=256.0 K, regionName=f1d952fb54c89ff06ad39296e8b9a210, server=jenkins-hbase4.apache.org,46313,1690107262103
	at org.apache.hadoop.hbase.regionserver.HRegion.checkResources(HRegion.java:4963)
	at org.apache.hadoop.hbase.regionserver.HRegion.increment(HRegion.java:8417)
	at org.apache.hadoop.hbase.regionserver.RSRpcServices.increment(RSRpcServices.java:713)
	at org.apache.hadoop.hbase.regionserver.RSRpcServices.mutate(RSRpcServices.java:2966)
	at org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:44994)
	at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387)
	at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132)
	at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369)
	at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349)
2023-07-23 10:14:33,320 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=46313] ipc.CallRunner(144): callId: 9841 service: ClientService methodName: Mutate size: 199 connection: 172.31.14.131:36996 deadline: 1690107333319, exception=org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=256.0 K, regionName=f1d952fb54c89ff06ad39296e8b9a210, server=jenkins-hbase4.apache.org,46313,1690107262103
2023-07-23 10:14:33,319 WARN [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=46313] regionserver.HRegion(4965): Region is too busy due to exceeding memstore size limit.
org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=256.0 K, regionName=f1d952fb54c89ff06ad39296e8b9a210, server=jenkins-hbase4.apache.org,46313,1690107262103
	at org.apache.hadoop.hbase.regionserver.HRegion.checkResources(HRegion.java:4963)
	at org.apache.hadoop.hbase.regionserver.HRegion.increment(HRegion.java:8417)
	at org.apache.hadoop.hbase.regionserver.RSRpcServices.increment(RSRpcServices.java:713)
	at org.apache.hadoop.hbase.regionserver.RSRpcServices.mutate(RSRpcServices.java:2966)
	at org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:44994)
	at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387)
	at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132)
	at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369)
	at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349)
2023-07-23 10:14:33,321 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=46313] ipc.CallRunner(144): callId: 9843 service: ClientService methodName: Mutate size: 199 connection: 172.31.14.131:36996 deadline: 1690107333319, exception=org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=256.0 K, regionName=f1d952fb54c89ff06ad39296e8b9a210, server=jenkins-hbase4.apache.org,46313,1690107262103
2023-07-23 10:14:33,325 WARN [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=46313] regionserver.HRegion(4965): Region is too busy due to exceeding memstore size limit.
org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=256.0 K, regionName=f1d952fb54c89ff06ad39296e8b9a210, server=jenkins-hbase4.apache.org,46313,1690107262103
	at org.apache.hadoop.hbase.regionserver.HRegion.checkResources(HRegion.java:4963)
	at org.apache.hadoop.hbase.regionserver.HRegion.increment(HRegion.java:8417)
	at org.apache.hadoop.hbase.regionserver.RSRpcServices.increment(RSRpcServices.java:713)
	at org.apache.hadoop.hbase.regionserver.RSRpcServices.mutate(RSRpcServices.java:2966)
	at org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:44994)
	at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387)
	at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132)
	at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369)
	at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349)
2023-07-23 10:14:33,325 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=46313] ipc.CallRunner(144): callId: 9850 service: ClientService methodName: Mutate size: 199 connection: 172.31.14.131:36996 deadline: 1690107333325, exception=org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=256.0 K, regionName=f1d952fb54c89ff06ad39296e8b9a210, server=jenkins-hbase4.apache.org,46313,1690107262103
2023-07-23 10:14:33,325 WARN [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=46313] regionserver.HRegion(4965): Region is too busy due to exceeding memstore size limit.
org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=256.0 K, regionName=f1d952fb54c89ff06ad39296e8b9a210, server=jenkins-hbase4.apache.org,46313,1690107262103
	at org.apache.hadoop.hbase.regionserver.HRegion.checkResources(HRegion.java:4963)
	at org.apache.hadoop.hbase.regionserver.HRegion.increment(HRegion.java:8417)
	at org.apache.hadoop.hbase.regionserver.RSRpcServices.increment(RSRpcServices.java:713)
	at org.apache.hadoop.hbase.regionserver.RSRpcServices.mutate(RSRpcServices.java:2966)
	at org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:44994)
	at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387)
	at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132)
	at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369)
	at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349)
2023-07-23 10:14:33,326 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=46313] ipc.CallRunner(144): callId: 9852 service: ClientService methodName: Mutate size: 199 connection: 172.31.14.131:36996 deadline: 1690107333325, exception=org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=256.0 K, regionName=f1d952fb54c89ff06ad39296e8b9a210, server=jenkins-hbase4.apache.org,46313,1690107262103
2023-07-23 10:14:33,360 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=32.70 KB at sequenceid=9125 (bloomFilter=true), to=hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/.tmp/cf/7af00871314e4c868ea0f21b6b5c44ec
2023-07-23 10:14:33,381 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/.tmp/cf/7af00871314e4c868ea0f21b6b5c44ec as
hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/cf/7af00871314e4c868ea0f21b6b5c44ec
2023-07-23 10:14:33,396 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/cf/7af00871314e4c868ea0f21b6b5c44ec, entries=2, sequenceid=9125, filesize=4.8 K
2023-07-23 10:14:33,397 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~32.70 KB/33480, heapSize ~101.95 KB/104400, currentSize=49.64 KB/50832 for f1d952fb54c89ff06ad39296e8b9a210 in 547ms, sequenceid=9125, compaction requested=false
2023-07-23 10:14:33,397 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for f1d952fb54c89ff06ad39296e8b9a210:
2023-07-23 10:14:33,523 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=46313] regionserver.HRegion(9158): Flush requested on f1d952fb54c89ff06ad39296e8b9a210
2023-07-23 10:14:33,524 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing f1d952fb54c89ff06ad39296e8b9a210 1/1 column families, dataSize=49.85 KB heapSize=155.34 KB
2023-07-23 10:14:33,578 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=49.99 KB at sequenceid=9840 (bloomFilter=true), to=hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/.tmp/cf/ddc0a143c5324db8813701229159edd4
2023-07-23 10:14:33,590 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/.tmp/cf/ddc0a143c5324db8813701229159edd4 as hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/cf/ddc0a143c5324db8813701229159edd4
2023-07-23 10:14:33,616 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/cf/ddc0a143c5324db8813701229159edd4, entries=2, sequenceid=9840, filesize=4.8 K
2023-07-23 10:14:33,620 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~49.99 KB/51192, heapSize ~155.77 KB/159504, currentSize=13.01 KB/13320 for f1d952fb54c89ff06ad39296e8b9a210 in 95ms, sequenceid=9840, compaction requested=true
2023-07-23 10:14:33,620 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for f1d952fb54c89ff06ad39296e8b9a210:
2023-07-23 10:14:33,620 DEBUG [MemStoreFlusher.0] regionserver.CompactSplit(385): Small Compaction requested: system; Because: MemStoreFlusher.0; compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0
2023-07-23 10:14:33,620 DEBUG [RS:0;jenkins-hbase4:46313-shortCompactions-0] compactions.SortedCompactionPolicy(75): Selecting compaction from 3 store files, 0 compacting, 3 eligible, 16 blocking
2023-07-23 10:14:33,622 DEBUG [RS:0;jenkins-hbase4:46313-shortCompactions-0] compactions.ExploringCompactionPolicy(116): Exploring compaction algorithm has selected 3 files of size 15882 starting at candidate #0 after considering 1 permutations with 1 in ratio
2023-07-23 10:14:33,623 DEBUG [RS:0;jenkins-hbase4:46313-shortCompactions-0] regionserver.HStore(1912): f1d952fb54c89ff06ad39296e8b9a210/cf is initiating minor compaction (all files)
2023-07-23 10:14:33,623 INFO [RS:0;jenkins-hbase4:46313-shortCompactions-0] regionserver.HRegion(2259): Starting compaction of f1d952fb54c89ff06ad39296e8b9a210/cf in TestCP,,1690107266243.f1d952fb54c89ff06ad39296e8b9a210.
2023-07-23 10:14:33,623 INFO [RS:0;jenkins-hbase4:46313-shortCompactions-0] regionserver.HStore(1468): Starting compaction of [hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/cf/cd4bf55986da412281a69c2b80e6a488, hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/cf/7af00871314e4c868ea0f21b6b5c44ec, hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/cf/ddc0a143c5324db8813701229159edd4] into tmpdir=hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/.tmp, totalSize=15.5 K
2023-07-23 10:14:33,624 DEBUG [RS:0;jenkins-hbase4:46313-shortCompactions-0] compactions.Compactor(207): Compacting cd4bf55986da412281a69c2b80e6a488, keycount=2, bloomtype=ROW, size=5.9 K, encoding=NONE, compression=NONE, seqNum=8657, earliestPutTs=1730669841336320
2023-07-23 10:14:33,624 DEBUG [RS:0;jenkins-hbase4:46313-shortCompactions-0] compactions.Compactor(207): Compacting 7af00871314e4c868ea0f21b6b5c44ec, keycount=2, bloomtype=ROW, size=4.8 K, encoding=NONE, compression=NONE, seqNum=9125, earliestPutTs=1730669847252995
2023-07-23 10:14:33,625 DEBUG [RS:0;jenkins-hbase4:46313-shortCompactions-0] compactions.Compactor(207): Compacting ddc0a143c5324db8813701229159edd4, keycount=2, bloomtype=ROW, size=4.8 K, encoding=NONE, compression=NONE, seqNum=9840, earliestPutTs=1730669847401472
2023-07-23 10:14:33,668 INFO [RS:0;jenkins-hbase4:46313-shortCompactions-0] throttle.PressureAwareThroughputController(145): f1d952fb54c89ff06ad39296e8b9a210#cf#compaction#39 average throughput is unlimited, slept 0 time(s) and total slept time is 0 ms. 0 active operations remaining, total limit is 50.00 MB/second
2023-07-23 10:14:33,719 INFO [Listener at localhost/34007] regionserver.HRegion(2745): Flushing f1d952fb54c89ff06ad39296e8b9a210 1/1 column families, dataSize=17.86 KB heapSize=55.81 KB
2023-07-23 10:14:33,720 DEBUG [RS:0;jenkins-hbase4:46313-shortCompactions-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/.tmp/cf/e3f08745062b4c2b9766da0a6d5cff1c as hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/cf/e3f08745062b4c2b9766da0a6d5cff1c
2023-07-23 10:14:33,729 INFO [RS:0;jenkins-hbase4:46313-shortCompactions-0] regionserver.HStore(1652): Completed compaction of 3 (all) file(s) in f1d952fb54c89ff06ad39296e8b9a210/cf of f1d952fb54c89ff06ad39296e8b9a210 into e3f08745062b4c2b9766da0a6d5cff1c(size=6.0 K), total size for store is 6.0 K. This selection was in queue for 0sec, and took 0sec to execute.
2023-07-23 10:14:33,729 DEBUG [RS:0;jenkins-hbase4:46313-shortCompactions-0] regionserver.HRegion(2289): Compaction status journal for f1d952fb54c89ff06ad39296e8b9a210: 2023-07-23 10:14:33,729 INFO [RS:0;jenkins-hbase4:46313-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(627): Completed compaction region=TestCP,,1690107266243.f1d952fb54c89ff06ad39296e8b9a210., storeName=f1d952fb54c89ff06ad39296e8b9a210/cf, priority=13, startTime=1690107273620; duration=0sec 2023-07-23 10:14:33,729 DEBUG [RS:0;jenkins-hbase4:46313-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(659): Status compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-07-23 10:14:33,739 INFO [Listener at localhost/34007] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=17.86 KB at sequenceid=10097 (bloomFilter=true), to=hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/.tmp/cf/e51bf898f0af4729a3b318ec80677fe3 2023-07-23 10:14:33,747 DEBUG [Listener at localhost/34007] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/.tmp/cf/e51bf898f0af4729a3b318ec80677fe3 as hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/cf/e51bf898f0af4729a3b318ec80677fe3 2023-07-23 10:14:33,753 INFO [Listener at localhost/34007] regionserver.HStore(1080): Added hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/cf/e51bf898f0af4729a3b318ec80677fe3, entries=2, sequenceid=10097, filesize=4.8 K 2023-07-23 10:14:33,754 INFO [Listener at localhost/34007] regionserver.HRegion(2948): Finished flush of dataSize ~17.86 KB/18288, heapSize ~55.80 KB/57136, currentSize=0 B/0 for 
f1d952fb54c89ff06ad39296e8b9a210 in 35ms, sequenceid=10097, compaction requested=false 2023-07-23 10:14:33,755 DEBUG [Listener at localhost/34007] regionserver.HRegion(2446): Flush status journal for f1d952fb54c89ff06ad39296e8b9a210: 2023-07-23 10:14:33,755 DEBUG [Listener at localhost/34007] compactions.SortedCompactionPolicy(75): Selecting compaction from 2 store files, 0 compacting, 2 eligible, 16 blocking 2023-07-23 10:14:33,755 DEBUG [Listener at localhost/34007] regionserver.HStore(1912): f1d952fb54c89ff06ad39296e8b9a210/cf is initiating major compaction (all files) 2023-07-23 10:14:33,755 INFO [Listener at localhost/34007] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-23 10:14:33,755 INFO [Listener at localhost/34007] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-23 10:14:33,756 INFO [Listener at localhost/34007] regionserver.HRegion(2259): Starting compaction of f1d952fb54c89ff06ad39296e8b9a210/cf in TestCP,,1690107266243.f1d952fb54c89ff06ad39296e8b9a210. 
2023-07-23 10:14:33,756 INFO [Listener at localhost/34007] regionserver.HStore(1468): Starting compaction of [hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/cf/e3f08745062b4c2b9766da0a6d5cff1c, hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/cf/e51bf898f0af4729a3b318ec80677fe3] into tmpdir=hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/.tmp, totalSize=10.8 K 2023-07-23 10:14:33,756 DEBUG [Listener at localhost/34007] compactions.Compactor(207): Compacting e3f08745062b4c2b9766da0a6d5cff1c, keycount=2, bloomtype=ROW, size=6.0 K, encoding=NONE, compression=NONE, seqNum=9840, earliestPutTs=1730669841336320 2023-07-23 10:14:33,757 DEBUG [Listener at localhost/34007] compactions.Compactor(207): Compacting e51bf898f0af4729a3b318ec80677fe3, keycount=2, bloomtype=ROW, size=4.8 K, encoding=NONE, compression=NONE, seqNum=10097, earliestPutTs=1730669848089600 2023-07-23 10:14:33,765 INFO [Listener at localhost/34007] throttle.PressureAwareThroughputController(145): f1d952fb54c89ff06ad39296e8b9a210#cf#compaction#41 average throughput is 0.07 MB/second, slept 0 time(s) and total slept time is 0 ms. 
0 active operations remaining, total limit is 50.00 MB/second
2023-07-23 10:14:33,789 DEBUG [Listener at localhost/34007] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/.tmp/cf/621ac45f41304ad7a94b3c396435d1ae as hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/cf/621ac45f41304ad7a94b3c396435d1ae
2023-07-23 10:14:33,796 INFO [Listener at localhost/34007] regionserver.HStore(1652): Completed major compaction of 2 (all) file(s) in f1d952fb54c89ff06ad39296e8b9a210/cf of f1d952fb54c89ff06ad39296e8b9a210 into 621ac45f41304ad7a94b3c396435d1ae(size=6.1 K), total size for store is 6.1 K. This selection was in queue for 0sec, and took 0sec to execute.
2023-07-23 10:14:33,796 DEBUG [Listener at localhost/34007] regionserver.HRegion(2289): Compaction status journal for f1d952fb54c89ff06ad39296e8b9a210:
2023-07-23 10:14:33,828 INFO [Listener at localhost/34007] hbase.ResourceChecker(175): after: coprocessor.example.TestWriteHeavyIncrementObserver#test Thread=447 (was 413)
Potentially hanging thread: hconnection-0x6c5c034d-metaLookup-shared--pool-0
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_2097662139_17 at /127.0.0.1:49318 [Waiting for operation #8]
    sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
    sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
    sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
    sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
    org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335)
    org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
    org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
    org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
    java.io.BufferedInputStream.fill(BufferedInputStream.java:246)
    java.io.BufferedInputStream.read(BufferedInputStream.java:265)
    java.io.DataInputStream.readShort(DataInputStream.java:312)
    org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67)
    org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269)
    java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: RPCClient-NioEventLoopGroup-6-3
    sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
    sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
    sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
    sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
    org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62)
    org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883)
    org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997)
    org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
    java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_2097662139_17 at /127.0.0.1:60342 [Waiting for operation #7]
    sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
    sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
    sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
    sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
    org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335)
    org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
    org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
    org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
    java.io.BufferedInputStream.fill(BufferedInputStream.java:246)
    java.io.BufferedInputStream.read(BufferedInputStream.java:265)
    java.io.DataInputStream.readShort(DataInputStream.java:312)
    org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67)
    org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269)
    java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_2097662139_17 at /127.0.0.1:36998 [Waiting for operation #9]
    sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
    sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
    sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
    sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
    org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335)
    org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
    org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
    org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
    java.io.BufferedInputStream.fill(BufferedInputStream.java:246)
    java.io.BufferedInputStream.read(BufferedInputStream.java:265)
    java.io.DataInputStream.readShort(DataInputStream.java:312)
    org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67)
    org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269)
    java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_2097662139_17 at /127.0.0.1:60316 [Waiting for operation #4]
    sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
    sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
    sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
    sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
    org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335)
    org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
    org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
    org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
    java.io.BufferedInputStream.fill(BufferedInputStream.java:246)
    java.io.BufferedInputStream.read(BufferedInputStream.java:265)
    java.io.DataInputStream.readShort(DataInputStream.java:312)
    org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67)
    org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269)
    java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_2097662139_17 at /127.0.0.1:60402 [Waiting for operation #4]
    sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
    sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
    sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
    sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
    org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335)
    org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
    org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
    org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
    java.io.BufferedInputStream.fill(BufferedInputStream.java:246)
    java.io.BufferedInputStream.read(BufferedInputStream.java:265)
    java.io.DataInputStream.readShort(DataInputStream.java:312)
    org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67)
    org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269)
    java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_2097662139_17 at /127.0.0.1:60148 [Waiting for operation #3]
    sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
    sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
    sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
    sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
    org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335)
    org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
    org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
    org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
    java.io.BufferedInputStream.fill(BufferedInputStream.java:246)
    java.io.BufferedInputStream.read(BufferedInputStream.java:265)
    java.io.DataInputStream.readShort(DataInputStream.java:312)
    org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67)
    org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269)
    java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_2097662139_17 at /127.0.0.1:60480 [Waiting for operation #4]
    sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
    sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
    sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
    sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
    org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335)
    org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
    org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
    org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
    java.io.BufferedInputStream.fill(BufferedInputStream.java:246)
    java.io.BufferedInputStream.read(BufferedInputStream.java:265)
    java.io.DataInputStream.readShort(DataInputStream.java:312)
    org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67)
    org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269)
    java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_2097662139_17 at /127.0.0.1:35418 [Waiting for operation #6]
    sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
    sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
    sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
    sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
    org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335)
    org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
    org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
    org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
    java.io.BufferedInputStream.fill(BufferedInputStream.java:246)
    java.io.BufferedInputStream.read(BufferedInputStream.java:265)
    java.io.DataInputStream.readShort(DataInputStream.java:312)
    org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67)
    org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269)
    java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_2097662139_17 at /127.0.0.1:35404 [Waiting for operation #3]
    sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
    sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
    sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
    sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
    org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335)
    org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
    org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
    org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
    java.io.BufferedInputStream.fill(BufferedInputStream.java:246)
    java.io.BufferedInputStream.read(BufferedInputStream.java:265)
    java.io.DataInputStream.readShort(DataInputStream.java:312)
    org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67)
    org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269)
    java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_2097662139_17 at /127.0.0.1:60398 [Waiting for operation #13]
    sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
    sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
    sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
    sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
    org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335)
    org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
    org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
    org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
    java.io.BufferedInputStream.fill(BufferedInputStream.java:246)
    java.io.BufferedInputStream.read(BufferedInputStream.java:265)
    java.io.DataInputStream.readShort(DataInputStream.java:312)
    org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67)
    org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269)
    java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_2097662139_17 at /127.0.0.1:35348 [Waiting for operation #8]
    sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
    sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
    sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
    sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
    org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335)
    org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
    org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
    org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
    java.io.BufferedInputStream.fill(BufferedInputStream.java:246)
    java.io.BufferedInputStream.read(BufferedInputStream.java:265)
    java.io.DataInputStream.readShort(DataInputStream.java:312)
    org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67)
    org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269)
    java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_2097662139_17 at /127.0.0.1:60522 [Waiting for operation #5]
    sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
    sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
    sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
    sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
    org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335)
    org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
    org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
    org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
    java.io.BufferedInputStream.fill(BufferedInputStream.java:246)
    java.io.BufferedInputStream.read(BufferedInputStream.java:265)
    java.io.DataInputStream.readShort(DataInputStream.java:312)
    org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67)
    org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269)
    java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_2097662139_17 at /127.0.0.1:36988 [Waiting for operation #2]
    sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
    sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
    sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
    sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
    org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335)
    org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
    org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
    org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
    java.io.BufferedInputStream.fill(BufferedInputStream.java:246)
    java.io.BufferedInputStream.read(BufferedInputStream.java:265)
    java.io.DataInputStream.readShort(DataInputStream.java:312)
    org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67)
    org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269)
    java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1573002493_17 at /127.0.0.1:35498 [Waiting for operation #4]
    sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
    sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
    sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
    sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
    org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335)
    org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
    org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
    org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
    java.io.BufferedInputStream.fill(BufferedInputStream.java:246)
    java.io.BufferedInputStream.read(BufferedInputStream.java:265)
    java.io.DataInputStream.readShort(DataInputStream.java:312)
    org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67)
    org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269)
    java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_2097662139_17 at /127.0.0.1:41610 [Waiting for operation #14]
    sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
    sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
    sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
    sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
    org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335)
    org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
    org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
    org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
    java.io.BufferedInputStream.fill(BufferedInputStream.java:246)
    java.io.BufferedInputStream.read(BufferedInputStream.java:265)
    java.io.DataInputStream.readShort(DataInputStream.java:312)
    org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67)
    org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269)
    java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: hconnection-0x6c5c034d-shared-pool-2
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: RS:0;jenkins-hbase4:46313-shortCompactions-0
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
    java.util.concurrent.PriorityBlockingQueue.take(PriorityBlockingQueue.java:549)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_2097662139_17 at /127.0.0.1:60334 [Waiting for operation #2]
    sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
    sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
    sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
    sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
    org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335)
    org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
    org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
    org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
    java.io.BufferedInputStream.fill(BufferedInputStream.java:246)
    java.io.BufferedInputStream.read(BufferedInputStream.java:265)
    java.io.DataInputStream.readShort(DataInputStream.java:312)
    org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67)
    org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269)
    java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_2097662139_17 at /127.0.0.1:35274 [Waiting for operation #3]
    sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
    sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
    sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
    sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
    org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335)
    org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
    org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
    org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
    java.io.BufferedInputStream.fill(BufferedInputStream.java:246)
    java.io.BufferedInputStream.read(BufferedInputStream.java:265)
    java.io.DataInputStream.readShort(DataInputStream.java:312)
    org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67)
    org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269)
    java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_2097662139_17 at /127.0.0.1:41682 [Waiting for operation #3]
    sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
    sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
    sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
    sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
    org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335)
    org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
    org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
    org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
    java.io.BufferedInputStream.fill(BufferedInputStream.java:246)
    java.io.BufferedInputStream.read(BufferedInputStream.java:265)
    java.io.DataInputStream.readShort(DataInputStream.java:312)
    org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67)
    org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269)
    java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_2097662139_17 at /127.0.0.1:35286 [Waiting for operation #6]
    sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
    sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
    sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
    sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
    org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335)
    org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
    org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
    org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
    java.io.BufferedInputStream.fill(BufferedInputStream.java:246)
    java.io.BufferedInputStream.read(BufferedInputStream.java:265)
    java.io.DataInputStream.readShort(DataInputStream.java:312)
    org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67)
    org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269)
    java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: RS-EventLoopGroup-3-3
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997)
    org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
    java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_2097662139_17 at /127.0.0.1:35212 [Waiting for operation #6]
    sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
    sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
    sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
    sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
    org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335)
    org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
    org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
    org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
    java.io.BufferedInputStream.fill(BufferedInputStream.java:246)
    java.io.BufferedInputStream.read(BufferedInputStream.java:265)
    java.io.DataInputStream.readShort(DataInputStream.java:312)
    org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67)
    org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269)
    java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_2097662139_17 at /127.0.0.1:35270 [Waiting for operation #3]
    sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
    sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
    sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
    sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
    org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335)
    org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
    org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
    org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
    java.io.BufferedInputStream.fill(BufferedInputStream.java:246)
    java.io.BufferedInputStream.read(BufferedInputStream.java:265)
    java.io.DataInputStream.readShort(DataInputStream.java:312)
    org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67)
    org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269)
    java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_2097662139_17 at /127.0.0.1:60356 [Waiting for operation #10]
    sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
    sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
    sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
    sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
    org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335)
    org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
    org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
    org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
    java.io.BufferedInputStream.fill(BufferedInputStream.java:246)
    java.io.BufferedInputStream.read(BufferedInputStream.java:265)
    java.io.DataInputStream.readShort(DataInputStream.java:312)
org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_2097662139_17 at /127.0.0.1:35452 [Waiting for operation #9] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_2097662139_17 at /127.0.0.1:60108 [Waiting for operation #6] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) 
org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1573002493_17 at /127.0.0.1:60542 [Waiting for operation #8] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_2097662139_17 at /127.0.0.1:35244 [Waiting for operation #6] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) 
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
    org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335)
    org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
    org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
    org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
    java.io.BufferedInputStream.fill(BufferedInputStream.java:246)
    java.io.BufferedInputStream.read(BufferedInputStream.java:265)
    java.io.DataInputStream.readShort(DataInputStream.java:312)
    org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67)
    org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269)
    java.lang.Thread.run(Thread.java:750)

Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_2097662139_17 at /127.0.0.1:60468 [Waiting for operation #2]
    sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
    sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
    sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
    sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
    org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335)
    org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
    org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
    org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
    java.io.BufferedInputStream.fill(BufferedInputStream.java:246)
    java.io.BufferedInputStream.read(BufferedInputStream.java:265)
    java.io.DataInputStream.readShort(DataInputStream.java:312)
    org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67)
    org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269)
    java.lang.Thread.run(Thread.java:750)

Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_2097662139_17 at /127.0.0.1:60228 [Waiting for operation #4]
    sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
    sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
    sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
    sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
    org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335)
    org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
    org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
    org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
    java.io.BufferedInputStream.fill(BufferedInputStream.java:246)
    java.io.BufferedInputStream.read(BufferedInputStream.java:265)
    java.io.DataInputStream.readShort(DataInputStream.java:312)
    org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67)
    org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269)
    java.lang.Thread.run(Thread.java:750)

Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_2097662139_17 at /127.0.0.1:60536 [Waiting for operation #5]
    sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
    sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
    sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
    sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
    org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335)
    org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
    org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
    org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
    java.io.BufferedInputStream.fill(BufferedInputStream.java:246)
    java.io.BufferedInputStream.read(BufferedInputStream.java:265)
    java.io.DataInputStream.readShort(DataInputStream.java:312)
    org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67)
    org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269)
    java.lang.Thread.run(Thread.java:750)

Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_2097662139_17 at /127.0.0.1:60486 [Waiting for operation #5]
    sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
    sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
    sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
    sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
    org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335)
    org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
    org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
    org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
    java.io.BufferedInputStream.fill(BufferedInputStream.java:246)
    java.io.BufferedInputStream.read(BufferedInputStream.java:265)
    java.io.DataInputStream.readShort(DataInputStream.java:312)
    org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67)
    org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269)
    java.lang.Thread.run(Thread.java:750)

Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_2097662139_17 at /127.0.0.1:60574 [Waiting for operation #6]
    sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
    sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
    sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
    sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
    org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335)
    org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
    org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
    org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
    java.io.BufferedInputStream.fill(BufferedInputStream.java:246)
    java.io.BufferedInputStream.read(BufferedInputStream.java:265)
    java.io.DataInputStream.readShort(DataInputStream.java:312)
    org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67)
    org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269)
    java.lang.Thread.run(Thread.java:750)

Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_2097662139_17 at /127.0.0.1:60132 [Waiting for operation #18]
    sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
    sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
    sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
    sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
    org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335)
    org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
    org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
    org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
    java.io.BufferedInputStream.fill(BufferedInputStream.java:246)
    java.io.BufferedInputStream.read(BufferedInputStream.java:265)
    java.io.DataInputStream.readShort(DataInputStream.java:312)
    org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67)
    org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269)
    java.lang.Thread.run(Thread.java:750)

Potentially hanging thread: Timer for 'HBase' metrics system
    java.lang.Object.wait(Native Method)
    java.util.TimerThread.mainLoop(Timer.java:552)
    java.util.TimerThread.run(Timer.java:505)

Potentially hanging thread: hconnection-0x6c5c034d-shared-pool-1
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    java.lang.Thread.run(Thread.java:750)
 - Thread LEAK? -, OpenFileDescriptor=895 (was 731) - OpenFileDescriptor LEAK? -, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=451 (was 395) - SystemLoadAverage LEAK? -, ProcessCount=175 (was 178), AvailableMemoryMB=6410 (was 5984) - AvailableMemoryMB LEAK? -
2023-07-23 10:14:33,830 INFO  [Listener at localhost/34007] hbase.HBaseTestingUtility(1286): Shutting down minicluster
2023-07-23 10:14:33,830 INFO  [Listener at localhost/34007] client.ConnectionImplementation(1979): Closing master protocol: MasterService
2023-07-23 10:14:33,831 DEBUG [Listener at localhost/34007] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x3208489b to 127.0.0.1:60205
2023-07-23 10:14:33,831 DEBUG [Listener at localhost/34007] ipc.AbstractRpcClient(494): Stopping rpc client
2023-07-23 10:14:33,832 DEBUG [Listener at localhost/34007] util.JVMClusterUtil(237): Shutting down HBase Cluster
2023-07-23 10:14:33,832 DEBUG [Listener at localhost/34007] util.JVMClusterUtil(257): Found active master hash=1439797923, stopped=false
2023-07-23 10:14:33,832 INFO  [Listener at localhost/34007] master.ServerManager(901): Cluster shutdown requested of master=jenkins-hbase4.apache.org,34669,1690107260184
2023-07-23 10:14:33,834 DEBUG [Listener at localhost/34007-EventThread] zookeeper.ZKWatcher(600): regionserver:46561-0x10191acac940002, quorum=127.0.0.1:60205, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running
2023-07-23 10:14:33,835 DEBUG [Listener at localhost/34007-EventThread] zookeeper.ZKWatcher(600): master:34669-0x10191acac940000, quorum=127.0.0.1:60205, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running
2023-07-23 10:14:33,835 INFO  [Listener at localhost/34007] procedure2.ProcedureExecutor(629): Stopping
2023-07-23 10:14:33,835 DEBUG [Listener at localhost/34007-EventThread] zookeeper.ZKWatcher(600): master:34669-0x10191acac940000, quorum=127.0.0.1:60205, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase
2023-07-23 10:14:33,835 DEBUG [Listener at localhost/34007-EventThread] zookeeper.ZKWatcher(600): regionserver:45649-0x10191acac940003, quorum=127.0.0.1:60205, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running
2023-07-23 10:14:33,835 DEBUG [Listener at localhost/34007-EventThread] zookeeper.ZKWatcher(600): regionserver:46313-0x10191acac940001, quorum=127.0.0.1:60205, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running
2023-07-23 10:14:33,836 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:45649-0x10191acac940003, quorum=127.0.0.1:60205, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running
2023-07-23 10:14:33,836 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:34669-0x10191acac940000, quorum=127.0.0.1:60205, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running
2023-07-23 10:14:33,836 DEBUG [Listener at localhost/34007] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x7fe65171 to 127.0.0.1:60205
2023-07-23 10:14:33,836 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:46561-0x10191acac940002, quorum=127.0.0.1:60205, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running
2023-07-23 10:14:33,836 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:46313-0x10191acac940001, quorum=127.0.0.1:60205, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running
2023-07-23 10:14:33,837 INFO  [RS:1;jenkins-hbase4:46561] regionserver.HRegionServer(1064): Closing user regions
2023-07-23 10:14:33,837 DEBUG [Listener at localhost/34007] ipc.AbstractRpcClient(494): Stopping rpc client
2023-07-23 10:14:33,837 INFO  [RS:1;jenkins-hbase4:46561] regionserver.HRegionServer(3305): Received CLOSE for 063141635a1fa2d615b283545d656db0
2023-07-23 10:14:33,837 INFO  [Listener at localhost/34007] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,46313,1690107262103' *****
2023-07-23 10:14:33,837 INFO  [Listener at localhost/34007] regionserver.HRegionServer(2311): STOPPED: Shutdown requested
2023-07-23 10:14:33,837 INFO  [Listener at localhost/34007] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,46561,1690107262307' *****
2023-07-23 10:14:33,837 INFO  [Listener at localhost/34007] regionserver.HRegionServer(2311): STOPPED: Shutdown requested
2023-07-23 10:14:33,837 INFO  [Listener at localhost/34007] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,45649,1690107262489' *****
2023-07-23 10:14:33,838 INFO  [Listener at localhost/34007] regionserver.HRegionServer(2311): STOPPED: Shutdown requested
2023-07-23 10:14:33,838 INFO  [RS:2;jenkins-hbase4:45649] regionserver.HRegionServer(1109): Stopping infoServer
2023-07-23 10:14:33,837 INFO  [RS:0;jenkins-hbase4:46313] regionserver.HRegionServer(1109): Stopping infoServer
2023-07-23 10:14:33,839 INFO  [RS:1;jenkins-hbase4:46561] regionserver.HRegionServer(1109): Stopping infoServer
2023-07-23 10:14:33,840 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 063141635a1fa2d615b283545d656db0, disabling compactions & flushes
2023-07-23 10:14:33,842 INFO  [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1690107265326.063141635a1fa2d615b283545d656db0.
2023-07-23 10:14:33,842 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1690107265326.063141635a1fa2d615b283545d656db0.
2023-07-23 10:14:33,843 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1690107265326.063141635a1fa2d615b283545d656db0. after waiting 0 ms
2023-07-23 10:14:33,844 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1690107265326.063141635a1fa2d615b283545d656db0.
2023-07-23 10:14:33,846 INFO  [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 063141635a1fa2d615b283545d656db0 1/1 column families, dataSize=78 B heapSize=488 B
2023-07-23 10:14:33,854 INFO  [RS:2;jenkins-hbase4:45649] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@56f29558{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver}
2023-07-23 10:14:33,854 INFO  [RS:0;jenkins-hbase4:46313] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@64f60122{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver}
2023-07-23 10:14:33,854 INFO  [RS:1;jenkins-hbase4:46561] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@12fc3501{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver}
2023-07-23 10:14:33,860 INFO  [RS:2;jenkins-hbase4:45649] server.AbstractConnector(383): Stopped ServerConnector@45900881{HTTP/1.1, (http/1.1)}{0.0.0.0:0}
2023-07-23 10:14:33,860 INFO  [RS:0;jenkins-hbase4:46313] server.AbstractConnector(383): Stopped ServerConnector@380b280{HTTP/1.1, (http/1.1)}{0.0.0.0:0}
2023-07-23 10:14:33,860 INFO  [RS:0;jenkins-hbase4:46313] session.HouseKeeper(149): node0 Stopped scavenging
2023-07-23 10:14:33,860 INFO  [RS:2;jenkins-hbase4:45649] session.HouseKeeper(149): node0 Stopped scavenging
2023-07-23 10:14:33,861 INFO  [RS:0;jenkins-hbase4:46313] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@34b1c900{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED}
2023-07-23 10:14:33,860 INFO  [RS:1;jenkins-hbase4:46561] server.AbstractConnector(383): Stopped ServerConnector@2a761dcc{HTTP/1.1, (http/1.1)}{0.0.0.0:0}
2023-07-23 10:14:33,861 INFO  [RS:1;jenkins-hbase4:46561] session.HouseKeeper(149): node0 Stopped scavenging
2023-07-23 10:14:33,862 INFO  [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases
2023-07-23 10:14:33,862 INFO  [RS:2;jenkins-hbase4:45649] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@7deee04d{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED}
2023-07-23 10:14:33,862 INFO  [RS:0;jenkins-hbase4:46313] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@5d4846ef{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-examples/target/test-data/f6d24384-7f1b-af51-948c-7734df9d716e/hadoop.log.dir/,STOPPED}
2023-07-23 10:14:33,863 INFO  [RS:2;jenkins-hbase4:45649] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@5dabd4cf{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-examples/target/test-data/f6d24384-7f1b-af51-948c-7734df9d716e/hadoop.log.dir/,STOPPED}
2023-07-23 10:14:33,862 INFO  [RS:1;jenkins-hbase4:46561] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@704fb744{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED}
2023-07-23 10:14:33,862 INFO  [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting
2023-07-23 10:14:33,864 INFO  [RS:1;jenkins-hbase4:46561] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@642a45f6{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-examples/target/test-data/f6d24384-7f1b-af51-948c-7734df9d716e/hadoop.log.dir/,STOPPED}
2023-07-23 10:14:33,866 INFO  [RS:0;jenkins-hbase4:46313] regionserver.HeapMemoryManager(220): Stopping
2023-07-23 10:14:33,866 INFO  [RS:1;jenkins-hbase4:46561] regionserver.HeapMemoryManager(220): Stopping
2023-07-23 10:14:33,866 INFO  [RS:0;jenkins-hbase4:46313] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully.
2023-07-23 10:14:33,866 INFO  [RS:1;jenkins-hbase4:46561] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully.
2023-07-23 10:14:33,867 INFO  [RS:0;jenkins-hbase4:46313] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully.
2023-07-23 10:14:33,866 INFO  [RS:2;jenkins-hbase4:45649] regionserver.HeapMemoryManager(220): Stopping
2023-07-23 10:14:33,867 INFO  [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting
2023-07-23 10:14:33,867 INFO  [RS:2;jenkins-hbase4:45649] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully.
2023-07-23 10:14:33,867 INFO  [RS:2;jenkins-hbase4:45649] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully.
2023-07-23 10:14:33,867 INFO  [RS:2;jenkins-hbase4:45649] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,45649,1690107262489
2023-07-23 10:14:33,867 INFO  [RS:0;jenkins-hbase4:46313] regionserver.HRegionServer(3305): Received CLOSE for f1d952fb54c89ff06ad39296e8b9a210
2023-07-23 10:14:33,867 INFO  [RS:1;jenkins-hbase4:46561] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully.
2023-07-23 10:14:33,866 INFO  [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting
2023-07-23 10:14:33,867 INFO  [RS:1;jenkins-hbase4:46561] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,46561,1690107262307
2023-07-23 10:14:33,867 DEBUG [RS:2;jenkins-hbase4:45649] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x5f33889d to 127.0.0.1:60205
2023-07-23 10:14:33,868 DEBUG [RS:1;jenkins-hbase4:46561] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x6ea5b49a to 127.0.0.1:60205
2023-07-23 10:14:33,868 DEBUG [RS:2;jenkins-hbase4:45649] ipc.AbstractRpcClient(494): Stopping rpc client
2023-07-23 10:14:33,868 INFO  [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases
2023-07-23 10:14:33,868 INFO  [RS:2;jenkins-hbase4:45649] regionserver.CompactSplit(434): Waiting for Split Thread to finish...
2023-07-23 10:14:33,868 INFO  [RS:2;jenkins-hbase4:45649] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish...
2023-07-23 10:14:33,868 INFO  [RS:2;jenkins-hbase4:45649] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish...
2023-07-23 10:14:33,868 INFO  [RS:2;jenkins-hbase4:45649] regionserver.HRegionServer(3305): Received CLOSE for 1588230740
2023-07-23 10:14:33,868 DEBUG [RS:1;jenkins-hbase4:46561] ipc.AbstractRpcClient(494): Stopping rpc client
2023-07-23 10:14:33,868 INFO  [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases
2023-07-23 10:14:33,868 INFO  [RS:1;jenkins-hbase4:46561] regionserver.HRegionServer(1474): Waiting on 1 regions to close
2023-07-23 10:14:33,869 DEBUG [RS:1;jenkins-hbase4:46561] regionserver.HRegionServer(1478): Online Regions={063141635a1fa2d615b283545d656db0=hbase:namespace,,1690107265326.063141635a1fa2d615b283545d656db0.}
2023-07-23 10:14:33,869 DEBUG [RS:1;jenkins-hbase4:46561] regionserver.HRegionServer(1504): Waiting on 063141635a1fa2d615b283545d656db0
2023-07-23 10:14:33,870 INFO  [RS:0;jenkins-hbase4:46313] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,46313,1690107262103
2023-07-23 10:14:33,870 DEBUG [RS:0;jenkins-hbase4:46313] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x499f79fa to 127.0.0.1:60205
2023-07-23 10:14:33,870 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing f1d952fb54c89ff06ad39296e8b9a210, disabling compactions & flushes
2023-07-23 10:14:33,870 DEBUG [RS:0;jenkins-hbase4:46313] ipc.AbstractRpcClient(494): Stopping rpc client
2023-07-23 10:14:33,870 INFO  [RS:0;jenkins-hbase4:46313] regionserver.HRegionServer(1474): Waiting on 1 regions to close
2023-07-23 10:14:33,870 DEBUG [RS:0;jenkins-hbase4:46313] regionserver.HRegionServer(1478): Online Regions={f1d952fb54c89ff06ad39296e8b9a210=TestCP,,1690107266243.f1d952fb54c89ff06ad39296e8b9a210.}
2023-07-23 10:14:33,870 DEBUG [RS:0;jenkins-hbase4:46313] regionserver.HRegionServer(1504): Waiting on f1d952fb54c89ff06ad39296e8b9a210
2023-07-23 10:14:33,870 INFO  [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region TestCP,,1690107266243.f1d952fb54c89ff06ad39296e8b9a210.
2023-07-23 10:14:33,870 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on TestCP,,1690107266243.f1d952fb54c89ff06ad39296e8b9a210.
2023-07-23 10:14:33,870 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on TestCP,,1690107266243.f1d952fb54c89ff06ad39296e8b9a210. after waiting 0 ms
2023-07-23 10:14:33,870 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region TestCP,,1690107266243.f1d952fb54c89ff06ad39296e8b9a210.
2023-07-23 10:14:33,879 INFO  [RS:2;jenkins-hbase4:45649] regionserver.HRegionServer(1474): Waiting on 1 regions to close
2023-07-23 10:14:33,879 DEBUG [RS:2;jenkins-hbase4:45649] regionserver.HRegionServer(1478): Online Regions={1588230740=hbase:meta,,1.1588230740}
2023-07-23 10:14:33,879 DEBUG [RS:2;jenkins-hbase4:45649] regionserver.HRegionServer(1504): Waiting on 1588230740
2023-07-23 10:14:33,880 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes
2023-07-23 10:14:33,880 INFO  [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740
2023-07-23 10:14:33,883 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740
2023-07-23 10:14:33,883 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms
2023-07-23 10:14:33,884 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740
2023-07-23 10:14:33,884 INFO  [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=2.41 KB heapSize=4.93 KB
2023-07-23 10:14:33,908 INFO  [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=78 B at sequenceid=6 (bloomFilter=true), to=hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/hbase/namespace/063141635a1fa2d615b283545d656db0/.tmp/info/b0e906b6ccf94f3cbc1251428780726c
2023-07-23 10:14:33,932 DEBUG [StoreCloser-TestCP,,1690107266243.f1d952fb54c89ff06ad39296e8b9a210.-1] regionserver.HStore(2712): Moving the files [hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/cf/c1c31871ba994c09a0966780e156215e, hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/cf/f4b91b9b5b584b2483ecfc3363c03172, hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/cf/81080cdcfcfe453c974dbe963920e3ee, hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/cf/0f7f5cbabe3d45a3a018215acfa9b882, hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/cf/0bee2f956adc4238af9bace17b44e5a9, hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/cf/568053a291c54cf4ae40b151ee7ae985, hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/cf/ee945ca2eb4a4bee9dd12e5a6bae05f3, hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/cf/a650378be8fc446b83c9b856e80f788b, hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/cf/bf729376275f44e1acd4d71997d3ede7, hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/cf/a2cb58064b0e40098b487f9d5c594f04, hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/cf/31aa24dc5ab84ff99d3465b93d25ad3b, hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/cf/8f667d7e2428400b9fe5a88c49c07162, hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/cf/f6139c15e3984249996cebc5b93ab027, hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/cf/078135ca10a14589be9c8575b59d3a92, hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/cf/6458127e259d4c0daabda1c1c69cc9b2, hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/cf/2b8f03c3a649439bafb7a61e3c7f9133, hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/cf/add55c59333847efb169a6a2ff688c30,
hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/cf/7df8ee60b926462ab77378f458dcc518, hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/cf/349cc493049442288ee858b6afb542e9, hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/cf/7b4cae49d85d4f3391df25646eb4bfb6, hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/cf/b69c7ca42b2a4d9ba5dce9a3c25b1473, hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/cf/2a4062cbb1c7484da2bf5ea3eeeeb5e7, hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/cf/7ace072f2f584801beba33c8c34735e3, hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/cf/5907376b87fc4f7c8995a5fbfaacb2c7, hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/cf/e027356d026445a2bdc1f1d57d92318d, hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/cf/ed11ea2c5d6b466cbb553558a8a09563, hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/cf/02573aab11aa413ebe60b8746b415589, hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/cf/19faabe2809f4b2c81004775e2d64865, 
hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/cf/a28a3e07d93f472bb2dd0a8294e40979, hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/cf/7d29403460974928bf121ae00ee92d72, hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/cf/8bad4357f3564f9e94febcb32e52f610, hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/cf/40fbdb49b2c04dccb344da4b1c8b2b89, hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/cf/845e90f096bc4c90b3999922d137fa30, hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/cf/9966ca9d07d84001b7315bdd59b99935, hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/cf/64a317c44fb0405fbdaae0606d517dad, hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/cf/cd4bf55986da412281a69c2b80e6a488, hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/cf/384947ae9a95497fa0ee02862d58519e, hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/cf/7af00871314e4c868ea0f21b6b5c44ec, hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/cf/e3f08745062b4c2b9766da0a6d5cff1c, 
hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/cf/ddc0a143c5324db8813701229159edd4, hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/cf/e51bf898f0af4729a3b318ec80677fe3] to archive 2023-07-23 10:14:33,935 DEBUG [StoreCloser-TestCP,,1690107266243.f1d952fb54c89ff06ad39296e8b9a210.-1] backup.HFileArchiver(360): Archiving compacted files. 2023-07-23 10:14:33,961 DEBUG [StoreCloser-TestCP,,1690107266243.f1d952fb54c89ff06ad39296e8b9a210.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/cf/c1c31871ba994c09a0966780e156215e to hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/archive/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/cf/c1c31871ba994c09a0966780e156215e 2023-07-23 10:14:33,962 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/hbase/namespace/063141635a1fa2d615b283545d656db0/.tmp/info/b0e906b6ccf94f3cbc1251428780726c as hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/hbase/namespace/063141635a1fa2d615b283545d656db0/info/b0e906b6ccf94f3cbc1251428780726c 2023-07-23 10:14:33,964 DEBUG [StoreCloser-TestCP,,1690107266243.f1d952fb54c89ff06ad39296e8b9a210.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/cf/f4b91b9b5b584b2483ecfc3363c03172 to 
hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/archive/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/cf/f4b91b9b5b584b2483ecfc3363c03172 2023-07-23 10:14:33,980 DEBUG [StoreCloser-TestCP,,1690107266243.f1d952fb54c89ff06ad39296e8b9a210.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/cf/81080cdcfcfe453c974dbe963920e3ee to hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/archive/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/cf/81080cdcfcfe453c974dbe963920e3ee 2023-07-23 10:14:33,984 DEBUG [StoreCloser-TestCP,,1690107266243.f1d952fb54c89ff06ad39296e8b9a210.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/cf/0f7f5cbabe3d45a3a018215acfa9b882 to hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/archive/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/cf/0f7f5cbabe3d45a3a018215acfa9b882 2023-07-23 10:14:33,987 DEBUG [StoreCloser-TestCP,,1690107266243.f1d952fb54c89ff06ad39296e8b9a210.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/cf/0bee2f956adc4238af9bace17b44e5a9 to hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/archive/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/cf/0bee2f956adc4238af9bace17b44e5a9 2023-07-23 10:14:33,989 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added 
hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/hbase/namespace/063141635a1fa2d615b283545d656db0/info/b0e906b6ccf94f3cbc1251428780726c, entries=2, sequenceid=6, filesize=4.8 K 2023-07-23 10:14:33,992 DEBUG [StoreCloser-TestCP,,1690107266243.f1d952fb54c89ff06ad39296e8b9a210.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/cf/568053a291c54cf4ae40b151ee7ae985 to hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/archive/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/cf/568053a291c54cf4ae40b151ee7ae985 2023-07-23 10:14:33,992 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~78 B/78, heapSize ~472 B/472, currentSize=0 B/0 for 063141635a1fa2d615b283545d656db0 in 146ms, sequenceid=6, compaction requested=false 2023-07-23 10:14:33,996 WARN [DataStreamer for file /user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/hbase/meta/1588230740/.tmp/info/619fd5c5ca0e43f39b191f2813561039] hdfs.DataStreamer(982): Caught exception java.lang.InterruptedException at java.lang.Object.wait(Native Method) at java.lang.Thread.join(Thread.java:1257) at java.lang.Thread.join(Thread.java:1331) at org.apache.hadoop.hdfs.DataStreamer.closeResponder(DataStreamer.java:980) at org.apache.hadoop.hdfs.DataStreamer.endBlock(DataStreamer.java:630) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:807) 2023-07-23 10:14:34,001 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=2.25 KB at sequenceid=14 (bloomFilter=false), to=hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/hbase/meta/1588230740/.tmp/info/619fd5c5ca0e43f39b191f2813561039 2023-07-23 10:14:34,011 DEBUG 
[StoreCloser-TestCP,,1690107266243.f1d952fb54c89ff06ad39296e8b9a210.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/cf/ee945ca2eb4a4bee9dd12e5a6bae05f3 to hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/archive/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/cf/ee945ca2eb4a4bee9dd12e5a6bae05f3 2023-07-23 10:14:34,019 DEBUG [StoreCloser-TestCP,,1690107266243.f1d952fb54c89ff06ad39296e8b9a210.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/cf/a650378be8fc446b83c9b856e80f788b to hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/archive/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/cf/a650378be8fc446b83c9b856e80f788b 2023-07-23 10:14:34,021 DEBUG [StoreCloser-TestCP,,1690107266243.f1d952fb54c89ff06ad39296e8b9a210.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/cf/bf729376275f44e1acd4d71997d3ede7 to hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/archive/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/cf/bf729376275f44e1acd4d71997d3ede7 2023-07-23 10:14:34,024 DEBUG [StoreCloser-TestCP,,1690107266243.f1d952fb54c89ff06ad39296e8b9a210.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/cf/a2cb58064b0e40098b487f9d5c594f04 to 
hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/archive/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/cf/a2cb58064b0e40098b487f9d5c594f04 2023-07-23 10:14:34,026 DEBUG [StoreCloser-TestCP,,1690107266243.f1d952fb54c89ff06ad39296e8b9a210.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/cf/31aa24dc5ab84ff99d3465b93d25ad3b to hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/archive/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/cf/31aa24dc5ab84ff99d3465b93d25ad3b 2023-07-23 10:14:34,029 DEBUG [StoreCloser-TestCP,,1690107266243.f1d952fb54c89ff06ad39296e8b9a210.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/cf/8f667d7e2428400b9fe5a88c49c07162 to hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/archive/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/cf/8f667d7e2428400b9fe5a88c49c07162 2023-07-23 10:14:34,037 DEBUG [StoreCloser-TestCP,,1690107266243.f1d952fb54c89ff06ad39296e8b9a210.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/cf/f6139c15e3984249996cebc5b93ab027 to hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/archive/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/cf/f6139c15e3984249996cebc5b93ab027 2023-07-23 10:14:34,040 DEBUG [StoreCloser-TestCP,,1690107266243.f1d952fb54c89ff06ad39296e8b9a210.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, 
hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/cf/078135ca10a14589be9c8575b59d3a92 to hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/archive/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/cf/078135ca10a14589be9c8575b59d3a92 2023-07-23 10:14:34,050 DEBUG [StoreCloser-TestCP,,1690107266243.f1d952fb54c89ff06ad39296e8b9a210.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/cf/6458127e259d4c0daabda1c1c69cc9b2 to hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/archive/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/cf/6458127e259d4c0daabda1c1c69cc9b2 2023-07-23 10:14:34,053 DEBUG [StoreCloser-TestCP,,1690107266243.f1d952fb54c89ff06ad39296e8b9a210.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/cf/2b8f03c3a649439bafb7a61e3c7f9133 to hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/archive/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/cf/2b8f03c3a649439bafb7a61e3c7f9133 2023-07-23 10:14:34,056 DEBUG [StoreCloser-TestCP,,1690107266243.f1d952fb54c89ff06ad39296e8b9a210.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/cf/add55c59333847efb169a6a2ff688c30 to hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/archive/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/cf/add55c59333847efb169a6a2ff688c30 2023-07-23 10:14:34,058 DEBUG 
[StoreCloser-TestCP,,1690107266243.f1d952fb54c89ff06ad39296e8b9a210.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/cf/7df8ee60b926462ab77378f458dcc518 to hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/archive/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/cf/7df8ee60b926462ab77378f458dcc518 2023-07-23 10:14:34,059 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/hbase/namespace/063141635a1fa2d615b283545d656db0/recovered.edits/9.seqid, newMaxSeqId=9, maxSeqId=1 2023-07-23 10:14:34,060 DEBUG [StoreCloser-TestCP,,1690107266243.f1d952fb54c89ff06ad39296e8b9a210.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/cf/349cc493049442288ee858b6afb542e9 to hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/archive/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/cf/349cc493049442288ee858b6afb542e9 2023-07-23 10:14:34,062 DEBUG [StoreCloser-TestCP,,1690107266243.f1d952fb54c89ff06ad39296e8b9a210.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/cf/7b4cae49d85d4f3391df25646eb4bfb6 to hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/archive/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/cf/7b4cae49d85d4f3391df25646eb4bfb6 2023-07-23 10:14:34,063 DEBUG [StoreCloser-TestCP,,1690107266243.f1d952fb54c89ff06ad39296e8b9a210.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, 
hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/cf/b69c7ca42b2a4d9ba5dce9a3c25b1473 to hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/archive/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/cf/b69c7ca42b2a4d9ba5dce9a3c25b1473 2023-07-23 10:14:34,065 DEBUG [StoreCloser-TestCP,,1690107266243.f1d952fb54c89ff06ad39296e8b9a210.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/cf/2a4062cbb1c7484da2bf5ea3eeeeb5e7 to hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/archive/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/cf/2a4062cbb1c7484da2bf5ea3eeeeb5e7 2023-07-23 10:14:34,067 DEBUG [StoreCloser-TestCP,,1690107266243.f1d952fb54c89ff06ad39296e8b9a210.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/cf/7ace072f2f584801beba33c8c34735e3 to hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/archive/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/cf/7ace072f2f584801beba33c8c34735e3 2023-07-23 10:14:34,069 DEBUG [StoreCloser-TestCP,,1690107266243.f1d952fb54c89ff06ad39296e8b9a210.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/cf/5907376b87fc4f7c8995a5fbfaacb2c7 to hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/archive/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/cf/5907376b87fc4f7c8995a5fbfaacb2c7 2023-07-23 10:14:34,069 DEBUG [RS:1;jenkins-hbase4:46561] regionserver.HRegionServer(1504): 
Waiting on 063141635a1fa2d615b283545d656db0 2023-07-23 10:14:34,070 DEBUG [RS:0;jenkins-hbase4:46313] regionserver.HRegionServer(1504): Waiting on f1d952fb54c89ff06ad39296e8b9a210 2023-07-23 10:14:34,071 DEBUG [StoreCloser-TestCP,,1690107266243.f1d952fb54c89ff06ad39296e8b9a210.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/cf/e027356d026445a2bdc1f1d57d92318d to hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/archive/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/cf/e027356d026445a2bdc1f1d57d92318d 2023-07-23 10:14:34,072 DEBUG [StoreCloser-TestCP,,1690107266243.f1d952fb54c89ff06ad39296e8b9a210.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/cf/ed11ea2c5d6b466cbb553558a8a09563 to hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/archive/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/cf/ed11ea2c5d6b466cbb553558a8a09563 2023-07-23 10:14:34,083 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:namespace,,1690107265326.063141635a1fa2d615b283545d656db0. 2023-07-23 10:14:34,084 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 063141635a1fa2d615b283545d656db0: 2023-07-23 10:14:34,084 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:namespace,,1690107265326.063141635a1fa2d615b283545d656db0. 
2023-07-23 10:14:34,087 DEBUG [RS:2;jenkins-hbase4:45649] regionserver.HRegionServer(1504): Waiting on 1588230740 2023-07-23 10:14:34,087 DEBUG [StoreCloser-TestCP,,1690107266243.f1d952fb54c89ff06ad39296e8b9a210.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/cf/02573aab11aa413ebe60b8746b415589 to hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/archive/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/cf/02573aab11aa413ebe60b8746b415589 2023-07-23 10:14:34,093 DEBUG [StoreCloser-TestCP,,1690107266243.f1d952fb54c89ff06ad39296e8b9a210.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/cf/19faabe2809f4b2c81004775e2d64865 to hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/archive/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/cf/19faabe2809f4b2c81004775e2d64865 2023-07-23 10:14:34,096 DEBUG [StoreCloser-TestCP,,1690107266243.f1d952fb54c89ff06ad39296e8b9a210.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/cf/a28a3e07d93f472bb2dd0a8294e40979 to hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/archive/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/cf/a28a3e07d93f472bb2dd0a8294e40979 2023-07-23 10:14:34,099 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=170 B at sequenceid=14 (bloomFilter=false), 
to=hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/hbase/meta/1588230740/.tmp/table/957147d113ce4becaa3f053f04edb32c 2023-07-23 10:14:34,100 DEBUG [StoreCloser-TestCP,,1690107266243.f1d952fb54c89ff06ad39296e8b9a210.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/cf/7d29403460974928bf121ae00ee92d72 to hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/archive/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/cf/7d29403460974928bf121ae00ee92d72 2023-07-23 10:14:34,102 DEBUG [StoreCloser-TestCP,,1690107266243.f1d952fb54c89ff06ad39296e8b9a210.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/cf/8bad4357f3564f9e94febcb32e52f610 to hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/archive/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/cf/8bad4357f3564f9e94febcb32e52f610 2023-07-23 10:14:34,105 DEBUG [StoreCloser-TestCP,,1690107266243.f1d952fb54c89ff06ad39296e8b9a210.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/cf/40fbdb49b2c04dccb344da4b1c8b2b89 to hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/archive/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/cf/40fbdb49b2c04dccb344da4b1c8b2b89 2023-07-23 10:14:34,107 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/hbase/meta/1588230740/.tmp/info/619fd5c5ca0e43f39b191f2813561039 as 
hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/hbase/meta/1588230740/info/619fd5c5ca0e43f39b191f2813561039
2023-07-23 10:14:34,107 DEBUG [StoreCloser-TestCP,,1690107266243.f1d952fb54c89ff06ad39296e8b9a210.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/cf/845e90f096bc4c90b3999922d137fa30 to hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/archive/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/cf/845e90f096bc4c90b3999922d137fa30
2023-07-23 10:14:34,109 DEBUG [StoreCloser-TestCP,,1690107266243.f1d952fb54c89ff06ad39296e8b9a210.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/cf/9966ca9d07d84001b7315bdd59b99935 to hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/archive/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/cf/9966ca9d07d84001b7315bdd59b99935
2023-07-23 10:14:34,111 DEBUG [StoreCloser-TestCP,,1690107266243.f1d952fb54c89ff06ad39296e8b9a210.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/cf/64a317c44fb0405fbdaae0606d517dad to hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/archive/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/cf/64a317c44fb0405fbdaae0606d517dad
2023-07-23 10:14:34,113 DEBUG [StoreCloser-TestCP,,1690107266243.f1d952fb54c89ff06ad39296e8b9a210.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/cf/cd4bf55986da412281a69c2b80e6a488 to hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/archive/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/cf/cd4bf55986da412281a69c2b80e6a488
2023-07-23 10:14:34,115 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/hbase/meta/1588230740/info/619fd5c5ca0e43f39b191f2813561039, entries=20, sequenceid=14, filesize=6.9 K
2023-07-23 10:14:34,117 DEBUG [StoreCloser-TestCP,,1690107266243.f1d952fb54c89ff06ad39296e8b9a210.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/cf/384947ae9a95497fa0ee02862d58519e to hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/archive/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/cf/384947ae9a95497fa0ee02862d58519e
2023-07-23 10:14:34,117 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/hbase/meta/1588230740/.tmp/table/957147d113ce4becaa3f053f04edb32c as hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/hbase/meta/1588230740/table/957147d113ce4becaa3f053f04edb32c
2023-07-23 10:14:34,121 DEBUG [StoreCloser-TestCP,,1690107266243.f1d952fb54c89ff06ad39296e8b9a210.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/cf/7af00871314e4c868ea0f21b6b5c44ec to hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/archive/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/cf/7af00871314e4c868ea0f21b6b5c44ec
2023-07-23 10:14:34,124 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/hbase/meta/1588230740/table/957147d113ce4becaa3f053f04edb32c, entries=4, sequenceid=14, filesize=4.7 K
2023-07-23 10:14:34,125 DEBUG [StoreCloser-TestCP,,1690107266243.f1d952fb54c89ff06ad39296e8b9a210.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/cf/e3f08745062b4c2b9766da0a6d5cff1c to hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/archive/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/cf/e3f08745062b4c2b9766da0a6d5cff1c
2023-07-23 10:14:34,125 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~2.41 KB/2469, heapSize ~4.65 KB/4760, currentSize=0 B/0 for 1588230740 in 241ms, sequenceid=14, compaction requested=false
2023-07-23 10:14:34,127 DEBUG [StoreCloser-TestCP,,1690107266243.f1d952fb54c89ff06ad39296e8b9a210.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/cf/ddc0a143c5324db8813701229159edd4 to hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/archive/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/cf/ddc0a143c5324db8813701229159edd4
2023-07-23 10:14:34,130 DEBUG [StoreCloser-TestCP,,1690107266243.f1d952fb54c89ff06ad39296e8b9a210.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/cf/e51bf898f0af4729a3b318ec80677fe3 to hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/archive/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/cf/e51bf898f0af4729a3b318ec80677fe3
2023-07-23 10:14:34,176 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/hbase/meta/1588230740/recovered.edits/17.seqid, newMaxSeqId=17, maxSeqId=1
2023-07-23 10:14:34,177 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint
2023-07-23 10:14:34,179 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740
2023-07-23 10:14:34,179 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 1588230740:
2023-07-23 10:14:34,179 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:meta,,1.1588230740
2023-07-23 10:14:34,183 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/data/default/TestCP/f1d952fb54c89ff06ad39296e8b9a210/recovered.edits/10102.seqid, newMaxSeqId=10102, maxSeqId=1
2023-07-23 10:14:34,184 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.example.WriteHeavyIncrementObserver
2023-07-23 10:14:34,186 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed TestCP,,1690107266243.f1d952fb54c89ff06ad39296e8b9a210.
2023-07-23 10:14:34,186 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for f1d952fb54c89ff06ad39296e8b9a210:
2023-07-23 10:14:34,186 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed TestCP,,1690107266243.f1d952fb54c89ff06ad39296e8b9a210.
2023-07-23 10:14:34,270 INFO [RS:1;jenkins-hbase4:46561] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,46561,1690107262307; all regions closed.
2023-07-23 10:14:34,271 INFO [RS:0;jenkins-hbase4:46313] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,46313,1690107262103; all regions closed.
2023-07-23 10:14:34,287 INFO [RS:2;jenkins-hbase4:45649] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,45649,1690107262489; all regions closed.
2023-07-23 10:14:34,296 DEBUG [RS:0;jenkins-hbase4:46313] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/oldWALs
2023-07-23 10:14:34,296 INFO [RS:0;jenkins-hbase4:46313] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C46313%2C1690107262103:(num 1690107264775)
2023-07-23 10:14:34,296 DEBUG [RS:0;jenkins-hbase4:46313] ipc.AbstractRpcClient(494): Stopping rpc client
2023-07-23 10:14:34,296 INFO [RS:0;jenkins-hbase4:46313] regionserver.LeaseManager(133): Closed leases
2023-07-23 10:14:34,296 INFO [RS:0;jenkins-hbase4:46313] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown
2023-07-23 10:14:34,296 INFO [RS:0;jenkins-hbase4:46313] regionserver.CompactSplit(434): Waiting for Split Thread to finish...
2023-07-23 10:14:34,297 INFO [RS:0;jenkins-hbase4:46313] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish...
2023-07-23 10:14:34,297 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting.
2023-07-23 10:14:34,297 INFO [RS:0;jenkins-hbase4:46313] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish...
2023-07-23 10:14:34,297 DEBUG [RS:1;jenkins-hbase4:46561] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/oldWALs
2023-07-23 10:14:34,297 INFO [RS:1;jenkins-hbase4:46561] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C46561%2C1690107262307:(num 1690107264775)
2023-07-23 10:14:34,297 DEBUG [RS:1;jenkins-hbase4:46561] ipc.AbstractRpcClient(494): Stopping rpc client
2023-07-23 10:14:34,297 INFO [RS:1;jenkins-hbase4:46561] regionserver.LeaseManager(133): Closed leases
2023-07-23 10:14:34,297 INFO [RS:1;jenkins-hbase4:46561] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown
2023-07-23 10:14:34,301 INFO [RS:0;jenkins-hbase4:46313] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:46313
2023-07-23 10:14:34,302 INFO [RS:1;jenkins-hbase4:46561] regionserver.CompactSplit(434): Waiting for Split Thread to finish...
2023-07-23 10:14:34,302 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting.
2023-07-23 10:14:34,302 INFO [RS:1;jenkins-hbase4:46561] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish...
2023-07-23 10:14:34,302 INFO [RS:1;jenkins-hbase4:46561] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish...
2023-07-23 10:14:34,304 INFO [RS:1;jenkins-hbase4:46561] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:46561
2023-07-23 10:14:34,306 WARN [Close-WAL-Writer-0] asyncfs.FanOutOneBlockAsyncDFSOutputHelper(641): complete file /user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/WALs/jenkins-hbase4.apache.org,45649,1690107262489/jenkins-hbase4.apache.org%2C45649%2C1690107262489.meta.1690107264981.meta not finished, retry = 0
2023-07-23 10:14:34,314 DEBUG [Listener at localhost/34007-EventThread] zookeeper.ZKWatcher(600): regionserver:46313-0x10191acac940001, quorum=127.0.0.1:60205, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,46313,1690107262103
2023-07-23 10:14:34,314 DEBUG [Listener at localhost/34007-EventThread] zookeeper.ZKWatcher(600): regionserver:46561-0x10191acac940002, quorum=127.0.0.1:60205, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,46313,1690107262103
2023-07-23 10:14:34,314 DEBUG [Listener at localhost/34007-EventThread] zookeeper.ZKWatcher(600): master:34669-0x10191acac940000, quorum=127.0.0.1:60205, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs
2023-07-23 10:14:34,314 DEBUG [Listener at localhost/34007-EventThread] zookeeper.ZKWatcher(600): regionserver:45649-0x10191acac940003, quorum=127.0.0.1:60205, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,46313,1690107262103
2023-07-23 10:14:34,314 DEBUG [Listener at localhost/34007-EventThread] zookeeper.ZKWatcher(600): regionserver:46561-0x10191acac940002, quorum=127.0.0.1:60205, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs
2023-07-23 10:14:34,314 DEBUG [Listener at localhost/34007-EventThread] zookeeper.ZKWatcher(600): regionserver:46313-0x10191acac940001, quorum=127.0.0.1:60205, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs
2023-07-23 10:14:34,314 DEBUG [Listener at localhost/34007-EventThread] zookeeper.ZKWatcher(600): regionserver:45649-0x10191acac940003, quorum=127.0.0.1:60205, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs
2023-07-23 10:14:34,315 DEBUG [Listener at localhost/34007-EventThread] zookeeper.ZKWatcher(600): regionserver:45649-0x10191acac940003, quorum=127.0.0.1:60205, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,46561,1690107262307
2023-07-23 10:14:34,315 DEBUG [Listener at localhost/34007-EventThread] zookeeper.ZKWatcher(600): regionserver:46313-0x10191acac940001, quorum=127.0.0.1:60205, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,46561,1690107262307
2023-07-23 10:14:34,315 DEBUG [Listener at localhost/34007-EventThread] zookeeper.ZKWatcher(600): regionserver:46561-0x10191acac940002, quorum=127.0.0.1:60205, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,46561,1690107262307
2023-07-23 10:14:34,316 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,46561,1690107262307]
2023-07-23 10:14:34,316 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,46561,1690107262307; numProcessing=1
2023-07-23 10:14:34,320 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,46561,1690107262307 already deleted, retry=false
2023-07-23 10:14:34,320 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,46561,1690107262307 expired; onlineServers=2
2023-07-23 10:14:34,320 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,46313,1690107262103]
2023-07-23 10:14:34,320 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,46313,1690107262103; numProcessing=2
2023-07-23 10:14:34,415 DEBUG [RS:2;jenkins-hbase4:45649] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/oldWALs
2023-07-23 10:14:34,415 INFO [RS:2;jenkins-hbase4:45649] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C45649%2C1690107262489.meta:.meta(num 1690107264981)
2023-07-23 10:14:34,420 DEBUG [Listener at localhost/34007-EventThread] zookeeper.ZKWatcher(600): regionserver:46561-0x10191acac940002, quorum=127.0.0.1:60205, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null
2023-07-23 10:14:34,420 DEBUG [Listener at localhost/34007-EventThread] zookeeper.ZKWatcher(600): regionserver:46561-0x10191acac940002, quorum=127.0.0.1:60205, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null
2023-07-23 10:14:34,420 INFO [RS:1;jenkins-hbase4:46561] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,46561,1690107262307; zookeeper connection closed.
2023-07-23 10:14:34,421 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,46313,1690107262103 already deleted, retry=false
2023-07-23 10:14:34,421 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,46313,1690107262103 expired; onlineServers=1
2023-07-23 10:14:34,440 DEBUG [Listener at localhost/34007-EventThread] zookeeper.ZKWatcher(600): regionserver:46313-0x10191acac940001, quorum=127.0.0.1:60205, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null
2023-07-23 10:14:34,440 DEBUG [Listener at localhost/34007-EventThread] zookeeper.ZKWatcher(600): regionserver:46313-0x10191acac940001, quorum=127.0.0.1:60205, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null
2023-07-23 10:14:34,440 INFO [RS:0;jenkins-hbase4:46313] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,46313,1690107262103; zookeeper connection closed.
2023-07-23 10:14:34,442 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@19f0951e] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@19f0951e
2023-07-23 10:14:34,443 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@abe3c09] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@abe3c09
2023-07-23 10:14:34,456 WARN [Close-WAL-Writer-0] asyncfs.FanOutOneBlockAsyncDFSOutputHelper(641): complete file /user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/WALs/jenkins-hbase4.apache.org,45649,1690107262489/jenkins-hbase4.apache.org%2C45649%2C1690107262489.1690107264777 not finished, retry = 0
2023-07-23 10:14:34,559 DEBUG [RS:2;jenkins-hbase4:45649] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/oldWALs
2023-07-23 10:14:34,559 INFO [RS:2;jenkins-hbase4:45649] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C45649%2C1690107262489:(num 1690107264777)
2023-07-23 10:14:34,559 DEBUG [RS:2;jenkins-hbase4:45649] ipc.AbstractRpcClient(494): Stopping rpc client
2023-07-23 10:14:34,559 INFO [RS:2;jenkins-hbase4:45649] regionserver.LeaseManager(133): Closed leases
2023-07-23 10:14:34,560 INFO [RS:2;jenkins-hbase4:45649] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown
2023-07-23 10:14:34,561 INFO [RS:2;jenkins-hbase4:45649] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:45649
2023-07-23 10:14:34,562 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting.
2023-07-23 10:14:34,564 DEBUG [Listener at localhost/34007-EventThread] zookeeper.ZKWatcher(600): regionserver:45649-0x10191acac940003, quorum=127.0.0.1:60205, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,45649,1690107262489
2023-07-23 10:14:34,564 DEBUG [Listener at localhost/34007-EventThread] zookeeper.ZKWatcher(600): master:34669-0x10191acac940000, quorum=127.0.0.1:60205, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs
2023-07-23 10:14:34,567 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,45649,1690107262489]
2023-07-23 10:14:34,567 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,45649,1690107262489; numProcessing=3
2023-07-23 10:14:34,569 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,45649,1690107262489 already deleted, retry=false
2023-07-23 10:14:34,569 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,45649,1690107262489 expired; onlineServers=0
2023-07-23 10:14:34,569 INFO [RegionServerTracker-0] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,34669,1690107260184' *****
2023-07-23 10:14:34,569 INFO [RegionServerTracker-0] regionserver.HRegionServer(2311): STOPPED: Cluster shutdown set; onlineServer=0
2023-07-23 10:14:34,573 DEBUG [M:0;jenkins-hbase4:34669] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@14ec9898, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0
2023-07-23 10:14:34,573 INFO [M:0;jenkins-hbase4:34669] regionserver.HRegionServer(1109): Stopping infoServer
2023-07-23 10:14:34,581 INFO [M:0;jenkins-hbase4:34669] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@42024fc3{master,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/master}
2023-07-23 10:14:34,581 INFO [M:0;jenkins-hbase4:34669] server.AbstractConnector(383): Stopped ServerConnector@6f4a5cb0{HTTP/1.1, (http/1.1)}{0.0.0.0:0}
2023-07-23 10:14:34,581 INFO [M:0;jenkins-hbase4:34669] session.HouseKeeper(149): node0 Stopped scavenging
2023-07-23 10:14:34,582 INFO [M:0;jenkins-hbase4:34669] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@cea58e2{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED}
2023-07-23 10:14:34,583 INFO [M:0;jenkins-hbase4:34669] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@4d32a63c{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-examples/target/test-data/f6d24384-7f1b-af51-948c-7734df9d716e/hadoop.log.dir/,STOPPED}
2023-07-23 10:14:34,584 INFO [M:0;jenkins-hbase4:34669] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,34669,1690107260184
2023-07-23 10:14:34,584 INFO [M:0;jenkins-hbase4:34669] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,34669,1690107260184; all regions closed.
2023-07-23 10:14:34,584 DEBUG [M:0;jenkins-hbase4:34669] ipc.AbstractRpcClient(494): Stopping rpc client
2023-07-23 10:14:34,584 INFO [M:0;jenkins-hbase4:34669] master.HMaster(1491): Stopping master jetty server
2023-07-23 10:14:34,585 DEBUG [Listener at localhost/34007-EventThread] zookeeper.ZKWatcher(600): master:34669-0x10191acac940000, quorum=127.0.0.1:60205, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/master
2023-07-23 10:14:34,586 DEBUG [Listener at localhost/34007-EventThread] zookeeper.ZKWatcher(600): master:34669-0x10191acac940000, quorum=127.0.0.1:60205, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase
2023-07-23 10:14:34,586 INFO [M:0;jenkins-hbase4:34669] server.AbstractConnector(383): Stopped ServerConnector@3146e66c{HTTP/1.1, (http/1.1)}{0.0.0.0:0}
2023-07-23 10:14:34,586 DEBUG [M:0;jenkins-hbase4:34669] cleaner.LogCleaner(198): Cancelling LogCleaner
2023-07-23 10:14:34,586 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:34669-0x10191acac940000, quorum=127.0.0.1:60205, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master
2023-07-23 10:14:34,586 WARN [OldWALsCleaner-0] cleaner.LogCleaner(186): Interrupted while cleaning old WALs, will try to clean it next round. Exiting.
2023-07-23 10:14:34,586 DEBUG [M:0;jenkins-hbase4:34669] cleaner.HFileCleaner(317): Stopping file delete threads
2023-07-23 10:14:34,586 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1690107264330] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1690107264330,5,FailOnTimeoutGroup]
2023-07-23 10:14:34,587 INFO [M:0;jenkins-hbase4:34669] master.MasterMobCompactionThread(168): Waiting for Mob Compaction Thread to finish...
2023-07-23 10:14:34,587 INFO [M:0;jenkins-hbase4:34669] master.MasterMobCompactionThread(168): Waiting for Region Server Mob Compaction Thread to finish...
2023-07-23 10:14:34,586 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1690107264339] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1690107264339,5,FailOnTimeoutGroup]
2023-07-23 10:14:34,587 INFO [M:0;jenkins-hbase4:34669] hbase.ChoreService(369): Chore service for: master/jenkins-hbase4:0 had [] on shutdown
2023-07-23 10:14:34,587 DEBUG [M:0;jenkins-hbase4:34669] master.HMaster(1512): Stopping service threads
2023-07-23 10:14:34,587 INFO [M:0;jenkins-hbase4:34669] procedure2.RemoteProcedureDispatcher(119): Stopping procedure remote dispatcher
2023-07-23 10:14:34,588 ERROR [M:0;jenkins-hbase4:34669] procedure2.ProcedureExecutor(653): ThreadGroup java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] contains running threads; null: See STDOUT java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] Thread[HFileArchiver-1,5,PEWorkerGroup] Thread[HFileArchiver-2,5,PEWorkerGroup]
2023-07-23 10:14:34,588 INFO [M:0;jenkins-hbase4:34669] region.RegionProcedureStore(113): Stopping the Region Procedure Store, isAbort=false
2023-07-23 10:14:34,588 DEBUG [normalizer-worker-0] normalizer.RegionNormalizerWorker(174): interrupt detected. terminating.
2023-07-23 10:14:34,589 DEBUG [M:0;jenkins-hbase4:34669] zookeeper.ZKUtil(398): master:34669-0x10191acac940000, quorum=127.0.0.1:60205, baseZNode=/hbase Unable to get data of znode /hbase/master because node does not exist (not an error)
2023-07-23 10:14:34,589 WARN [M:0;jenkins-hbase4:34669] master.ActiveMasterManager(326): Failed get of master address: java.io.IOException: Can't get master address from ZooKeeper; znode data == null
2023-07-23 10:14:34,589 INFO [M:0;jenkins-hbase4:34669] assignment.AssignmentManager(315): Stopping assignment manager
2023-07-23 10:14:34,589 INFO [M:0;jenkins-hbase4:34669] region.MasterRegion(167): Closing local region {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, isAbort=false
2023-07-23 10:14:34,589 DEBUG [M:0;jenkins-hbase4:34669] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes
2023-07-23 10:14:34,590 INFO [M:0;jenkins-hbase4:34669] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682.
2023-07-23 10:14:34,590 DEBUG [M:0;jenkins-hbase4:34669] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682.
2023-07-23 10:14:34,590 DEBUG [M:0;jenkins-hbase4:34669] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms
2023-07-23 10:14:34,590 DEBUG [M:0;jenkins-hbase4:34669] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682.
2023-07-23 10:14:34,590 INFO [M:0;jenkins-hbase4:34669] regionserver.HRegion(2745): Flushing 1595e783b53d99cd5eef43b6debb2682 1/1 column families, dataSize=37.93 KB heapSize=45.59 KB
2023-07-23 10:14:34,608 INFO [M:0;jenkins-hbase4:34669] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=37.93 KB at sequenceid=91 (bloomFilter=true), to=hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/4a0723b3b84444f4a3e6d4a52d5037e1
2023-07-23 10:14:34,620 DEBUG [M:0;jenkins-hbase4:34669] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/4a0723b3b84444f4a3e6d4a52d5037e1 as hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/4a0723b3b84444f4a3e6d4a52d5037e1
2023-07-23 10:14:34,626 INFO [M:0;jenkins-hbase4:34669] regionserver.HStore(1080): Added hdfs://localhost:35371/user/jenkins/test-data/76e9b0ea-8b5e-ed12-9cda-f23fff5c3b8d/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/4a0723b3b84444f4a3e6d4a52d5037e1, entries=11, sequenceid=91, filesize=7.1 K
2023-07-23 10:14:34,627 INFO [M:0;jenkins-hbase4:34669] regionserver.HRegion(2948): Finished flush of dataSize ~37.93 KB/38844, heapSize ~45.57 KB/46664, currentSize=0 B/0 for 1595e783b53d99cd5eef43b6debb2682 in 37ms, sequenceid=91, compaction requested=false
2023-07-23 10:14:34,629 INFO [M:0;jenkins-hbase4:34669] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682.
2023-07-23 10:14:34,629 DEBUG [M:0;jenkins-hbase4:34669] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682:
2023-07-23 10:14:34,641 INFO [master:store-WAL-Roller] wal.AbstractWALRoller(243): LogRoller exiting.
2023-07-23 10:14:34,641 INFO [M:0;jenkins-hbase4:34669] flush.MasterFlushTableProcedureManager(83): stop: server shutting down.
2023-07-23 10:14:34,641 INFO [M:0;jenkins-hbase4:34669] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:34669
2023-07-23 10:14:34,644 DEBUG [M:0;jenkins-hbase4:34669] zookeeper.RecoverableZooKeeper(172): Node /hbase/rs/jenkins-hbase4.apache.org,34669,1690107260184 already deleted, retry=false
2023-07-23 10:14:34,667 DEBUG [Listener at localhost/34007-EventThread] zookeeper.ZKWatcher(600): regionserver:45649-0x10191acac940003, quorum=127.0.0.1:60205, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null
2023-07-23 10:14:34,667 INFO [RS:2;jenkins-hbase4:45649] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,45649,1690107262489; zookeeper connection closed.
2023-07-23 10:14:34,667 DEBUG [Listener at localhost/34007-EventThread] zookeeper.ZKWatcher(600): regionserver:45649-0x10191acac940003, quorum=127.0.0.1:60205, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null
2023-07-23 10:14:34,668 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@5ec041b7] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@5ec041b7
2023-07-23 10:14:34,668 INFO [Listener at localhost/34007] util.JVMClusterUtil(335): Shutdown of 1 master(s) and 3 regionserver(s) complete
2023-07-23 10:14:34,767 DEBUG [Listener at localhost/34007-EventThread] zookeeper.ZKWatcher(600): master:34669-0x10191acac940000, quorum=127.0.0.1:60205, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null
2023-07-23 10:14:34,767 INFO [M:0;jenkins-hbase4:34669] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,34669,1690107260184; zookeeper connection closed.
2023-07-23 10:14:34,767 DEBUG [Listener at localhost/34007-EventThread] zookeeper.ZKWatcher(600): master:34669-0x10191acac940000, quorum=127.0.0.1:60205, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null
2023-07-23 10:14:34,769 WARN [Listener at localhost/34007] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called
2023-07-23 10:14:34,774 INFO [Listener at localhost/34007] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0
2023-07-23 10:14:34,879 WARN [BP-390822221-172.31.14.131-1690107256171 heartbeating to localhost/127.0.0.1:35371] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted
2023-07-23 10:14:34,879 WARN [BP-390822221-172.31.14.131-1690107256171 heartbeating to localhost/127.0.0.1:35371] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-390822221-172.31.14.131-1690107256171 (Datanode Uuid 761557c1-a685-43d1-9930-9fa03170d606) service to localhost/127.0.0.1:35371
2023-07-23 10:14:34,881 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-examples/target/test-data/f6d24384-7f1b-af51-948c-7734df9d716e/cluster_3605e493-c4ff-9248-8bc1-c2484cdc5541/dfs/data/data5/current/BP-390822221-172.31.14.131-1690107256171] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted
2023-07-23 10:14:34,881 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-examples/target/test-data/f6d24384-7f1b-af51-948c-7734df9d716e/cluster_3605e493-c4ff-9248-8bc1-c2484cdc5541/dfs/data/data6/current/BP-390822221-172.31.14.131-1690107256171] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted
2023-07-23 10:14:34,883 WARN [Listener at localhost/34007] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called
2023-07-23 10:14:34,890 INFO [Listener at localhost/34007] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0
2023-07-23 10:14:34,994 WARN [BP-390822221-172.31.14.131-1690107256171 heartbeating to localhost/127.0.0.1:35371] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted
2023-07-23 10:14:34,995 WARN [BP-390822221-172.31.14.131-1690107256171 heartbeating to localhost/127.0.0.1:35371] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-390822221-172.31.14.131-1690107256171 (Datanode Uuid c55724ba-a955-4469-9ba1-0e7c2f75465a) service to localhost/127.0.0.1:35371
2023-07-23 10:14:34,995 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-examples/target/test-data/f6d24384-7f1b-af51-948c-7734df9d716e/cluster_3605e493-c4ff-9248-8bc1-c2484cdc5541/dfs/data/data3/current/BP-390822221-172.31.14.131-1690107256171] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted
2023-07-23 10:14:34,996 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-examples/target/test-data/f6d24384-7f1b-af51-948c-7734df9d716e/cluster_3605e493-c4ff-9248-8bc1-c2484cdc5541/dfs/data/data4/current/BP-390822221-172.31.14.131-1690107256171] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted
2023-07-23 10:14:34,998 WARN [Listener at localhost/34007] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called
2023-07-23 10:14:35,000 INFO [Listener at localhost/34007] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0
2023-07-23 10:14:35,006 WARN [BP-390822221-172.31.14.131-1690107256171 heartbeating to localhost/127.0.0.1:35371] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted
2023-07-23 10:14:35,006 WARN [BP-390822221-172.31.14.131-1690107256171 heartbeating to localhost/127.0.0.1:35371] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-390822221-172.31.14.131-1690107256171 (Datanode Uuid f3954b8f-2f0a-4be4-9cbc-88621755376b) service to localhost/127.0.0.1:35371
2023-07-23 10:14:35,007 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-examples/target/test-data/f6d24384-7f1b-af51-948c-7734df9d716e/cluster_3605e493-c4ff-9248-8bc1-c2484cdc5541/dfs/data/data1/current/BP-390822221-172.31.14.131-1690107256171] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted
2023-07-23 10:14:35,007 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-examples/target/test-data/f6d24384-7f1b-af51-948c-7734df9d716e/cluster_3605e493-c4ff-9248-8bc1-c2484cdc5541/dfs/data/data2/current/BP-390822221-172.31.14.131-1690107256171] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted
2023-07-23 10:14:35,042 INFO [Listener at localhost/34007] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0
2023-07-23 10:14:35,068 INFO [Listener at localhost/34007] zookeeper.MiniZooKeeperCluster(344): Shutdown MiniZK cluster with all ZK servers
2023-07-23 10:14:35,162 INFO [Listener at localhost/34007] hbase.HBaseTestingUtility(1293): Minicluster is down