2023-07-21 13:20:06,940 DEBUG [main] hbase.HBaseTestingUtility(342): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-examples/target/test-data/e5d9fc39-6d34-4875-d01d-25cebd90870c 2023-07-21 13:20:06,953 INFO [main] hbase.HBaseClassTestRule(94): Test class org.apache.hadoop.hbase.coprocessor.example.TestWriteHeavyIncrementObserver timeout: 13 mins 2023-07-21 13:20:06,967 INFO [Time-limited test] hbase.HBaseTestingUtility(1068): Starting up minicluster with option: StartMiniClusterOption{numMasters=1, masterClass=null, numRegionServers=3, rsPorts=, rsClass=null, numDataNodes=3, dataNodeHosts=null, numZkServers=1, createRootDir=false, createWALDir=false} 2023-07-21 13:20:06,968 INFO [Time-limited test] hbase.HBaseZKTestingUtility(82): Created new mini-cluster data directory: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-examples/target/test-data/e5d9fc39-6d34-4875-d01d-25cebd90870c/cluster_95b46e95-e414-0ee8-e75c-8409d7ae530b, deleteOnExit=true 2023-07-21 13:20:06,968 INFO [Time-limited test] hbase.HBaseTestingUtility(1082): STARTING DFS 2023-07-21 13:20:06,969 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting test.cache.data to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-examples/target/test-data/e5d9fc39-6d34-4875-d01d-25cebd90870c/test.cache.data in system properties and HBase conf 2023-07-21 13:20:06,969 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting hadoop.tmp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-examples/target/test-data/e5d9fc39-6d34-4875-d01d-25cebd90870c/hadoop.tmp.dir in system properties and HBase conf 2023-07-21 13:20:06,970 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting hadoop.log.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-examples/target/test-data/e5d9fc39-6d34-4875-d01d-25cebd90870c/hadoop.log.dir in system properties and HBase conf 2023-07-21 13:20:06,970 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.local.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-examples/target/test-data/e5d9fc39-6d34-4875-d01d-25cebd90870c/mapreduce.cluster.local.dir in system properties and HBase conf 2023-07-21 13:20:06,971 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.temp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-examples/target/test-data/e5d9fc39-6d34-4875-d01d-25cebd90870c/mapreduce.cluster.temp.dir in system properties and HBase conf 2023-07-21 13:20:06,971 INFO [Time-limited test] hbase.HBaseTestingUtility(759): read short circuit is OFF 2023-07-21 13:20:07,133 WARN [Time-limited test] util.NativeCodeLoader(62): Unable to load native-hadoop library for your platform... using builtin-java classes where applicable 2023-07-21 13:20:07,606 DEBUG [Time-limited test] fs.HFileSystem(308): The file system is not a DistributedFileSystem. 
Skipping on block location reordering 2023-07-21 13:20:07,613 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.node-labels.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-examples/target/test-data/e5d9fc39-6d34-4875-d01d-25cebd90870c/yarn.node-labels.fs-store.root-dir in system properties and HBase conf 2023-07-21 13:20:07,613 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.node-attribute.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-examples/target/test-data/e5d9fc39-6d34-4875-d01d-25cebd90870c/yarn.node-attribute.fs-store.root-dir in system properties and HBase conf 2023-07-21 13:20:07,614 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.log-dirs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-examples/target/test-data/e5d9fc39-6d34-4875-d01d-25cebd90870c/yarn.nodemanager.log-dirs in system properties and HBase conf 2023-07-21 13:20:07,614 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-examples/target/test-data/e5d9fc39-6d34-4875-d01d-25cebd90870c/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-07-21 13:20:07,614 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.active-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-examples/target/test-data/e5d9fc39-6d34-4875-d01d-25cebd90870c/yarn.timeline-service.entity-group-fs-store.active-dir in system properties and HBase conf 2023-07-21 13:20:07,615 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.done-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-examples/target/test-data/e5d9fc39-6d34-4875-d01d-25cebd90870c/yarn.timeline-service.entity-group-fs-store.done-dir in system properties and HBase conf 2023-07-21 13:20:07,615 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-examples/target/test-data/e5d9fc39-6d34-4875-d01d-25cebd90870c/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-07-21 13:20:07,616 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-examples/target/test-data/e5d9fc39-6d34-4875-d01d-25cebd90870c/dfs.journalnode.edits.dir in system properties and HBase conf 2023-07-21 13:20:07,616 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting dfs.datanode.shared.file.descriptor.paths to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-examples/target/test-data/e5d9fc39-6d34-4875-d01d-25cebd90870c/dfs.datanode.shared.file.descriptor.paths in system properties and HBase conf 2023-07-21 13:20:07,616 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting nfs.dump.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-examples/target/test-data/e5d9fc39-6d34-4875-d01d-25cebd90870c/nfs.dump.dir in system properties and HBase conf 2023-07-21 13:20:07,617 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting java.io.tmpdir to 
/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-examples/target/test-data/e5d9fc39-6d34-4875-d01d-25cebd90870c/java.io.tmpdir in system properties and HBase conf 2023-07-21 13:20:07,617 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-examples/target/test-data/e5d9fc39-6d34-4875-d01d-25cebd90870c/dfs.journalnode.edits.dir in system properties and HBase conf 2023-07-21 13:20:07,617 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting dfs.provided.aliasmap.inmemory.leveldb.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-examples/target/test-data/e5d9fc39-6d34-4875-d01d-25cebd90870c/dfs.provided.aliasmap.inmemory.leveldb.dir in system properties and HBase conf 2023-07-21 13:20:07,618 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting fs.s3a.committer.staging.tmp.path to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-examples/target/test-data/e5d9fc39-6d34-4875-d01d-25cebd90870c/fs.s3a.committer.staging.tmp.path in system properties and HBase conf Formatting using clusterid: testClusterID 2023-07-21 13:20:08,267 WARN [Time-limited test] conf.Configuration(1701): No unit for dfs.heartbeat.interval(3) assuming SECONDS 2023-07-21 13:20:08,270 WARN [Time-limited test] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-07-21 13:20:09,015 WARN [Time-limited test] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-namenode.properties,hadoop-metrics2.properties 2023-07-21 13:20:09,318 INFO [Time-limited test] log.Slf4jLog(67): Logging to org.slf4j.impl.Reload4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog 2023-07-21 13:20:09,348 WARN [Time-limited test] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-21 13:20:09,402 INFO [Time-limited test] log.Slf4jLog(67): jetty-6.1.26 2023-07-21 13:20:09,443 INFO [Time-limited test] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/hdfs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-examples/target/test-data/e5d9fc39-6d34-4875-d01d-25cebd90870c/java.io.tmpdir/Jetty_localhost_localdomain_44083_hdfs____a1tbwl/webapp 2023-07-21 13:20:09,589 INFO [Time-limited test] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost.localdomain:44083 2023-07-21 13:20:09,623 WARN [Time-limited test] conf.Configuration(1701): No unit for dfs.heartbeat.interval(3) assuming SECONDS 2023-07-21 13:20:09,623 WARN [Time-limited test] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-07-21 13:20:10,242 WARN [Listener at localhost.localdomain/43421] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-21 13:20:10,349 WARN [Listener at localhost.localdomain/43421] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-21 13:20:10,374 WARN [Listener at localhost.localdomain/43421] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-21 13:20:10,382 INFO [Listener at localhost.localdomain/43421] log.Slf4jLog(67): jetty-6.1.26 2023-07-21 13:20:10,390 INFO [Listener at 
localhost.localdomain/43421] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-examples/target/test-data/e5d9fc39-6d34-4875-d01d-25cebd90870c/java.io.tmpdir/Jetty_localhost_40907_datanode____.9j347p/webapp 2023-07-21 13:20:10,507 INFO [Listener at localhost.localdomain/43421] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:40907 2023-07-21 13:20:10,908 WARN [Listener at localhost.localdomain/45253] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-21 13:20:11,004 WARN [Listener at localhost.localdomain/45253] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-21 13:20:11,008 WARN [Listener at localhost.localdomain/45253] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-21 13:20:11,010 INFO [Listener at localhost.localdomain/45253] log.Slf4jLog(67): jetty-6.1.26 2023-07-21 13:20:11,018 INFO [Listener at localhost.localdomain/45253] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-examples/target/test-data/e5d9fc39-6d34-4875-d01d-25cebd90870c/java.io.tmpdir/Jetty_localhost_43169_datanode____.a2bjd0/webapp 2023-07-21 13:20:11,129 INFO [Listener at localhost.localdomain/45253] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:43169 2023-07-21 13:20:11,141 WARN [Listener at localhost.localdomain/34867] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-21 13:20:11,163 WARN [Listener at localhost.localdomain/34867] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-21 13:20:11,170 WARN [Listener at localhost.localdomain/34867] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-21 13:20:11,172 INFO [Listener at localhost.localdomain/34867] log.Slf4jLog(67): jetty-6.1.26 2023-07-21 13:20:11,189 INFO [Listener at localhost.localdomain/34867] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-examples/target/test-data/e5d9fc39-6d34-4875-d01d-25cebd90870c/java.io.tmpdir/Jetty_localhost_36483_datanode____tev3kj/webapp 2023-07-21 13:20:11,281 INFO [Listener at localhost.localdomain/34867] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:36483 2023-07-21 13:20:11,289 WARN [Listener at localhost.localdomain/36547] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-21 13:20:12,459 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x17209b2a9a2d731: Processing first storage report for DS-e277dd3e-c5f5-4531-aaa7-60b50737ae77 from datanode b5f0925d-ca2e-43cc-a0f1-405edcfdcfe2 2023-07-21 13:20:12,460 INFO [Block report processor] 
blockmanagement.BlockManager(2228): BLOCK* processReport 0x17209b2a9a2d731: from storage DS-e277dd3e-c5f5-4531-aaa7-60b50737ae77 node DatanodeRegistration(127.0.0.1:44461, datanodeUuid=b5f0925d-ca2e-43cc-a0f1-405edcfdcfe2, infoPort=36873, infoSecurePort=0, ipcPort=45253, storageInfo=lv=-57;cid=testClusterID;nsid=35718142;c=1689945608330), blocks: 0, hasStaleStorage: true, processing time: 1 msecs, invalidatedBlocks: 0 2023-07-21 13:20:12,460 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xb4103e270c5f7b64: Processing first storage report for DS-1f748205-1567-42fb-bfb1-399a5d715114 from datanode c007261b-aac0-48ab-a6f4-89117298d36b 2023-07-21 13:20:12,460 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xb4103e270c5f7b64: from storage DS-1f748205-1567-42fb-bfb1-399a5d715114 node DatanodeRegistration(127.0.0.1:34567, datanodeUuid=c007261b-aac0-48ab-a6f4-89117298d36b, infoPort=46381, infoSecurePort=0, ipcPort=34867, storageInfo=lv=-57;cid=testClusterID;nsid=35718142;c=1689945608330), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-21 13:20:12,460 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x2c3949af14f2faf7: Processing first storage report for DS-1a78b274-4530-4d2f-8ed7-d62ddf857176 from datanode 0a09a4fb-0bb2-423c-9922-8302db1fb4b9 2023-07-21 13:20:12,461 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x2c3949af14f2faf7: from storage DS-1a78b274-4530-4d2f-8ed7-d62ddf857176 node DatanodeRegistration(127.0.0.1:33219, datanodeUuid=0a09a4fb-0bb2-423c-9922-8302db1fb4b9, infoPort=41869, infoSecurePort=0, ipcPort=36547, storageInfo=lv=-57;cid=testClusterID;nsid=35718142;c=1689945608330), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-21 13:20:12,461 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x17209b2a9a2d731: Processing first storage report for DS-acbef042-e5fe-4b0e-8441-d07e290058cb from datanode b5f0925d-ca2e-43cc-a0f1-405edcfdcfe2 2023-07-21 13:20:12,461 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x17209b2a9a2d731: from storage DS-acbef042-e5fe-4b0e-8441-d07e290058cb node DatanodeRegistration(127.0.0.1:44461, datanodeUuid=b5f0925d-ca2e-43cc-a0f1-405edcfdcfe2, infoPort=36873, infoSecurePort=0, ipcPort=45253, storageInfo=lv=-57;cid=testClusterID;nsid=35718142;c=1689945608330), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-21 13:20:12,461 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xb4103e270c5f7b64: Processing first storage report for DS-9f49c940-a037-4d8a-ab27-02919386cf8d from datanode c007261b-aac0-48ab-a6f4-89117298d36b 2023-07-21 13:20:12,461 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xb4103e270c5f7b64: from storage DS-9f49c940-a037-4d8a-ab27-02919386cf8d node DatanodeRegistration(127.0.0.1:34567, datanodeUuid=c007261b-aac0-48ab-a6f4-89117298d36b, infoPort=46381, infoSecurePort=0, ipcPort=34867, storageInfo=lv=-57;cid=testClusterID;nsid=35718142;c=1689945608330), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-21 13:20:12,461 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x2c3949af14f2faf7: Processing first storage report for 
DS-0b092d46-224a-4d3d-86c0-f06de478f2cd from datanode 0a09a4fb-0bb2-423c-9922-8302db1fb4b9 2023-07-21 13:20:12,461 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x2c3949af14f2faf7: from storage DS-0b092d46-224a-4d3d-86c0-f06de478f2cd node DatanodeRegistration(127.0.0.1:33219, datanodeUuid=0a09a4fb-0bb2-423c-9922-8302db1fb4b9, infoPort=41869, infoSecurePort=0, ipcPort=36547, storageInfo=lv=-57;cid=testClusterID;nsid=35718142;c=1689945608330), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-21 13:20:12,518 DEBUG [Listener at localhost.localdomain/36547] hbase.HBaseTestingUtility(649): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-examples/target/test-data/e5d9fc39-6d34-4875-d01d-25cebd90870c 2023-07-21 13:20:12,616 INFO [Listener at localhost.localdomain/36547] zookeeper.MiniZooKeeperCluster(258): Started connectionTimeout=30000, dir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-examples/target/test-data/e5d9fc39-6d34-4875-d01d-25cebd90870c/cluster_95b46e95-e414-0ee8-e75c-8409d7ae530b/zookeeper_0, clientPort=61652, secureClientPort=-1, dataDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-examples/target/test-data/e5d9fc39-6d34-4875-d01d-25cebd90870c/cluster_95b46e95-e414-0ee8-e75c-8409d7ae530b/zookeeper_0/version-2, dataDirSize=424 dataLogDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-examples/target/test-data/e5d9fc39-6d34-4875-d01d-25cebd90870c/cluster_95b46e95-e414-0ee8-e75c-8409d7ae530b/zookeeper_0/version-2, dataLogSize=424 tickTime=2000, maxClientCnxns=300, minSessionTimeout=4000, maxSessionTimeout=40000, serverId=0 2023-07-21 13:20:12,636 INFO [Listener at localhost.localdomain/36547] zookeeper.MiniZooKeeperCluster(283): Started MiniZooKeeperCluster and ran 'stat' on client port=61652 2023-07-21 13:20:12,643 INFO [Listener at localhost.localdomain/36547] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-21 13:20:12,646 INFO [Listener at localhost.localdomain/36547] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-21 13:20:13,308 INFO [Listener at localhost.localdomain/36547] util.FSUtils(471): Created version file at hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff with version=8 2023-07-21 13:20:13,309 INFO [Listener at localhost.localdomain/36547] hbase.HBaseTestingUtility(1406): Setting hbase.fs.tmp.dir to hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/hbase-staging 2023-07-21 13:20:13,329 DEBUG [Listener at localhost.localdomain/36547] hbase.LocalHBaseCluster(134): Setting Master Port to random. 2023-07-21 13:20:13,329 DEBUG [Listener at localhost.localdomain/36547] hbase.LocalHBaseCluster(141): Setting RegionServer Port to random. 2023-07-21 13:20:13,329 DEBUG [Listener at localhost.localdomain/36547] hbase.LocalHBaseCluster(151): Setting RS InfoServer Port to random. 2023-07-21 13:20:13,329 DEBUG [Listener at localhost.localdomain/36547] hbase.LocalHBaseCluster(159): Setting Master InfoServer Port to random. 
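Up to here the harness has started the mini DFS with three datanodes and a single-node MiniZooKeeperCluster on client port 61652, and is switching the local HBase cluster to random ports. The sketch below shows how a test typically requests this topology through HBaseTestingUtility; the option values mirror the StartMiniClusterOption entry at the top of the log, while the wrapper class around them is illustrative only.

import org.apache.hadoop.hbase.HBaseTestingUtility;
import org.apache.hadoop.hbase.StartMiniClusterOption;

public class MiniClusterStartupSketch {
  // Shared harness; the log above is emitted while startMiniCluster() runs.
  private static final HBaseTestingUtility UTIL = new HBaseTestingUtility();

  public static void main(String[] args) throws Exception {
    // Mirrors StartMiniClusterOption{numMasters=1, numRegionServers=3,
    // numDataNodes=3, numZkServers=1} from the first log entry.
    StartMiniClusterOption option = StartMiniClusterOption.builder()
        .numMasters(1)
        .numRegionServers(3)
        .numDataNodes(3)
        .numZkServers(1)
        .build();
    UTIL.startMiniCluster(option);   // starts DFS, ZooKeeper, master and region servers
    try {
      // ... test body would run here ...
    } finally {
      UTIL.shutdownMiniCluster();    // tears down everything started above
    }
  }
}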
2023-07-21 13:20:13,639 INFO [Listener at localhost.localdomain/36547] metrics.MetricRegistriesLoader(60): Loaded MetricRegistries class org.apache.hadoop.hbase.metrics.impl.MetricRegistriesImpl 2023-07-21 13:20:14,122 INFO [Listener at localhost.localdomain/36547] client.ConnectionUtils(127): master/jenkins-hbase16:0 server-side Connection retries=45 2023-07-21 13:20:14,149 INFO [Listener at localhost.localdomain/36547] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-21 13:20:14,150 INFO [Listener at localhost.localdomain/36547] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-21 13:20:14,150 INFO [Listener at localhost.localdomain/36547] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-21 13:20:14,150 INFO [Listener at localhost.localdomain/36547] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-21 13:20:14,150 INFO [Listener at localhost.localdomain/36547] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-21 13:20:14,291 INFO [Listener at localhost.localdomain/36547] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.MasterService, hbase.pb.RegionServerStatusService, hbase.pb.LockService, hbase.pb.HbckService, hbase.pb.ClientMetaService, hbase.pb.ClientService, hbase.pb.AdminService 2023-07-21 13:20:14,353 DEBUG [Listener at localhost.localdomain/36547] util.ClassSize(228): Using Unsafe to estimate memory layout 2023-07-21 13:20:14,454 INFO [Listener at localhost.localdomain/36547] ipc.NettyRpcServer(120): Bind to /188.40.62.62:40019 2023-07-21 13:20:14,464 INFO [Listener at localhost.localdomain/36547] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-21 13:20:14,466 INFO [Listener at localhost.localdomain/36547] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-21 13:20:14,484 INFO [Listener at localhost.localdomain/36547] zookeeper.RecoverableZooKeeper(93): Process identifier=master:40019 connecting to ZooKeeper ensemble=127.0.0.1:61652 2023-07-21 13:20:14,588 DEBUG [Listener at localhost.localdomain/36547-EventThread] zookeeper.ZKWatcher(600): master:400190x0, quorum=127.0.0.1:61652, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-21 13:20:14,591 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): master:40019-0x1018809df7a0000 connected 2023-07-21 13:20:14,661 DEBUG [Listener at localhost.localdomain/36547] zookeeper.ZKUtil(164): master:40019-0x1018809df7a0000, quorum=127.0.0.1:61652, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-21 13:20:14,662 DEBUG [Listener at localhost.localdomain/36547] zookeeper.ZKUtil(164): master:40019-0x1018809df7a0000, quorum=127.0.0.1:61652, baseZNode=/hbase Set watcher 
on znode that does not yet exist, /hbase/running 2023-07-21 13:20:14,666 DEBUG [Listener at localhost.localdomain/36547] zookeeper.ZKUtil(164): master:40019-0x1018809df7a0000, quorum=127.0.0.1:61652, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-21 13:20:14,674 DEBUG [Listener at localhost.localdomain/36547] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=40019 2023-07-21 13:20:14,674 DEBUG [Listener at localhost.localdomain/36547] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=40019 2023-07-21 13:20:14,675 DEBUG [Listener at localhost.localdomain/36547] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=40019 2023-07-21 13:20:14,675 DEBUG [Listener at localhost.localdomain/36547] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=40019 2023-07-21 13:20:14,675 DEBUG [Listener at localhost.localdomain/36547] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=40019 2023-07-21 13:20:14,721 INFO [Listener at localhost.localdomain/36547] log.Log(170): Logging initialized @8536ms to org.apache.hbase.thirdparty.org.eclipse.jetty.util.log.Slf4jLog 2023-07-21 13:20:14,877 INFO [Listener at localhost.localdomain/36547] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-21 13:20:14,878 INFO [Listener at localhost.localdomain/36547] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-21 13:20:14,878 INFO [Listener at localhost.localdomain/36547] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-21 13:20:14,880 INFO [Listener at localhost.localdomain/36547] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context master 2023-07-21 13:20:14,880 INFO [Listener at localhost.localdomain/36547] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-21 13:20:14,881 INFO [Listener at localhost.localdomain/36547] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-21 13:20:14,883 INFO [Listener at localhost.localdomain/36547] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
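The repeated "Set watcher on znode that does not yet exist" entries are the master registering ZooKeeper watches on /hbase/master, /hbase/running and /hbase/acl before those nodes are created. HBase does this through its own ZKWatcher/ZKUtil wrappers; the sketch below uses the plain Apache ZooKeeper client against the same ensemble address from the log purely to illustrate what an exists-with-watch call of that kind does, and is not the code HBase itself runs.

import java.util.concurrent.CountDownLatch;
import org.apache.zookeeper.WatchedEvent;
import org.apache.zookeeper.Watcher;
import org.apache.zookeeper.ZooKeeper;
import org.apache.zookeeper.data.Stat;

public class ZnodeWatchSketch {
  public static void main(String[] args) throws Exception {
    CountDownLatch connected = new CountDownLatch(1);
    // Ensemble address taken from the log; the session timeout is illustrative.
    ZooKeeper zk = new ZooKeeper("127.0.0.1:61652", 30_000, (WatchedEvent e) -> {
      if (e.getState() == Watcher.Event.KeeperState.SyncConnected) {
        connected.countDown();
      }
      // NodeCreated/NodeDeleted events for watched paths also arrive here.
      System.out.println("ZK event: " + e);
    });
    connected.await();

    // exists() with watch=true behaves like the ZKUtil calls in the log:
    // it returns null if the znode is absent but still leaves a watch behind.
    for (String path : new String[] {"/hbase/master", "/hbase/running", "/hbase/acl"}) {
      Stat stat = zk.exists(path, true);
      System.out.println(path + " -> " + (stat == null ? "not yet created (watch set)" : "exists"));
    }
    zk.close();
  }
}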
2023-07-21 13:20:14,959 INFO [Listener at localhost.localdomain/36547] http.HttpServer(1146): Jetty bound to port 35009 2023-07-21 13:20:14,962 INFO [Listener at localhost.localdomain/36547] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-21 13:20:15,002 INFO [Listener at localhost.localdomain/36547] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 13:20:15,005 INFO [Listener at localhost.localdomain/36547] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@55824320{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-examples/target/test-data/e5d9fc39-6d34-4875-d01d-25cebd90870c/hadoop.log.dir/,AVAILABLE} 2023-07-21 13:20:15,006 INFO [Listener at localhost.localdomain/36547] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 13:20:15,006 INFO [Listener at localhost.localdomain/36547] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@d35e35c{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-21 13:20:15,186 INFO [Listener at localhost.localdomain/36547] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-21 13:20:15,201 INFO [Listener at localhost.localdomain/36547] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-21 13:20:15,201 INFO [Listener at localhost.localdomain/36547] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-21 13:20:15,204 INFO [Listener at localhost.localdomain/36547] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-21 13:20:15,215 INFO [Listener at localhost.localdomain/36547] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 13:20:15,245 INFO [Listener at localhost.localdomain/36547] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@5e70675{master,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-examples/target/test-data/e5d9fc39-6d34-4875-d01d-25cebd90870c/java.io.tmpdir/jetty-0_0_0_0-35009-hbase-server-2_4_18-SNAPSHOT_jar-_-any-7328153172163776897/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/master} 2023-07-21 13:20:15,262 INFO [Listener at localhost.localdomain/36547] server.AbstractConnector(333): Started ServerConnector@70928ca1{HTTP/1.1, (http/1.1)}{0.0.0.0:35009} 2023-07-21 13:20:15,263 INFO [Listener at localhost.localdomain/36547] server.Server(415): Started @9078ms 2023-07-21 13:20:15,267 INFO [Listener at localhost.localdomain/36547] master.HMaster(444): hbase.rootdir=hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff, hbase.cluster.distributed=false 2023-07-21 13:20:15,331 INFO [Listener at localhost.localdomain/36547] client.ConnectionUtils(127): regionserver/jenkins-hbase16:0 server-side Connection retries=45 2023-07-21 13:20:15,331 INFO [Listener at localhost.localdomain/36547] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-21 13:20:15,332 INFO [Listener 
at localhost.localdomain/36547] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-21 13:20:15,332 INFO [Listener at localhost.localdomain/36547] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-21 13:20:15,332 INFO [Listener at localhost.localdomain/36547] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-21 13:20:15,332 INFO [Listener at localhost.localdomain/36547] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-21 13:20:15,338 INFO [Listener at localhost.localdomain/36547] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-21 13:20:15,342 INFO [Listener at localhost.localdomain/36547] ipc.NettyRpcServer(120): Bind to /188.40.62.62:39771 2023-07-21 13:20:15,344 INFO [Listener at localhost.localdomain/36547] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-21 13:20:15,352 DEBUG [Listener at localhost.localdomain/36547] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-21 13:20:15,354 INFO [Listener at localhost.localdomain/36547] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-21 13:20:15,356 INFO [Listener at localhost.localdomain/36547] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-21 13:20:15,358 INFO [Listener at localhost.localdomain/36547] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:39771 connecting to ZooKeeper ensemble=127.0.0.1:61652 2023-07-21 13:20:15,368 DEBUG [Listener at localhost.localdomain/36547-EventThread] zookeeper.ZKWatcher(600): regionserver:397710x0, quorum=127.0.0.1:61652, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-21 13:20:15,369 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:39771-0x1018809df7a0001 connected 2023-07-21 13:20:15,369 DEBUG [Listener at localhost.localdomain/36547] zookeeper.ZKUtil(164): regionserver:39771-0x1018809df7a0001, quorum=127.0.0.1:61652, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-21 13:20:15,373 DEBUG [Listener at localhost.localdomain/36547] zookeeper.ZKUtil(164): regionserver:39771-0x1018809df7a0001, quorum=127.0.0.1:61652, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-21 13:20:15,374 DEBUG [Listener at localhost.localdomain/36547] zookeeper.ZKUtil(164): regionserver:39771-0x1018809df7a0001, quorum=127.0.0.1:61652, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-21 13:20:15,374 DEBUG [Listener at localhost.localdomain/36547] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=39771 2023-07-21 13:20:15,375 DEBUG [Listener at 
localhost.localdomain/36547] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=39771 2023-07-21 13:20:15,375 DEBUG [Listener at localhost.localdomain/36547] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=39771 2023-07-21 13:20:15,377 DEBUG [Listener at localhost.localdomain/36547] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=39771 2023-07-21 13:20:15,377 DEBUG [Listener at localhost.localdomain/36547] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=39771 2023-07-21 13:20:15,379 INFO [Listener at localhost.localdomain/36547] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-21 13:20:15,379 INFO [Listener at localhost.localdomain/36547] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-21 13:20:15,379 INFO [Listener at localhost.localdomain/36547] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-21 13:20:15,381 INFO [Listener at localhost.localdomain/36547] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-21 13:20:15,381 INFO [Listener at localhost.localdomain/36547] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-21 13:20:15,381 INFO [Listener at localhost.localdomain/36547] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-21 13:20:15,381 INFO [Listener at localhost.localdomain/36547] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
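The "Allocating BlockCache size=782.40 MB" and "MobFileCache enabled with cacheSize=1000" entries above come from sizing the on-heap block cache as a fraction of the JVM heap (hfile.block.cache.size, 0.4 by default) and from the MOB file cache defaults. A test that needs different sizing can adjust the shared configuration before the cluster starts; the keys below are standard HBase keys, but the values are examples only and not taken from this run.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class CacheSizingSketch {
  public static void main(String[] args) {
    Configuration conf = HBaseConfiguration.create();

    // Fraction of the JVM heap given to the block cache (default 0.4);
    // 0.4 of a roughly 2 GB test heap is what yields a ~782 MB cache.
    conf.setFloat("hfile.block.cache.size", 0.2f);

    // Upper bound on the fraction of heap used by memstores (default 0.4).
    conf.setFloat("hbase.regionserver.global.memstore.size", 0.3f);

    // Number of open MOB files kept cached (the log shows cacheSize=1000).
    conf.setInt("hbase.mob.file.cache.size", 1000);

    System.out.println("block cache fraction = " + conf.getFloat("hfile.block.cache.size", 0.4f));
  }
}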
2023-07-21 13:20:15,383 INFO [Listener at localhost.localdomain/36547] http.HttpServer(1146): Jetty bound to port 37441 2023-07-21 13:20:15,384 INFO [Listener at localhost.localdomain/36547] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-21 13:20:15,393 INFO [Listener at localhost.localdomain/36547] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 13:20:15,393 INFO [Listener at localhost.localdomain/36547] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@58a6ace9{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-examples/target/test-data/e5d9fc39-6d34-4875-d01d-25cebd90870c/hadoop.log.dir/,AVAILABLE} 2023-07-21 13:20:15,394 INFO [Listener at localhost.localdomain/36547] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 13:20:15,394 INFO [Listener at localhost.localdomain/36547] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@51b50322{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-21 13:20:15,516 INFO [Listener at localhost.localdomain/36547] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-21 13:20:15,518 INFO [Listener at localhost.localdomain/36547] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-21 13:20:15,518 INFO [Listener at localhost.localdomain/36547] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-21 13:20:15,518 INFO [Listener at localhost.localdomain/36547] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-21 13:20:15,523 INFO [Listener at localhost.localdomain/36547] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 13:20:15,528 INFO [Listener at localhost.localdomain/36547] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@32f5c360{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-examples/target/test-data/e5d9fc39-6d34-4875-d01d-25cebd90870c/java.io.tmpdir/jetty-0_0_0_0-37441-hbase-server-2_4_18-SNAPSHOT_jar-_-any-8679942632357827882/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-21 13:20:15,529 INFO [Listener at localhost.localdomain/36547] server.AbstractConnector(333): Started ServerConnector@202ec085{HTTP/1.1, (http/1.1)}{0.0.0.0:37441} 2023-07-21 13:20:15,529 INFO [Listener at localhost.localdomain/36547] server.Server(415): Started @9345ms 2023-07-21 13:20:15,546 INFO [Listener at localhost.localdomain/36547] client.ConnectionUtils(127): regionserver/jenkins-hbase16:0 server-side Connection retries=45 2023-07-21 13:20:15,546 INFO [Listener at localhost.localdomain/36547] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-21 13:20:15,546 INFO [Listener at localhost.localdomain/36547] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-21 
13:20:15,547 INFO [Listener at localhost.localdomain/36547] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-21 13:20:15,547 INFO [Listener at localhost.localdomain/36547] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-21 13:20:15,547 INFO [Listener at localhost.localdomain/36547] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-21 13:20:15,547 INFO [Listener at localhost.localdomain/36547] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-21 13:20:15,549 INFO [Listener at localhost.localdomain/36547] ipc.NettyRpcServer(120): Bind to /188.40.62.62:37511 2023-07-21 13:20:15,549 INFO [Listener at localhost.localdomain/36547] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-21 13:20:15,554 DEBUG [Listener at localhost.localdomain/36547] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-21 13:20:15,556 INFO [Listener at localhost.localdomain/36547] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-21 13:20:15,558 INFO [Listener at localhost.localdomain/36547] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-21 13:20:15,560 INFO [Listener at localhost.localdomain/36547] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:37511 connecting to ZooKeeper ensemble=127.0.0.1:61652 2023-07-21 13:20:15,576 DEBUG [Listener at localhost.localdomain/36547-EventThread] zookeeper.ZKWatcher(600): regionserver:375110x0, quorum=127.0.0.1:61652, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-21 13:20:15,581 DEBUG [Listener at localhost.localdomain/36547] zookeeper.ZKUtil(164): regionserver:375110x0, quorum=127.0.0.1:61652, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-21 13:20:15,583 DEBUG [Listener at localhost.localdomain/36547] zookeeper.ZKUtil(164): regionserver:375110x0, quorum=127.0.0.1:61652, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-21 13:20:15,584 DEBUG [Listener at localhost.localdomain/36547] zookeeper.ZKUtil(164): regionserver:375110x0, quorum=127.0.0.1:61652, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-21 13:20:15,588 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:37511-0x1018809df7a0002 connected 2023-07-21 13:20:15,593 DEBUG [Listener at localhost.localdomain/36547] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=37511 2023-07-21 13:20:15,593 DEBUG [Listener at localhost.localdomain/36547] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=37511 2023-07-21 13:20:15,594 DEBUG [Listener at localhost.localdomain/36547] ipc.RpcExecutor(311): Started handlerCount=3 with 
threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=37511 2023-07-21 13:20:15,597 DEBUG [Listener at localhost.localdomain/36547] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=37511 2023-07-21 13:20:15,602 DEBUG [Listener at localhost.localdomain/36547] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=37511 2023-07-21 13:20:15,605 INFO [Listener at localhost.localdomain/36547] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-21 13:20:15,606 INFO [Listener at localhost.localdomain/36547] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-21 13:20:15,606 INFO [Listener at localhost.localdomain/36547] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-21 13:20:15,607 INFO [Listener at localhost.localdomain/36547] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-21 13:20:15,607 INFO [Listener at localhost.localdomain/36547] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-21 13:20:15,607 INFO [Listener at localhost.localdomain/36547] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-21 13:20:15,607 INFO [Listener at localhost.localdomain/36547] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
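The recurring "server-side Connection retries=45" and "Started handlerCount=..." entries reflect standard RPC tuning keys: 45 is the client retry count (default 15) multiplied by the server-side retries multiplier (default 3), and the small handler pools suggest the handler count was reduced for the mini cluster. A sketch of the relevant keys follows; the values are examples, and the exact keys this particular run overrides are not shown in the log.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class RpcTuningSketch {
  public static void main(String[] args) {
    Configuration conf = HBaseConfiguration.create();

    // RPC handler threads per region server; the mini cluster in the log
    // runs with a much smaller pool than the production default of 30.
    conf.setInt("hbase.regionserver.handler.count", 3);

    // Client retries; servers multiply this by the server-side multiplier
    // (default 3), which is how "retries=45" arises from a base of 15.
    conf.setInt("hbase.client.retries.number", 15);
    conf.setInt("hbase.client.serverside.retries.multiplier", 3);

    int serverSideRetries = conf.getInt("hbase.client.retries.number", 15)
        * conf.getInt("hbase.client.serverside.retries.multiplier", 3);
    System.out.println("server-side retries = " + serverSideRetries); // 45
  }
}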
2023-07-21 13:20:15,608 INFO [Listener at localhost.localdomain/36547] http.HttpServer(1146): Jetty bound to port 44937 2023-07-21 13:20:15,608 INFO [Listener at localhost.localdomain/36547] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-21 13:20:15,618 INFO [Listener at localhost.localdomain/36547] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 13:20:15,618 INFO [Listener at localhost.localdomain/36547] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@742fee4f{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-examples/target/test-data/e5d9fc39-6d34-4875-d01d-25cebd90870c/hadoop.log.dir/,AVAILABLE} 2023-07-21 13:20:15,619 INFO [Listener at localhost.localdomain/36547] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 13:20:15,619 INFO [Listener at localhost.localdomain/36547] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@9b82400{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-21 13:20:15,741 INFO [Listener at localhost.localdomain/36547] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-21 13:20:15,742 INFO [Listener at localhost.localdomain/36547] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-21 13:20:15,742 INFO [Listener at localhost.localdomain/36547] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-21 13:20:15,743 INFO [Listener at localhost.localdomain/36547] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-21 13:20:15,748 INFO [Listener at localhost.localdomain/36547] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 13:20:15,749 INFO [Listener at localhost.localdomain/36547] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@51fef4ca{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-examples/target/test-data/e5d9fc39-6d34-4875-d01d-25cebd90870c/java.io.tmpdir/jetty-0_0_0_0-44937-hbase-server-2_4_18-SNAPSHOT_jar-_-any-1489679501858621283/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-21 13:20:15,750 INFO [Listener at localhost.localdomain/36547] server.AbstractConnector(333): Started ServerConnector@70d3ed6{HTTP/1.1, (http/1.1)}{0.0.0.0:44937} 2023-07-21 13:20:15,750 INFO [Listener at localhost.localdomain/36547] server.Server(415): Started @9566ms 2023-07-21 13:20:15,760 INFO [Listener at localhost.localdomain/36547] client.ConnectionUtils(127): regionserver/jenkins-hbase16:0 server-side Connection retries=45 2023-07-21 13:20:15,761 INFO [Listener at localhost.localdomain/36547] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-21 13:20:15,761 INFO [Listener at localhost.localdomain/36547] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-21 
13:20:15,761 INFO [Listener at localhost.localdomain/36547] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-21 13:20:15,761 INFO [Listener at localhost.localdomain/36547] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-21 13:20:15,761 INFO [Listener at localhost.localdomain/36547] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-21 13:20:15,761 INFO [Listener at localhost.localdomain/36547] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-21 13:20:15,763 INFO [Listener at localhost.localdomain/36547] ipc.NettyRpcServer(120): Bind to /188.40.62.62:41329 2023-07-21 13:20:15,763 INFO [Listener at localhost.localdomain/36547] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-21 13:20:15,764 DEBUG [Listener at localhost.localdomain/36547] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-21 13:20:15,765 INFO [Listener at localhost.localdomain/36547] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-21 13:20:15,766 INFO [Listener at localhost.localdomain/36547] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-21 13:20:15,768 INFO [Listener at localhost.localdomain/36547] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:41329 connecting to ZooKeeper ensemble=127.0.0.1:61652 2023-07-21 13:20:15,776 DEBUG [Listener at localhost.localdomain/36547-EventThread] zookeeper.ZKWatcher(600): regionserver:413290x0, quorum=127.0.0.1:61652, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-21 13:20:15,777 DEBUG [Listener at localhost.localdomain/36547] zookeeper.ZKUtil(164): regionserver:413290x0, quorum=127.0.0.1:61652, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-21 13:20:15,778 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:41329-0x1018809df7a0003 connected 2023-07-21 13:20:15,778 DEBUG [Listener at localhost.localdomain/36547] zookeeper.ZKUtil(164): regionserver:41329-0x1018809df7a0003, quorum=127.0.0.1:61652, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-21 13:20:15,779 DEBUG [Listener at localhost.localdomain/36547] zookeeper.ZKUtil(164): regionserver:41329-0x1018809df7a0003, quorum=127.0.0.1:61652, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-21 13:20:15,780 DEBUG [Listener at localhost.localdomain/36547] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=41329 2023-07-21 13:20:15,780 DEBUG [Listener at localhost.localdomain/36547] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=41329 2023-07-21 13:20:15,782 DEBUG [Listener at localhost.localdomain/36547] ipc.RpcExecutor(311): Started 
handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=41329 2023-07-21 13:20:15,783 DEBUG [Listener at localhost.localdomain/36547] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=41329 2023-07-21 13:20:15,783 DEBUG [Listener at localhost.localdomain/36547] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=41329 2023-07-21 13:20:15,786 INFO [Listener at localhost.localdomain/36547] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-21 13:20:15,786 INFO [Listener at localhost.localdomain/36547] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-21 13:20:15,786 INFO [Listener at localhost.localdomain/36547] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-21 13:20:15,787 INFO [Listener at localhost.localdomain/36547] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-21 13:20:15,787 INFO [Listener at localhost.localdomain/36547] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-21 13:20:15,787 INFO [Listener at localhost.localdomain/36547] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-21 13:20:15,787 INFO [Listener at localhost.localdomain/36547] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
2023-07-21 13:20:15,788 INFO [Listener at localhost.localdomain/36547] http.HttpServer(1146): Jetty bound to port 44301 2023-07-21 13:20:15,788 INFO [Listener at localhost.localdomain/36547] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-21 13:20:15,789 INFO [Listener at localhost.localdomain/36547] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 13:20:15,790 INFO [Listener at localhost.localdomain/36547] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@5cae4174{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-examples/target/test-data/e5d9fc39-6d34-4875-d01d-25cebd90870c/hadoop.log.dir/,AVAILABLE} 2023-07-21 13:20:15,790 INFO [Listener at localhost.localdomain/36547] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 13:20:15,790 INFO [Listener at localhost.localdomain/36547] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@3a52f866{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-21 13:20:15,887 INFO [Listener at localhost.localdomain/36547] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-21 13:20:15,888 INFO [Listener at localhost.localdomain/36547] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-21 13:20:15,888 INFO [Listener at localhost.localdomain/36547] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-21 13:20:15,888 INFO [Listener at localhost.localdomain/36547] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-21 13:20:15,890 INFO [Listener at localhost.localdomain/36547] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-21 13:20:15,891 INFO [Listener at localhost.localdomain/36547] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@675ed0da{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-examples/target/test-data/e5d9fc39-6d34-4875-d01d-25cebd90870c/java.io.tmpdir/jetty-0_0_0_0-44301-hbase-server-2_4_18-SNAPSHOT_jar-_-any-5661603390576253356/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-21 13:20:15,892 INFO [Listener at localhost.localdomain/36547] server.AbstractConnector(333): Started ServerConnector@c8ff561{HTTP/1.1, (http/1.1)}{0.0.0.0:44301} 2023-07-21 13:20:15,892 INFO [Listener at localhost.localdomain/36547] server.Server(415): Started @9708ms 2023-07-21 13:20:15,901 INFO [master/jenkins-hbase16:0:becomeActiveMaster] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-21 13:20:15,940 INFO [master/jenkins-hbase16:0:becomeActiveMaster] server.AbstractConnector(333): Started ServerConnector@2cb0d232{HTTP/1.1, (http/1.1)}{0.0.0.0:45543} 2023-07-21 13:20:15,940 INFO [master/jenkins-hbase16:0:becomeActiveMaster] server.Server(415): Started @9756ms 2023-07-21 13:20:15,940 INFO [master/jenkins-hbase16:0:becomeActiveMaster] master.HMaster(2168): Adding backup master ZNode 
/hbase/backup-masters/jenkins-hbase16.apache.org,40019,1689945613483 2023-07-21 13:20:15,951 DEBUG [Listener at localhost.localdomain/36547-EventThread] zookeeper.ZKWatcher(600): master:40019-0x1018809df7a0000, quorum=127.0.0.1:61652, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-07-21 13:20:15,953 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:40019-0x1018809df7a0000, quorum=127.0.0.1:61652, baseZNode=/hbase Set watcher on existing znode=/hbase/backup-masters/jenkins-hbase16.apache.org,40019,1689945613483 2023-07-21 13:20:15,976 DEBUG [Listener at localhost.localdomain/36547-EventThread] zookeeper.ZKWatcher(600): regionserver:39771-0x1018809df7a0001, quorum=127.0.0.1:61652, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-21 13:20:15,976 DEBUG [Listener at localhost.localdomain/36547-EventThread] zookeeper.ZKWatcher(600): regionserver:41329-0x1018809df7a0003, quorum=127.0.0.1:61652, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-21 13:20:15,976 DEBUG [Listener at localhost.localdomain/36547-EventThread] zookeeper.ZKWatcher(600): master:40019-0x1018809df7a0000, quorum=127.0.0.1:61652, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-21 13:20:15,976 DEBUG [Listener at localhost.localdomain/36547-EventThread] zookeeper.ZKWatcher(600): regionserver:37511-0x1018809df7a0002, quorum=127.0.0.1:61652, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-21 13:20:15,977 DEBUG [Listener at localhost.localdomain/36547-EventThread] zookeeper.ZKWatcher(600): master:40019-0x1018809df7a0000, quorum=127.0.0.1:61652, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-21 13:20:15,977 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:40019-0x1018809df7a0000, quorum=127.0.0.1:61652, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-21 13:20:15,979 INFO [master/jenkins-hbase16:0:becomeActiveMaster] master.ActiveMasterManager(227): Deleting ZNode for /hbase/backup-masters/jenkins-hbase16.apache.org,40019,1689945613483 from backup master directory 2023-07-21 13:20:15,979 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): master:40019-0x1018809df7a0000, quorum=127.0.0.1:61652, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-21 13:20:15,992 DEBUG [Listener at localhost.localdomain/36547-EventThread] zookeeper.ZKWatcher(600): master:40019-0x1018809df7a0000, quorum=127.0.0.1:61652, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/backup-masters/jenkins-hbase16.apache.org,40019,1689945613483 2023-07-21 13:20:15,993 DEBUG [Listener at localhost.localdomain/36547-EventThread] zookeeper.ZKWatcher(600): master:40019-0x1018809df7a0000, quorum=127.0.0.1:61652, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-07-21 13:20:15,994 WARN [master/jenkins-hbase16:0:becomeActiveMaster] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
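
The ZKWatcher DEBUG lines above trace active-master election: NodeCreated on /hbase/master, then deletion of the backup-masters entry, with every participant holding watches on those znodes. A minimal sketch of watching the same paths with the plain ZooKeeper client API; the connect string, session timeout, and znode paths are taken from the log, everything else is illustrative and is not HBase's internal ZKWatcher.

    import org.apache.zookeeper.WatchedEvent;
    import org.apache.zookeeper.Watcher;
    import org.apache.zookeeper.ZooKeeper;

    public class MasterZNodeWatch {
        public static void main(String[] args) throws Exception {
            // Quorum and session timeout as reported in the log lines above.
            ZooKeeper zk = new ZooKeeper("127.0.0.1:61652", 90000, new Watcher() {
                @Override
                public void process(WatchedEvent event) {
                    // NodeCreated / NodeDeleted / NodeChildrenChanged arrive here, much like the
                    // "Received ZooKeeper Event" DEBUG lines emitted by ZKWatcher.
                    System.out.println(event.getType() + " on " + event.getPath());
                }
            });
            // Register watches: exists() on the master znode, getChildren() on backup-masters.
            zk.exists("/hbase/master", true);
            zk.getChildren("/hbase/backup-masters", true);
            Thread.sleep(10_000); // keep the session alive long enough to observe a few events
            zk.close();
        }
    }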
2023-07-21 13:20:15,994 INFO [master/jenkins-hbase16:0:becomeActiveMaster] master.ActiveMasterManager(237): Registered as active master=jenkins-hbase16.apache.org,40019,1689945613483 2023-07-21 13:20:15,998 INFO [master/jenkins-hbase16:0:becomeActiveMaster] regionserver.ChunkCreator(497): Allocating data MemStoreChunkPool with chunk size 2 MB, max count 352, initial count 0 2023-07-21 13:20:16,000 INFO [master/jenkins-hbase16:0:becomeActiveMaster] regionserver.ChunkCreator(497): Allocating index MemStoreChunkPool with chunk size 204.80 KB, max count 391, initial count 0 2023-07-21 13:20:16,109 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] util.FSUtils(620): Created cluster ID file at hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/hbase.id with ID: 54d20de7-ab81-4bb1-8a06-02deb310f0f5 2023-07-21 13:20:16,148 INFO [master/jenkins-hbase16:0:becomeActiveMaster] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-21 13:20:16,176 DEBUG [Listener at localhost.localdomain/36547-EventThread] zookeeper.ZKWatcher(600): master:40019-0x1018809df7a0000, quorum=127.0.0.1:61652, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-21 13:20:16,228 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] zookeeper.ReadOnlyZKClient(139): Connect 0x2c527739 to 127.0.0.1:61652 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-21 13:20:16,261 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@65f4d17, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-21 13:20:16,286 INFO [master/jenkins-hbase16:0:becomeActiveMaster] region.MasterRegion(309): Create or load local region for table 'master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-21 13:20:16,288 INFO [master/jenkins-hbase16:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(132): Injected flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000 2023-07-21 13:20:16,307 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] asyncfs.FanOutOneBlockAsyncDFSOutputHelper(264): ClientProtocol::create wrong number of arguments, should be hadoop 3.2 or below 2023-07-21 13:20:16,307 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] asyncfs.FanOutOneBlockAsyncDFSOutputHelper(270): ClientProtocol::create wrong number of arguments, should be hadoop 2.x 2023-07-21 13:20:16,309 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] asyncfs.FanOutOneBlockAsyncDFSOutputHelper(279): can not find SHOULD_REPLICATE flag, should be hadoop 2.x java.lang.IllegalArgumentException: No enum constant org.apache.hadoop.fs.CreateFlag.SHOULD_REPLICATE at java.lang.Enum.valueOf(Enum.java:238) at org.apache.hadoop.fs.CreateFlag.valueOf(CreateFlag.java:63) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.loadShouldReplicateFlag(FanOutOneBlockAsyncDFSOutputHelper.java:277) at 
org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.(FanOutOneBlockAsyncDFSOutputHelper.java:304) at java.lang.Class.forName0(Native Method) at java.lang.Class.forName(Class.java:264) at org.apache.hadoop.hbase.wal.AsyncFSWALProvider.load(AsyncFSWALProvider.java:139) at org.apache.hadoop.hbase.wal.WALFactory.getProviderClass(WALFactory.java:135) at org.apache.hadoop.hbase.wal.WALFactory.getProvider(WALFactory.java:175) at org.apache.hadoop.hbase.wal.WALFactory.(WALFactory.java:202) at org.apache.hadoop.hbase.wal.WALFactory.(WALFactory.java:182) at org.apache.hadoop.hbase.master.region.MasterRegion.create(MasterRegion.java:339) at org.apache.hadoop.hbase.master.region.MasterRegionFactory.create(MasterRegionFactory.java:104) at org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:855) at org.apache.hadoop.hbase.master.HMaster.startActiveMasterManager(HMaster.java:2193) at org.apache.hadoop.hbase.master.HMaster.lambda$run$0(HMaster.java:528) at java.lang.Thread.run(Thread.java:750) 2023-07-21 13:20:16,313 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(243): No decryptEncryptedDataEncryptionKey method in DFSClient, should be hadoop version with HDFS-12396 java.lang.NoSuchMethodException: org.apache.hadoop.hdfs.DFSClient.decryptEncryptedDataEncryptionKey(org.apache.hadoop.fs.FileEncryptionInfo) at java.lang.Class.getDeclaredMethod(Class.java:2130) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper.createTransparentCryptoHelperWithoutHDFS12396(FanOutOneBlockAsyncDFSOutputSaslHelper.java:182) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper.createTransparentCryptoHelper(FanOutOneBlockAsyncDFSOutputSaslHelper.java:241) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper.(FanOutOneBlockAsyncDFSOutputSaslHelper.java:252) at java.lang.Class.forName0(Native Method) at java.lang.Class.forName(Class.java:264) at org.apache.hadoop.hbase.wal.AsyncFSWALProvider.load(AsyncFSWALProvider.java:140) at org.apache.hadoop.hbase.wal.WALFactory.getProviderClass(WALFactory.java:135) at org.apache.hadoop.hbase.wal.WALFactory.getProvider(WALFactory.java:175) at org.apache.hadoop.hbase.wal.WALFactory.(WALFactory.java:202) at org.apache.hadoop.hbase.wal.WALFactory.(WALFactory.java:182) at org.apache.hadoop.hbase.master.region.MasterRegion.create(MasterRegion.java:339) at org.apache.hadoop.hbase.master.region.MasterRegionFactory.create(MasterRegionFactory.java:104) at org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:855) at org.apache.hadoop.hbase.master.HMaster.startActiveMasterManager(HMaster.java:2193) at org.apache.hadoop.hbase.master.HMaster.lambda$run$0(HMaster.java:528) at java.lang.Thread.run(Thread.java:750) 2023-07-21 13:20:16,315 INFO [master/jenkins-hbase16:0:becomeActiveMaster] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-21 13:20:16,353 INFO [master/jenkins-hbase16:0:becomeActiveMaster] regionserver.HRegion(7693): Creating {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => 
'65536', REPLICATION_SCOPE => '0'}, under table dir hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/MasterData/data/master/store-tmp 2023-07-21 13:20:16,407 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 13:20:16,408 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-07-21 13:20:16,408 INFO [master/jenkins-hbase16:0:becomeActiveMaster] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-21 13:20:16,408 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-21 13:20:16,408 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-07-21 13:20:16,408 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-21 13:20:16,408 INFO [master/jenkins-hbase16:0:becomeActiveMaster] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-21 13:20:16,408 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-21 13:20:16,410 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] region.MasterRegion(191): WALDir=hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/MasterData/WALs/jenkins-hbase16.apache.org,40019,1689945613483 2023-07-21 13:20:16,432 INFO [master/jenkins-hbase16:0:becomeActiveMaster] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase16.apache.org%2C40019%2C1689945613483, suffix=, logDir=hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/MasterData/WALs/jenkins-hbase16.apache.org,40019,1689945613483, archiveDir=hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/MasterData/oldWALs, maxLogs=10 2023-07-21 13:20:16,490 DEBUG [RS-EventLoopGroup-5-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:44461,DS-e277dd3e-c5f5-4531-aaa7-60b50737ae77,DISK] 2023-07-21 13:20:16,491 DEBUG [RS-EventLoopGroup-5-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:34567,DS-1f748205-1567-42fb-bfb1-399a5d715114,DISK] 2023-07-21 13:20:16,492 DEBUG [RS-EventLoopGroup-5-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:33219,DS-1a78b274-4530-4d2f-8ed7-d62ddf857176,DISK] 2023-07-21 13:20:16,501 DEBUG [RS-EventLoopGroup-5-3] asyncfs.ProtobufDecoder(123): Hadoop 3.2 and below use unshaded protobuf. 
java.lang.ClassNotFoundException: org.apache.hadoop.thirdparty.protobuf.MessageLite at java.net.URLClassLoader.findClass(URLClassLoader.java:387) at java.lang.ClassLoader.loadClass(ClassLoader.java:418) at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:352) at java.lang.ClassLoader.loadClass(ClassLoader.java:351) at java.lang.Class.forName0(Native Method) at java.lang.Class.forName(Class.java:264) at org.apache.hadoop.hbase.io.asyncfs.ProtobufDecoder.(ProtobufDecoder.java:118) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.processWriteBlockResponse(FanOutOneBlockAsyncDFSOutputHelper.java:340) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.access$100(FanOutOneBlockAsyncDFSOutputHelper.java:112) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper$4.operationComplete(FanOutOneBlockAsyncDFSOutputHelper.java:424) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:590) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:557) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:492) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.addListener(DefaultPromise.java:185) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.initialize(FanOutOneBlockAsyncDFSOutputHelper.java:418) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.access$300(FanOutOneBlockAsyncDFSOutputHelper.java:112) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper$5.operationComplete(FanOutOneBlockAsyncDFSOutputHelper.java:476) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper$5.operationComplete(FanOutOneBlockAsyncDFSOutputHelper.java:471) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:590) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListeners0(DefaultPromise.java:583) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:559) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:492) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.setValue0(DefaultPromise.java:636) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.setSuccess0(DefaultPromise.java:625) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.trySuccess(DefaultPromise.java:105) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPromise.trySuccess(DefaultChannelPromise.java:84) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.fulfillConnectPromise(AbstractEpollChannel.java:653) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.finishConnect(AbstractEpollChannel.java:691) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.epollOutReady(AbstractEpollChannel.java:567) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:489) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:397) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at 
org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) at java.lang.Thread.run(Thread.java:750) 2023-07-21 13:20:16,591 INFO [master/jenkins-hbase16:0:becomeActiveMaster] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/MasterData/WALs/jenkins-hbase16.apache.org,40019,1689945613483/jenkins-hbase16.apache.org%2C40019%2C1689945613483.1689945616442 2023-07-21 13:20:16,595 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:44461,DS-e277dd3e-c5f5-4531-aaa7-60b50737ae77,DISK], DatanodeInfoWithStorage[127.0.0.1:33219,DS-1a78b274-4530-4d2f-8ed7-d62ddf857176,DISK], DatanodeInfoWithStorage[127.0.0.1:34567,DS-1f748205-1567-42fb-bfb1-399a5d715114,DISK]] 2023-07-21 13:20:16,596 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] regionserver.HRegion(7854): Opening region: {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''} 2023-07-21 13:20:16,596 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 13:20:16,602 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] regionserver.HRegion(7894): checking encryption for 1595e783b53d99cd5eef43b6debb2682 2023-07-21 13:20:16,603 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] regionserver.HRegion(7897): checking classloading for 1595e783b53d99cd5eef43b6debb2682 2023-07-21 13:20:16,696 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family proc of region 1595e783b53d99cd5eef43b6debb2682 2023-07-21 13:20:16,704 DEBUG [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc 2023-07-21 13:20:16,739 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1595e783b53d99cd5eef43b6debb2682 columnFamilyName proc 2023-07-21 13:20:16,753 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(310): Store=1595e783b53d99cd5eef43b6debb2682/proc, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, 
encoding=NONE, compression=NONE 2023-07-21 13:20:16,760 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-07-21 13:20:16,762 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-07-21 13:20:16,781 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] regionserver.HRegion(1055): writing seq id for 1595e783b53d99cd5eef43b6debb2682 2023-07-21 13:20:16,786 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-21 13:20:16,787 INFO [master/jenkins-hbase16:0:becomeActiveMaster] regionserver.HRegion(1072): Opened 1595e783b53d99cd5eef43b6debb2682; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9831539040, jitterRate=-0.084366574883461}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 13:20:16,787 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] regionserver.HRegion(965): Region open journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-21 13:20:16,788 INFO [master/jenkins-hbase16:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(122): Constructor flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000, compactMin=4 2023-07-21 13:20:16,815 INFO [master/jenkins-hbase16:0:becomeActiveMaster] region.RegionProcedureStore(104): Starting the Region Procedure Store, number threads=5 2023-07-21 13:20:16,815 INFO [master/jenkins-hbase16:0:becomeActiveMaster] procedure2.ProcedureExecutor(562): Starting 5 core workers (bigger of cpus/4 or 16) with max (burst) worker count=50 2023-07-21 13:20:16,818 INFO [master/jenkins-hbase16:0:becomeActiveMaster] region.RegionProcedureStore(255): Starting Region Procedure Store lease recovery... 2023-07-21 13:20:16,820 INFO [master/jenkins-hbase16:0:becomeActiveMaster] procedure2.ProcedureExecutor(582): Recovered RegionProcedureStore lease in 1 msec 2023-07-21 13:20:16,855 INFO [master/jenkins-hbase16:0:becomeActiveMaster] procedure2.ProcedureExecutor(596): Loaded RegionProcedureStore in 34 msec 2023-07-21 13:20:16,855 INFO [master/jenkins-hbase16:0:becomeActiveMaster] procedure2.RemoteProcedureDispatcher(96): Instantiated, coreThreads=3 (allowCoreThreadTimeOut=true), queueMaxSize=32, operationDelay=150 2023-07-21 13:20:16,886 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] assignment.AssignmentManager(253): hbase:meta replica znodes: [] 2023-07-21 13:20:16,893 INFO [master/jenkins-hbase16:0:becomeActiveMaster] master.RegionServerTracker(124): Starting RegionServerTracker; 0 have existing ServerCrashProcedures, 0 possibly 'live' servers, and 0 'splitting'. 
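
The DEBUG stack traces a few lines above (no SHOULD_REPLICATE enum constant, no decryptEncryptedDataEncryptionKey method, unshaded protobuf MessageLite not found) are expected on this Hadoop version: before instantiating the AsyncFSWALProvider, HBase probes the Hadoop/HDFS classes via reflection and falls back to an older code path when a class, method, or enum constant is absent, so the exceptions are logged and swallowed rather than failing startup. A generic sketch of that probe-and-fall-back pattern; the class and member names below are placeholders, not the actual HBase helpers.

    import java.lang.reflect.Method;

    public class ReflectionProbe {
        // Returns the Method if the target class declares it, otherwise null (older dependency present).
        static Method probeMethod(String className, String methodName, Class<?>... params) {
            try {
                return Class.forName(className).getDeclaredMethod(methodName, params);
            } catch (ClassNotFoundException | NoSuchMethodException e) {
                // Same idea as the "should be hadoop 2.x" DEBUG messages above:
                // absence is not an error, it merely selects the fallback implementation.
                return null;
            }
        }

        public static void main(String[] args) {
            // Present on every JDK: String.isEmpty()
            System.out.println("String.isEmpty present: "
                + (probeMethod("java.lang.String", "isEmpty") != null));
            // Absent: a made-up method, standing in for an API that only newer Hadoop versions expose.
            System.out.println("String.frobnicate present: "
                + (probeMethod("java.lang.String", "frobnicate") != null));
        }
    }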
2023-07-21 13:20:16,918 INFO [master/jenkins-hbase16:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-21 13:20:16,924 INFO [master/jenkins-hbase16:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 2023-07-21 13:20:16,926 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:40019-0x1018809df7a0000, quorum=127.0.0.1:61652, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/balancer 2023-07-21 13:20:16,932 INFO [master/jenkins-hbase16:0:becomeActiveMaster] normalizer.RegionNormalizerWorker(118): Normalizer rate limit set to unlimited 2023-07-21 13:20:16,937 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:40019-0x1018809df7a0000, quorum=127.0.0.1:61652, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/normalizer 2023-07-21 13:20:16,942 DEBUG [Listener at localhost.localdomain/36547-EventThread] zookeeper.ZKWatcher(600): master:40019-0x1018809df7a0000, quorum=127.0.0.1:61652, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-21 13:20:16,944 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:40019-0x1018809df7a0000, quorum=127.0.0.1:61652, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/split 2023-07-21 13:20:16,945 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:40019-0x1018809df7a0000, quorum=127.0.0.1:61652, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/merge 2023-07-21 13:20:16,959 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:40019-0x1018809df7a0000, quorum=127.0.0.1:61652, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/snapshot-cleanup 2023-07-21 13:20:16,967 DEBUG [Listener at localhost.localdomain/36547-EventThread] zookeeper.ZKWatcher(600): regionserver:37511-0x1018809df7a0002, quorum=127.0.0.1:61652, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-21 13:20:16,968 DEBUG [Listener at localhost.localdomain/36547-EventThread] zookeeper.ZKWatcher(600): master:40019-0x1018809df7a0000, quorum=127.0.0.1:61652, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-21 13:20:16,968 DEBUG [Listener at localhost.localdomain/36547-EventThread] zookeeper.ZKWatcher(600): master:40019-0x1018809df7a0000, quorum=127.0.0.1:61652, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-21 13:20:16,968 DEBUG [Listener at localhost.localdomain/36547-EventThread] zookeeper.ZKWatcher(600): regionserver:41329-0x1018809df7a0003, quorum=127.0.0.1:61652, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-21 13:20:16,967 DEBUG [Listener at localhost.localdomain/36547-EventThread] 
zookeeper.ZKWatcher(600): regionserver:39771-0x1018809df7a0001, quorum=127.0.0.1:61652, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-21 13:20:16,970 INFO [master/jenkins-hbase16:0:becomeActiveMaster] master.HMaster(744): Active/primary master=jenkins-hbase16.apache.org,40019,1689945613483, sessionid=0x1018809df7a0000, setting cluster-up flag (Was=false) 2023-07-21 13:20:17,001 DEBUG [Listener at localhost.localdomain/36547-EventThread] zookeeper.ZKWatcher(600): master:40019-0x1018809df7a0000, quorum=127.0.0.1:61652, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-21 13:20:17,026 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/flush-table-proc/acquired, /hbase/flush-table-proc/reached, /hbase/flush-table-proc/abort 2023-07-21 13:20:17,027 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase16.apache.org,40019,1689945613483 2023-07-21 13:20:17,042 DEBUG [Listener at localhost.localdomain/36547-EventThread] zookeeper.ZKWatcher(600): master:40019-0x1018809df7a0000, quorum=127.0.0.1:61652, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-21 13:20:17,067 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/online-snapshot/acquired, /hbase/online-snapshot/reached, /hbase/online-snapshot/abort 2023-07-21 13:20:17,069 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase16.apache.org,40019,1689945613483 2023-07-21 13:20:17,073 WARN [master/jenkins-hbase16:0:becomeActiveMaster] snapshot.SnapshotManager(302): Couldn't delete working snapshot directory: hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/.hbase-snapshot/.tmp 2023-07-21 13:20:17,101 INFO [RS:0;jenkins-hbase16:39771] regionserver.HRegionServer(951): ClusterId : 54d20de7-ab81-4bb1-8a06-02deb310f0f5 2023-07-21 13:20:17,104 INFO [RS:2;jenkins-hbase16:41329] regionserver.HRegionServer(951): ClusterId : 54d20de7-ab81-4bb1-8a06-02deb310f0f5 2023-07-21 13:20:17,101 INFO [RS:1;jenkins-hbase16:37511] regionserver.HRegionServer(951): ClusterId : 54d20de7-ab81-4bb1-8a06-02deb310f0f5 2023-07-21 13:20:17,112 DEBUG [RS:1;jenkins-hbase16:37511] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-21 13:20:17,113 DEBUG [RS:2;jenkins-hbase16:41329] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-21 13:20:17,112 DEBUG [RS:0;jenkins-hbase16:39771] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-21 13:20:17,136 DEBUG [RS:1;jenkins-hbase16:37511] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-21 13:20:17,137 DEBUG [RS:0;jenkins-hbase16:39771] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-21 13:20:17,136 DEBUG [RS:2;jenkins-hbase16:41329] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-21 13:20:17,137 DEBUG [RS:0;jenkins-hbase16:39771] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-21 13:20:17,137 DEBUG 
[RS:1;jenkins-hbase16:37511] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-21 13:20:17,137 DEBUG [RS:2;jenkins-hbase16:41329] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-21 13:20:17,160 DEBUG [RS:1;jenkins-hbase16:37511] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-21 13:20:17,160 DEBUG [RS:0;jenkins-hbase16:39771] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-21 13:20:17,162 DEBUG [RS:2;jenkins-hbase16:41329] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-21 13:20:17,162 DEBUG [RS:1;jenkins-hbase16:37511] zookeeper.ReadOnlyZKClient(139): Connect 0x153e3e98 to 127.0.0.1:61652 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-21 13:20:17,169 DEBUG [RS:0;jenkins-hbase16:39771] zookeeper.ReadOnlyZKClient(139): Connect 0x3f21d242 to 127.0.0.1:61652 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-21 13:20:17,169 DEBUG [RS:2;jenkins-hbase16:41329] zookeeper.ReadOnlyZKClient(139): Connect 0x5a8a7e5a to 127.0.0.1:61652 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-21 13:20:17,237 DEBUG [RS:1;jenkins-hbase16:37511] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@404ad0dc, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-21 13:20:17,237 DEBUG [RS:2;jenkins-hbase16:41329] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@6ac9a4f5, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-21 13:20:17,238 DEBUG [RS:1;jenkins-hbase16:37511] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@6d6f176d, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase16.apache.org/188.40.62.62:0 2023-07-21 13:20:17,238 DEBUG [RS:2;jenkins-hbase16:41329] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@22f1dcf1, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase16.apache.org/188.40.62.62:0 2023-07-21 13:20:17,238 DEBUG [RS:0;jenkins-hbase16:39771] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@1f545a0e, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-21 13:20:17,239 DEBUG [RS:0;jenkins-hbase16:39771] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@17be002, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase16.apache.org/188.40.62.62:0 2023-07-21 13:20:17,263 DEBUG [RS:1;jenkins-hbase16:37511] regionserver.ShutdownHook(81): Installed shutdown hook 
thread: Shutdownhook:RS:1;jenkins-hbase16:37511 2023-07-21 13:20:17,265 DEBUG [RS:0;jenkins-hbase16:39771] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:0;jenkins-hbase16:39771 2023-07-21 13:20:17,266 DEBUG [RS:2;jenkins-hbase16:41329] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:2;jenkins-hbase16:41329 2023-07-21 13:20:17,271 INFO [RS:2;jenkins-hbase16:41329] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-21 13:20:17,277 INFO [RS:2;jenkins-hbase16:41329] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-21 13:20:17,277 DEBUG [RS:2;jenkins-hbase16:41329] regionserver.HRegionServer(1022): About to register with Master. 2023-07-21 13:20:17,281 INFO [RS:2;jenkins-hbase16:41329] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase16.apache.org,40019,1689945613483 with isa=jenkins-hbase16.apache.org/188.40.62.62:41329, startcode=1689945615760 2023-07-21 13:20:17,271 INFO [RS:0;jenkins-hbase16:39771] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-21 13:20:17,287 INFO [RS:0;jenkins-hbase16:39771] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-21 13:20:17,287 DEBUG [RS:0;jenkins-hbase16:39771] regionserver.HRegionServer(1022): About to register with Master. 2023-07-21 13:20:17,271 INFO [RS:1;jenkins-hbase16:37511] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-21 13:20:17,287 INFO [RS:1;jenkins-hbase16:37511] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-21 13:20:17,287 DEBUG [RS:1;jenkins-hbase16:37511] regionserver.HRegionServer(1022): About to register with Master. 
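
The AbstractRpcClient lines above echo the RPC client parameters each region server starts with (KeyValueCodec, tcpNoDelay, connectTO=10000, readTO=20000, writeTO=60000, maxRetries=0). As a hedged illustration only, using the public client API rather than the server-internal clients shown in the log: comparable knobs on an application client are set through configuration before opening a Connection. The exact mapping of the logged fields to configuration keys is an assumption; the keys used below ("hbase.rpc.timeout", "hbase.client.retries.number") are the common public ones, and the table name is hypothetical.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.Get;
    import org.apache.hadoop.hbase.client.Table;
    import org.apache.hadoop.hbase.util.Bytes;

    public class ClientRpcSettings {
        public static void main(String[] args) throws Exception {
            Configuration conf = HBaseConfiguration.create();
            conf.set("hbase.zookeeper.quorum", "127.0.0.1");           // quorum host from the log
            conf.setInt("hbase.zookeeper.property.clientPort", 61652); // quorum port from the log
            conf.setInt("hbase.rpc.timeout", 20000);                   // comparable to readTO above
            conf.setInt("hbase.client.retries.number", 3);
            try (Connection conn = ConnectionFactory.createConnection(conf);
                 Table table = conn.getTable(TableName.valueOf("some_table"))) { // hypothetical table
                table.get(new Get(Bytes.toBytes("row-1")));
            }
        }
    }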
2023-07-21 13:20:17,288 INFO [RS:0;jenkins-hbase16:39771] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase16.apache.org,40019,1689945613483 with isa=jenkins-hbase16.apache.org/188.40.62.62:39771, startcode=1689945615330 2023-07-21 13:20:17,289 INFO [RS:1;jenkins-hbase16:37511] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase16.apache.org,40019,1689945613483 with isa=jenkins-hbase16.apache.org/188.40.62.62:37511, startcode=1689945615545 2023-07-21 13:20:17,300 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT; InitMetaProcedure table=hbase:meta 2023-07-21 13:20:17,301 DEBUG [RS:2;jenkins-hbase16:41329] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-21 13:20:17,301 DEBUG [RS:1;jenkins-hbase16:37511] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-21 13:20:17,301 DEBUG [RS:0;jenkins-hbase16:39771] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-21 13:20:17,314 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_OPEN_REGION-master/jenkins-hbase16:0, corePoolSize=5, maxPoolSize=5 2023-07-21 13:20:17,314 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_CLOSE_REGION-master/jenkins-hbase16:0, corePoolSize=5, maxPoolSize=5 2023-07-21 13:20:17,314 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SERVER_OPERATIONS-master/jenkins-hbase16:0, corePoolSize=5, maxPoolSize=5 2023-07-21 13:20:17,314 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_META_SERVER_OPERATIONS-master/jenkins-hbase16:0, corePoolSize=5, maxPoolSize=5 2023-07-21 13:20:17,314 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=M_LOG_REPLAY_OPS-master/jenkins-hbase16:0, corePoolSize=10, maxPoolSize=10 2023-07-21 13:20:17,315 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase16:0, corePoolSize=1, maxPoolSize=1 2023-07-21 13:20:17,315 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_MERGE_OPERATIONS-master/jenkins-hbase16:0, corePoolSize=2, maxPoolSize=2 2023-07-21 13:20:17,315 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_TABLE_OPERATIONS-master/jenkins-hbase16:0, corePoolSize=1, maxPoolSize=1 2023-07-21 13:20:17,330 INFO [master/jenkins-hbase16:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner; timeout=30000, timestamp=1689945647329 2023-07-21 13:20:17,333 INFO [master/jenkins-hbase16:0:becomeActiveMaster] cleaner.DirScanPool(70): log_cleaner Cleaner pool size is 1 2023-07-21 13:20:17,337 INFO [master/jenkins-hbase16:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner 2023-07-21 13:20:17,357 INFO [master/jenkins-hbase16:0:becomeActiveMaster] 
cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner 2023-07-21 13:20:17,357 INFO [master/jenkins-hbase16:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreWALCleaner 2023-07-21 13:20:17,358 INFO [master/jenkins-hbase16:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveProcedureWALCleaner 2023-07-21 13:20:17,358 INFO [master/jenkins-hbase16:0:becomeActiveMaster] cleaner.LogCleaner(148): Creating 1 old WALs cleaner threads 2023-07-21 13:20:17,363 INFO [master/jenkins-hbase16:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=LogsCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-07-21 13:20:17,366 INFO [master/jenkins-hbase16:0:becomeActiveMaster] cleaner.DirScanPool(70): hfile_cleaner Cleaner pool size is 2 2023-07-21 13:20:17,377 INFO [master/jenkins-hbase16:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreHFileCleaner 2023-07-21 13:20:17,378 INFO [master/jenkins-hbase16:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.HFileLinkCleaner 2023-07-21 13:20:17,378 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT, locked=true; InitMetaProcedure table=hbase:meta 2023-07-21 13:20:17,379 INFO [PEWorker-1] procedure.InitMetaProcedure(71): BOOTSTRAP: creating hbase:meta region 2023-07-21 13:20:17,385 INFO [RS-EventLoopGroup-1-1] ipc.ServerRpcConnection(540): Connection from 188.40.62.62:41911, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.0 (auth:SIMPLE), service=RegionServerStatusService 2023-07-21 13:20:17,386 INFO [PEWorker-1] util.FSTableDescriptors(128): Creating new hbase:meta table descriptor 'hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-07-21 13:20:17,385 INFO [RS-EventLoopGroup-1-2] ipc.ServerRpcConnection(540): Connection from 188.40.62.62:60937, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.2 (auth:SIMPLE), service=RegionServerStatusService 2023-07-21 13:20:17,386 INFO [master/jenkins-hbase16:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.snapshot.SnapshotHFileCleaner 2023-07-21 13:20:17,392 INFO [master/jenkins-hbase16:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner 2023-07-21 
13:20:17,395 INFO [RS-EventLoopGroup-1-3] ipc.ServerRpcConnection(540): Connection from 188.40.62.62:51833, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.1 (auth:SIMPLE), service=RegionServerStatusService 2023-07-21 13:20:17,398 DEBUG [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=40019] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.ipc.ServerNotRunningYetException: Server is not running yet at org.apache.hadoop.hbase.master.HMaster.checkServiceStarted(HMaster.java:2832) at org.apache.hadoop.hbase.master.MasterRpcServices.regionServerStartup(MasterRpcServices.java:579) at org.apache.hadoop.hbase.shaded.protobuf.generated.RegionServerStatusProtos$RegionServerStatusService$2.callBlockingMethod(RegionServerStatusProtos.java:15952) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 13:20:17,406 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] cleaner.HFileCleaner(242): Starting for large file=Thread[master/jenkins-hbase16:0:becomeActiveMaster-HFileCleaner.large.0-1689945617398,5,FailOnTimeoutGroup] 2023-07-21 13:20:17,407 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] cleaner.HFileCleaner(257): Starting for small files=Thread[master/jenkins-hbase16:0:becomeActiveMaster-HFileCleaner.small.0-1689945617407,5,FailOnTimeoutGroup] 2023-07-21 13:20:17,407 DEBUG [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=40019] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.ipc.ServerNotRunningYetException: Server is not running yet at org.apache.hadoop.hbase.master.HMaster.checkServiceStarted(HMaster.java:2832) at org.apache.hadoop.hbase.master.MasterRpcServices.regionServerStartup(MasterRpcServices.java:579) at org.apache.hadoop.hbase.shaded.protobuf.generated.RegionServerStatusProtos$RegionServerStatusService$2.callBlockingMethod(RegionServerStatusProtos.java:15952) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 13:20:17,407 INFO [master/jenkins-hbase16:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HFileCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-07-21 13:20:17,410 INFO [master/jenkins-hbase16:0:becomeActiveMaster] master.HMaster(1461): Reopening regions with very high storeFileRefCount is disabled. Provide threshold value > 0 for hbase.regions.recovery.store.file.ref.count to enable it. 
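
The hbase:meta table descriptor printed above carries a coprocessor attribute (coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'), which is how coprocessors are bound to a table. A hedged sketch of attaching a coprocessor to a user table through the public client API; the table name and column family are illustrative, and MultiRowMutationEndpoint is used only because it already appears in the descriptor above.

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.TableDescriptor;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;

    public class CreateTableWithCoprocessor {
        public static void main(String[] args) throws Exception {
            TableDescriptor td = TableDescriptorBuilder
                .newBuilder(TableName.valueOf("example_table"))            // hypothetical table name
                .setColumnFamily(ColumnFamilyDescriptorBuilder.of("cf"))   // one column family
                .setCoprocessor("org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint")
                .build();
            try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
                 Admin admin = conn.getAdmin()) {
                admin.createTable(td); // the descriptor, coprocessor included, is stored with the table
            }
        }
    }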
2023-07-21 13:20:17,410 DEBUG [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=40019] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.ipc.ServerNotRunningYetException: Server is not running yet at org.apache.hadoop.hbase.master.HMaster.checkServiceStarted(HMaster.java:2832) at org.apache.hadoop.hbase.master.MasterRpcServices.regionServerStartup(MasterRpcServices.java:579) at org.apache.hadoop.hbase.shaded.protobuf.generated.RegionServerStatusProtos$RegionServerStatusService$2.callBlockingMethod(RegionServerStatusProtos.java:15952) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 13:20:17,420 INFO [master/jenkins-hbase16:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=ReplicationBarrierCleaner, period=43200000, unit=MILLISECONDS is enabled. 2023-07-21 13:20:17,420 INFO [master/jenkins-hbase16:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=SnapshotCleaner, period=1800000, unit=MILLISECONDS is enabled. 2023-07-21 13:20:17,450 DEBUG [RS:2;jenkins-hbase16:41329] regionserver.HRegionServer(2830): Master is not running yet 2023-07-21 13:20:17,451 DEBUG [RS:0;jenkins-hbase16:39771] regionserver.HRegionServer(2830): Master is not running yet 2023-07-21 13:20:17,451 WARN [RS:2;jenkins-hbase16:41329] regionserver.HRegionServer(1030): reportForDuty failed; sleeping 100 ms and then retrying. 2023-07-21 13:20:17,450 DEBUG [RS:1;jenkins-hbase16:37511] regionserver.HRegionServer(2830): Master is not running yet 2023-07-21 13:20:17,451 WARN [RS:0;jenkins-hbase16:39771] regionserver.HRegionServer(1030): reportForDuty failed; sleeping 100 ms and then retrying. 2023-07-21 13:20:17,452 WARN [RS:1;jenkins-hbase16:37511] regionserver.HRegionServer(1030): reportForDuty failed; sleeping 100 ms and then retrying. 
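
The WARN lines above show each region server's reportForDuty bouncing off ServerNotRunningYetException ("Master is not running yet"), then sleeping 100 ms and retrying until the master's RPC services come up. A generic sketch of that retry-until-ready loop; the actual RegionServerStatus RPC is stubbed with a placeholder action, only the loop shape mirrors the log.

    import java.util.concurrent.Callable;

    public class RetryUntilReady {
        // Keeps invoking the action until it stops signalling "not ready", sleeping between attempts.
        static <T> T retry(Callable<T> action, long sleepMillis) throws Exception {
            while (true) {
                try {
                    return action.call();
                } catch (IllegalStateException notRunningYet) { // stands in for ServerNotRunningYetException
                    System.out.println("reportForDuty failed; sleeping " + sleepMillis + " ms and then retrying.");
                    Thread.sleep(sleepMillis);
                }
            }
        }

        public static void main(String[] args) throws Exception {
            long start = System.currentTimeMillis();
            String result = retry(() -> {
                if (System.currentTimeMillis() - start < 300) {
                    throw new IllegalStateException("Server is not running yet");
                }
                return "registered";
            }, 100);
            System.out.println(result);
        }
    }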
2023-07-21 13:20:17,457 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-07-21 13:20:17,458 INFO [PEWorker-1] util.FSTableDescriptors(135): Updated hbase:meta table descriptor to hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-07-21 13:20:17,459 INFO [PEWorker-1] regionserver.HRegion(7675): creating {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff 2023-07-21 13:20:17,511 DEBUG [PEWorker-1] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 13:20:17,516 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-21 13:20:17,524 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/hbase/meta/1588230740/info 2023-07-21 13:20:17,527 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-21 13:20:17,528 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 
13:20:17,529 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-21 13:20:17,533 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/hbase/meta/1588230740/rep_barrier 2023-07-21 13:20:17,533 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-21 13:20:17,535 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 13:20:17,535 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-21 13:20:17,537 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/hbase/meta/1588230740/table 2023-07-21 13:20:17,538 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-21 13:20:17,539 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 13:20:17,541 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/hbase/meta/1588230740 2023-07-21 13:20:17,543 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under 
hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/hbase/meta/1588230740 2023-07-21 13:20:17,551 DEBUG [PEWorker-1] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (16.0 M)) instead. 2023-07-21 13:20:17,552 INFO [RS:2;jenkins-hbase16:41329] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase16.apache.org,40019,1689945613483 with isa=jenkins-hbase16.apache.org/188.40.62.62:41329, startcode=1689945615760 2023-07-21 13:20:17,553 INFO [RS:1;jenkins-hbase16:37511] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase16.apache.org,40019,1689945613483 with isa=jenkins-hbase16.apache.org/188.40.62.62:37511, startcode=1689945615545 2023-07-21 13:20:17,555 INFO [RS:0;jenkins-hbase16:39771] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase16.apache.org,40019,1689945613483 with isa=jenkins-hbase16.apache.org/188.40.62.62:39771, startcode=1689945615330 2023-07-21 13:20:17,557 DEBUG [PEWorker-1] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-21 13:20:17,563 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=40019] master.ServerManager(394): Registering regionserver=jenkins-hbase16.apache.org,41329,1689945615760 2023-07-21 13:20:17,570 DEBUG [PEWorker-1] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/hbase/meta/1588230740/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-21 13:20:17,571 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=40019] master.ServerManager(394): Registering regionserver=jenkins-hbase16.apache.org,37511,1689945615545 2023-07-21 13:20:17,572 INFO [PEWorker-1] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=131072, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11369398880, jitterRate=0.05885778367519379}}}, FlushLargeStoresPolicy{flushSizeLowerBound=16777216} 2023-07-21 13:20:17,572 DEBUG [RS:2;jenkins-hbase16:41329] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff 2023-07-21 13:20:17,572 DEBUG [PEWorker-1] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-21 13:20:17,572 DEBUG [RS:2;jenkins-hbase16:41329] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost.localdomain:43421 2023-07-21 13:20:17,573 DEBUG [PEWorker-1] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-21 13:20:17,573 DEBUG [RS:2;jenkins-hbase16:41329] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=35009 2023-07-21 13:20:17,573 INFO [PEWorker-1] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-21 13:20:17,573 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=40019] master.ServerManager(394): Registering regionserver=jenkins-hbase16.apache.org,39771,1689945615330 2023-07-21 13:20:17,573 DEBUG [PEWorker-1] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-21 13:20:17,573 DEBUG [PEWorker-1] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-21 13:20:17,573 DEBUG 
[PEWorker-1] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-21 13:20:17,582 DEBUG [RS:1;jenkins-hbase16:37511] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff 2023-07-21 13:20:17,586 DEBUG [RS:0;jenkins-hbase16:39771] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff 2023-07-21 13:20:17,586 DEBUG [RS:0;jenkins-hbase16:39771] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost.localdomain:43421 2023-07-21 13:20:17,586 DEBUG [RS:0;jenkins-hbase16:39771] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=35009 2023-07-21 13:20:17,586 INFO [PEWorker-1] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-21 13:20:17,586 DEBUG [PEWorker-1] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-21 13:20:17,586 DEBUG [RS:1;jenkins-hbase16:37511] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost.localdomain:43421 2023-07-21 13:20:17,587 DEBUG [RS:1;jenkins-hbase16:37511] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=35009 2023-07-21 13:20:17,593 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_ASSIGN_META, locked=true; InitMetaProcedure table=hbase:meta 2023-07-21 13:20:17,593 INFO [PEWorker-1] procedure.InitMetaProcedure(103): Going to assign meta 2023-07-21 13:20:17,601 DEBUG [Listener at localhost.localdomain/36547-EventThread] zookeeper.ZKWatcher(600): master:40019-0x1018809df7a0000, quorum=127.0.0.1:61652, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-21 13:20:17,604 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN}] 2023-07-21 13:20:17,621 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN 2023-07-21 13:20:17,625 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN; state=OFFLINE, location=null; forceNewPlan=false, retain=false 2023-07-21 13:20:17,631 DEBUG [RS:2;jenkins-hbase16:41329] zookeeper.ZKUtil(162): regionserver:41329-0x1018809df7a0003, quorum=127.0.0.1:61652, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase16.apache.org,41329,1689945615760 2023-07-21 13:20:17,631 WARN [RS:2;jenkins-hbase16:41329] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
2023-07-21 13:20:17,631 INFO [RS:2;jenkins-hbase16:41329] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-21 13:20:17,631 DEBUG [RS:0;jenkins-hbase16:39771] zookeeper.ZKUtil(162): regionserver:39771-0x1018809df7a0001, quorum=127.0.0.1:61652, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase16.apache.org,39771,1689945615330 2023-07-21 13:20:17,631 DEBUG [RS:2;jenkins-hbase16:41329] regionserver.HRegionServer(1948): logDir=hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/WALs/jenkins-hbase16.apache.org,41329,1689945615760 2023-07-21 13:20:17,631 WARN [RS:0;jenkins-hbase16:39771] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-21 13:20:17,632 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase16.apache.org,39771,1689945615330] 2023-07-21 13:20:17,632 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase16.apache.org,37511,1689945615545] 2023-07-21 13:20:17,632 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase16.apache.org,41329,1689945615760] 2023-07-21 13:20:17,632 INFO [RS:0;jenkins-hbase16:39771] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-21 13:20:17,633 DEBUG [RS:0;jenkins-hbase16:39771] regionserver.HRegionServer(1948): logDir=hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/WALs/jenkins-hbase16.apache.org,39771,1689945615330 2023-07-21 13:20:17,634 DEBUG [RS:1;jenkins-hbase16:37511] zookeeper.ZKUtil(162): regionserver:37511-0x1018809df7a0002, quorum=127.0.0.1:61652, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase16.apache.org,37511,1689945615545 2023-07-21 13:20:17,634 WARN [RS:1;jenkins-hbase16:37511] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
2023-07-21 13:20:17,634 INFO [RS:1;jenkins-hbase16:37511] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-21 13:20:17,634 DEBUG [RS:1;jenkins-hbase16:37511] regionserver.HRegionServer(1948): logDir=hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/WALs/jenkins-hbase16.apache.org,37511,1689945615545 2023-07-21 13:20:17,650 DEBUG [RS:1;jenkins-hbase16:37511] zookeeper.ZKUtil(162): regionserver:37511-0x1018809df7a0002, quorum=127.0.0.1:61652, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase16.apache.org,39771,1689945615330 2023-07-21 13:20:17,650 DEBUG [RS:0;jenkins-hbase16:39771] zookeeper.ZKUtil(162): regionserver:39771-0x1018809df7a0001, quorum=127.0.0.1:61652, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase16.apache.org,39771,1689945615330 2023-07-21 13:20:17,651 DEBUG [RS:2;jenkins-hbase16:41329] zookeeper.ZKUtil(162): regionserver:41329-0x1018809df7a0003, quorum=127.0.0.1:61652, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase16.apache.org,39771,1689945615330 2023-07-21 13:20:17,651 DEBUG [RS:0;jenkins-hbase16:39771] zookeeper.ZKUtil(162): regionserver:39771-0x1018809df7a0001, quorum=127.0.0.1:61652, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase16.apache.org,37511,1689945615545 2023-07-21 13:20:17,651 DEBUG [RS:2;jenkins-hbase16:41329] zookeeper.ZKUtil(162): regionserver:41329-0x1018809df7a0003, quorum=127.0.0.1:61652, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase16.apache.org,37511,1689945615545 2023-07-21 13:20:17,652 DEBUG [RS:2;jenkins-hbase16:41329] zookeeper.ZKUtil(162): regionserver:41329-0x1018809df7a0003, quorum=127.0.0.1:61652, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase16.apache.org,41329,1689945615760 2023-07-21 13:20:17,652 DEBUG [RS:0;jenkins-hbase16:39771] zookeeper.ZKUtil(162): regionserver:39771-0x1018809df7a0001, quorum=127.0.0.1:61652, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase16.apache.org,41329,1689945615760 2023-07-21 13:20:17,654 DEBUG [RS:1;jenkins-hbase16:37511] zookeeper.ZKUtil(162): regionserver:37511-0x1018809df7a0002, quorum=127.0.0.1:61652, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase16.apache.org,37511,1689945615545 2023-07-21 13:20:17,663 DEBUG [RS:1;jenkins-hbase16:37511] zookeeper.ZKUtil(162): regionserver:37511-0x1018809df7a0002, quorum=127.0.0.1:61652, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase16.apache.org,41329,1689945615760 2023-07-21 13:20:17,668 DEBUG [RS:1;jenkins-hbase16:37511] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-21 13:20:17,669 DEBUG [RS:0;jenkins-hbase16:39771] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-21 13:20:17,670 DEBUG [RS:2;jenkins-hbase16:41329] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-21 13:20:17,683 INFO [RS:2;jenkins-hbase16:41329] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-21 13:20:17,684 INFO [RS:1;jenkins-hbase16:37511] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-21 13:20:17,683 INFO [RS:0;jenkins-hbase16:39771] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-21 
13:20:17,710 INFO [RS:1;jenkins-hbase16:37511] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-21 13:20:17,710 INFO [RS:0;jenkins-hbase16:39771] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-21 13:20:17,710 INFO [RS:2;jenkins-hbase16:41329] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-21 13:20:17,717 INFO [RS:2;jenkins-hbase16:41329] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-21 13:20:17,717 INFO [RS:2;jenkins-hbase16:41329] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-21 13:20:17,718 INFO [RS:2;jenkins-hbase16:41329] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-21 13:20:17,717 INFO [RS:1;jenkins-hbase16:37511] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-21 13:20:17,717 INFO [RS:0;jenkins-hbase16:39771] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-21 13:20:17,723 INFO [RS:0;jenkins-hbase16:39771] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-21 13:20:17,722 INFO [RS:1;jenkins-hbase16:37511] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-21 13:20:17,724 INFO [RS:0;jenkins-hbase16:39771] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-21 13:20:17,724 INFO [RS:1;jenkins-hbase16:37511] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-21 13:20:17,732 INFO [RS:0;jenkins-hbase16:39771] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-21 13:20:17,732 INFO [RS:2;jenkins-hbase16:41329] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-21 13:20:17,732 INFO [RS:1;jenkins-hbase16:37511] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
2023-07-21 13:20:17,732 DEBUG [RS:0;jenkins-hbase16:39771] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase16:0, corePoolSize=1, maxPoolSize=1 2023-07-21 13:20:17,733 DEBUG [RS:1;jenkins-hbase16:37511] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase16:0, corePoolSize=1, maxPoolSize=1 2023-07-21 13:20:17,733 DEBUG [RS:0;jenkins-hbase16:39771] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase16:0, corePoolSize=1, maxPoolSize=1 2023-07-21 13:20:17,733 DEBUG [RS:1;jenkins-hbase16:37511] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase16:0, corePoolSize=1, maxPoolSize=1 2023-07-21 13:20:17,733 DEBUG [RS:0;jenkins-hbase16:39771] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase16:0, corePoolSize=1, maxPoolSize=1 2023-07-21 13:20:17,733 DEBUG [RS:1;jenkins-hbase16:37511] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase16:0, corePoolSize=1, maxPoolSize=1 2023-07-21 13:20:17,733 DEBUG [RS:0;jenkins-hbase16:39771] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase16:0, corePoolSize=1, maxPoolSize=1 2023-07-21 13:20:17,733 DEBUG [RS:1;jenkins-hbase16:37511] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase16:0, corePoolSize=1, maxPoolSize=1 2023-07-21 13:20:17,733 DEBUG [RS:0;jenkins-hbase16:39771] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase16:0, corePoolSize=1, maxPoolSize=1 2023-07-21 13:20:17,733 DEBUG [RS:1;jenkins-hbase16:37511] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase16:0, corePoolSize=1, maxPoolSize=1 2023-07-21 13:20:17,733 DEBUG [RS:0;jenkins-hbase16:39771] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase16:0, corePoolSize=2, maxPoolSize=2 2023-07-21 13:20:17,734 DEBUG [RS:1;jenkins-hbase16:37511] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase16:0, corePoolSize=2, maxPoolSize=2 2023-07-21 13:20:17,734 DEBUG [RS:0;jenkins-hbase16:39771] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase16:0, corePoolSize=1, maxPoolSize=1 2023-07-21 13:20:17,734 DEBUG [RS:1;jenkins-hbase16:37511] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase16:0, corePoolSize=1, maxPoolSize=1 2023-07-21 13:20:17,734 DEBUG [RS:0;jenkins-hbase16:39771] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase16:0, corePoolSize=1, maxPoolSize=1 2023-07-21 13:20:17,734 DEBUG [RS:1;jenkins-hbase16:37511] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase16:0, corePoolSize=1, maxPoolSize=1 2023-07-21 13:20:17,734 DEBUG [RS:0;jenkins-hbase16:39771] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase16:0, corePoolSize=1, maxPoolSize=1 2023-07-21 13:20:17,734 DEBUG [RS:1;jenkins-hbase16:37511] executor.ExecutorService(93): Starting executor service 
name=RS_REFRESH_PEER-regionserver/jenkins-hbase16:0, corePoolSize=1, maxPoolSize=1 2023-07-21 13:20:17,734 DEBUG [RS:0;jenkins-hbase16:39771] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase16:0, corePoolSize=1, maxPoolSize=1 2023-07-21 13:20:17,734 DEBUG [RS:1;jenkins-hbase16:37511] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase16:0, corePoolSize=1, maxPoolSize=1 2023-07-21 13:20:17,737 INFO [RS:0;jenkins-hbase16:39771] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-21 13:20:17,737 INFO [RS:0;jenkins-hbase16:39771] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-21 13:20:17,737 INFO [RS:0;jenkins-hbase16:39771] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-21 13:20:17,732 DEBUG [RS:2;jenkins-hbase16:41329] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase16:0, corePoolSize=1, maxPoolSize=1 2023-07-21 13:20:17,739 DEBUG [RS:2;jenkins-hbase16:41329] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase16:0, corePoolSize=1, maxPoolSize=1 2023-07-21 13:20:17,739 DEBUG [RS:2;jenkins-hbase16:41329] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase16:0, corePoolSize=1, maxPoolSize=1 2023-07-21 13:20:17,739 DEBUG [RS:2;jenkins-hbase16:41329] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase16:0, corePoolSize=1, maxPoolSize=1 2023-07-21 13:20:17,740 DEBUG [RS:2;jenkins-hbase16:41329] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase16:0, corePoolSize=1, maxPoolSize=1 2023-07-21 13:20:17,740 DEBUG [RS:2;jenkins-hbase16:41329] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase16:0, corePoolSize=2, maxPoolSize=2 2023-07-21 13:20:17,740 DEBUG [RS:2;jenkins-hbase16:41329] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase16:0, corePoolSize=1, maxPoolSize=1 2023-07-21 13:20:17,740 DEBUG [RS:2;jenkins-hbase16:41329] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase16:0, corePoolSize=1, maxPoolSize=1 2023-07-21 13:20:17,740 DEBUG [RS:2;jenkins-hbase16:41329] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase16:0, corePoolSize=1, maxPoolSize=1 2023-07-21 13:20:17,740 DEBUG [RS:2;jenkins-hbase16:41329] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase16:0, corePoolSize=1, maxPoolSize=1 2023-07-21 13:20:17,743 INFO [RS:1;jenkins-hbase16:37511] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-21 13:20:17,743 INFO [RS:1;jenkins-hbase16:37511] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-21 13:20:17,743 INFO [RS:1;jenkins-hbase16:37511] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 
2023-07-21 13:20:17,755 INFO [RS:2;jenkins-hbase16:41329] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-21 13:20:17,755 INFO [RS:2;jenkins-hbase16:41329] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-21 13:20:17,755 INFO [RS:2;jenkins-hbase16:41329] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-21 13:20:17,764 INFO [RS:0;jenkins-hbase16:39771] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-21 13:20:17,766 INFO [RS:1;jenkins-hbase16:37511] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-21 13:20:17,769 INFO [RS:1;jenkins-hbase16:37511] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase16.apache.org,37511,1689945615545-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-21 13:20:17,769 INFO [RS:0;jenkins-hbase16:39771] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase16.apache.org,39771,1689945615330-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-21 13:20:17,771 INFO [RS:2;jenkins-hbase16:41329] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-21 13:20:17,771 INFO [RS:2;jenkins-hbase16:41329] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase16.apache.org,41329,1689945615760-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-21 13:20:17,780 DEBUG [jenkins-hbase16:40019] assignment.AssignmentManager(2176): Processing assignQueue; systemServersCount=3, allServersCount=3 2023-07-21 13:20:17,787 DEBUG [jenkins-hbase16:40019] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase16.apache.org=0} racks are {/default-rack=0} 2023-07-21 13:20:17,793 DEBUG [jenkins-hbase16:40019] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-21 13:20:17,793 DEBUG [jenkins-hbase16:40019] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-21 13:20:17,793 DEBUG [jenkins-hbase16:40019] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-21 13:20:17,793 DEBUG [jenkins-hbase16:40019] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-21 13:20:17,797 INFO [PEWorker-2] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase16.apache.org,37511,1689945615545, state=OPENING 2023-07-21 13:20:17,797 INFO [RS:1;jenkins-hbase16:37511] regionserver.Replication(203): jenkins-hbase16.apache.org,37511,1689945615545 started 2023-07-21 13:20:17,798 INFO [RS:1;jenkins-hbase16:37511] regionserver.HRegionServer(1637): Serving as jenkins-hbase16.apache.org,37511,1689945615545, RpcServer on jenkins-hbase16.apache.org/188.40.62.62:37511, sessionid=0x1018809df7a0002 2023-07-21 13:20:17,798 DEBUG [RS:1;jenkins-hbase16:37511] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-21 13:20:17,798 DEBUG [RS:1;jenkins-hbase16:37511] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase16.apache.org,37511,1689945615545 2023-07-21 13:20:17,798 DEBUG [RS:1;jenkins-hbase16:37511] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase16.apache.org,37511,1689945615545' 2023-07-21 13:20:17,798 DEBUG [RS:1;jenkins-hbase16:37511] procedure.ZKProcedureMemberRpcs(134): Checking for 
aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-21 13:20:17,799 DEBUG [RS:1;jenkins-hbase16:37511] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-21 13:20:17,800 DEBUG [RS:1;jenkins-hbase16:37511] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-21 13:20:17,800 DEBUG [RS:1;jenkins-hbase16:37511] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-21 13:20:17,800 DEBUG [RS:1;jenkins-hbase16:37511] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase16.apache.org,37511,1689945615545 2023-07-21 13:20:17,800 DEBUG [RS:1;jenkins-hbase16:37511] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase16.apache.org,37511,1689945615545' 2023-07-21 13:20:17,800 DEBUG [RS:1;jenkins-hbase16:37511] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-21 13:20:17,800 DEBUG [RS:1;jenkins-hbase16:37511] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-21 13:20:17,801 DEBUG [RS:1;jenkins-hbase16:37511] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-21 13:20:17,801 INFO [RS:1;jenkins-hbase16:37511] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-21 13:20:17,801 INFO [RS:1;jenkins-hbase16:37511] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 2023-07-21 13:20:17,801 INFO [RS:2;jenkins-hbase16:41329] regionserver.Replication(203): jenkins-hbase16.apache.org,41329,1689945615760 started 2023-07-21 13:20:17,801 INFO [RS:2;jenkins-hbase16:41329] regionserver.HRegionServer(1637): Serving as jenkins-hbase16.apache.org,41329,1689945615760, RpcServer on jenkins-hbase16.apache.org/188.40.62.62:41329, sessionid=0x1018809df7a0003 2023-07-21 13:20:17,802 INFO [RS:0;jenkins-hbase16:39771] regionserver.Replication(203): jenkins-hbase16.apache.org,39771,1689945615330 started 2023-07-21 13:20:17,802 DEBUG [RS:2;jenkins-hbase16:41329] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-21 13:20:17,802 INFO [RS:0;jenkins-hbase16:39771] regionserver.HRegionServer(1637): Serving as jenkins-hbase16.apache.org,39771,1689945615330, RpcServer on jenkins-hbase16.apache.org/188.40.62.62:39771, sessionid=0x1018809df7a0001 2023-07-21 13:20:17,802 DEBUG [RS:2;jenkins-hbase16:41329] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase16.apache.org,41329,1689945615760 2023-07-21 13:20:17,803 DEBUG [RS:0;jenkins-hbase16:39771] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-21 13:20:17,803 DEBUG [RS:2;jenkins-hbase16:41329] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase16.apache.org,41329,1689945615760' 2023-07-21 13:20:17,804 DEBUG [RS:0;jenkins-hbase16:39771] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase16.apache.org,39771,1689945615330 2023-07-21 13:20:17,804 DEBUG [RS:2;jenkins-hbase16:41329] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-21 13:20:17,804 DEBUG [RS:0;jenkins-hbase16:39771] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 
'jenkins-hbase16.apache.org,39771,1689945615330' 2023-07-21 13:20:17,804 DEBUG [RS:0;jenkins-hbase16:39771] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-21 13:20:17,804 DEBUG [RS:0;jenkins-hbase16:39771] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-21 13:20:17,804 DEBUG [RS:2;jenkins-hbase16:41329] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-21 13:20:17,805 DEBUG [RS:0;jenkins-hbase16:39771] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-21 13:20:17,805 DEBUG [RS:0;jenkins-hbase16:39771] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-21 13:20:17,805 DEBUG [RS:0;jenkins-hbase16:39771] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase16.apache.org,39771,1689945615330 2023-07-21 13:20:17,805 DEBUG [RS:2;jenkins-hbase16:41329] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-21 13:20:17,805 DEBUG [RS:0;jenkins-hbase16:39771] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase16.apache.org,39771,1689945615330' 2023-07-21 13:20:17,805 DEBUG [RS:2;jenkins-hbase16:41329] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-21 13:20:17,806 DEBUG [RS:2;jenkins-hbase16:41329] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase16.apache.org,41329,1689945615760 2023-07-21 13:20:17,806 DEBUG [RS:0;jenkins-hbase16:39771] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-21 13:20:17,806 DEBUG [RS:2;jenkins-hbase16:41329] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase16.apache.org,41329,1689945615760' 2023-07-21 13:20:17,806 DEBUG [RS:2;jenkins-hbase16:41329] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-21 13:20:17,817 DEBUG [RS:2;jenkins-hbase16:41329] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-21 13:20:17,817 DEBUG [RS:0;jenkins-hbase16:39771] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-21 13:20:17,817 DEBUG [PEWorker-2] zookeeper.MetaTableLocator(240): hbase:meta region location doesn't exist, create it 2023-07-21 13:20:17,818 DEBUG [RS:2;jenkins-hbase16:41329] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-21 13:20:17,818 INFO [RS:2;jenkins-hbase16:41329] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-21 13:20:17,819 DEBUG [RS:0;jenkins-hbase16:39771] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-21 13:20:17,819 INFO [RS:2;jenkins-hbase16:41329] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 2023-07-21 13:20:17,819 INFO [RS:0;jenkins-hbase16:39771] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-21 13:20:17,819 INFO [RS:0;jenkins-hbase16:39771] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
2023-07-21 13:20:17,826 DEBUG [Listener at localhost.localdomain/36547-EventThread] zookeeper.ZKWatcher(600): master:40019-0x1018809df7a0000, quorum=127.0.0.1:61652, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-21 13:20:17,826 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-21 13:20:17,830 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=3, ppid=2, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase16.apache.org,37511,1689945615545}] 2023-07-21 13:20:17,917 INFO [RS:1;jenkins-hbase16:37511] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase16.apache.org%2C37511%2C1689945615545, suffix=, logDir=hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/WALs/jenkins-hbase16.apache.org,37511,1689945615545, archiveDir=hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/oldWALs, maxLogs=32 2023-07-21 13:20:17,922 INFO [RS:0;jenkins-hbase16:39771] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase16.apache.org%2C39771%2C1689945615330, suffix=, logDir=hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/WALs/jenkins-hbase16.apache.org,39771,1689945615330, archiveDir=hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/oldWALs, maxLogs=32 2023-07-21 13:20:17,927 INFO [RS:2;jenkins-hbase16:41329] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase16.apache.org%2C41329%2C1689945615760, suffix=, logDir=hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/WALs/jenkins-hbase16.apache.org,41329,1689945615760, archiveDir=hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/oldWALs, maxLogs=32 2023-07-21 13:20:17,943 DEBUG [RS-EventLoopGroup-5-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:33219,DS-1a78b274-4530-4d2f-8ed7-d62ddf857176,DISK] 2023-07-21 13:20:17,945 DEBUG [RS-EventLoopGroup-5-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:34567,DS-1f748205-1567-42fb-bfb1-399a5d715114,DISK] 2023-07-21 13:20:17,944 DEBUG [RS-EventLoopGroup-5-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:44461,DS-e277dd3e-c5f5-4531-aaa7-60b50737ae77,DISK] 2023-07-21 13:20:17,953 INFO [RS:1;jenkins-hbase16:37511] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/WALs/jenkins-hbase16.apache.org,37511,1689945615545/jenkins-hbase16.apache.org%2C37511%2C1689945615545.1689945617919 2023-07-21 13:20:17,953 DEBUG [RS:1;jenkins-hbase16:37511] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:33219,DS-1a78b274-4530-4d2f-8ed7-d62ddf857176,DISK], 
DatanodeInfoWithStorage[127.0.0.1:34567,DS-1f748205-1567-42fb-bfb1-399a5d715114,DISK], DatanodeInfoWithStorage[127.0.0.1:44461,DS-e277dd3e-c5f5-4531-aaa7-60b50737ae77,DISK]] 2023-07-21 13:20:17,976 DEBUG [RS-EventLoopGroup-5-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:33219,DS-1a78b274-4530-4d2f-8ed7-d62ddf857176,DISK] 2023-07-21 13:20:17,979 DEBUG [RS-EventLoopGroup-5-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:34567,DS-1f748205-1567-42fb-bfb1-399a5d715114,DISK] 2023-07-21 13:20:17,983 DEBUG [RS-EventLoopGroup-5-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:33219,DS-1a78b274-4530-4d2f-8ed7-d62ddf857176,DISK] 2023-07-21 13:20:17,988 DEBUG [RS-EventLoopGroup-5-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:34567,DS-1f748205-1567-42fb-bfb1-399a5d715114,DISK] 2023-07-21 13:20:17,988 DEBUG [RS-EventLoopGroup-5-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:44461,DS-e277dd3e-c5f5-4531-aaa7-60b50737ae77,DISK] 2023-07-21 13:20:17,989 DEBUG [RS-EventLoopGroup-5-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:44461,DS-e277dd3e-c5f5-4531-aaa7-60b50737ae77,DISK] 2023-07-21 13:20:17,995 INFO [RS:0;jenkins-hbase16:39771] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/WALs/jenkins-hbase16.apache.org,39771,1689945615330/jenkins-hbase16.apache.org%2C39771%2C1689945615330.1689945617925 2023-07-21 13:20:17,997 DEBUG [RS:0;jenkins-hbase16:39771] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:34567,DS-1f748205-1567-42fb-bfb1-399a5d715114,DISK], DatanodeInfoWithStorage[127.0.0.1:33219,DS-1a78b274-4530-4d2f-8ed7-d62ddf857176,DISK], DatanodeInfoWithStorage[127.0.0.1:44461,DS-e277dd3e-c5f5-4531-aaa7-60b50737ae77,DISK]] 2023-07-21 13:20:17,998 INFO [RS:2;jenkins-hbase16:41329] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/WALs/jenkins-hbase16.apache.org,41329,1689945615760/jenkins-hbase16.apache.org%2C41329%2C1689945615760.1689945617929 2023-07-21 13:20:17,998 DEBUG [RS:2;jenkins-hbase16:41329] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:33219,DS-1a78b274-4530-4d2f-8ed7-d62ddf857176,DISK], DatanodeInfoWithStorage[127.0.0.1:34567,DS-1f748205-1567-42fb-bfb1-399a5d715114,DISK], DatanodeInfoWithStorage[127.0.0.1:44461,DS-e277dd3e-c5f5-4531-aaa7-60b50737ae77,DISK]] 2023-07-21 13:20:18,019 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase16.apache.org,37511,1689945615545 2023-07-21 13:20:18,021 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for 
service=AdminService, sasl=false 2023-07-21 13:20:18,024 INFO [RS-EventLoopGroup-4-2] ipc.ServerRpcConnection(540): Connection from 188.40.62.62:51590, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-21 13:20:18,035 INFO [RS_OPEN_META-regionserver/jenkins-hbase16:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740 2023-07-21 13:20:18,036 INFO [RS_OPEN_META-regionserver/jenkins-hbase16:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-21 13:20:18,040 INFO [RS_OPEN_META-regionserver/jenkins-hbase16:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase16.apache.org%2C37511%2C1689945615545.meta, suffix=.meta, logDir=hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/WALs/jenkins-hbase16.apache.org,37511,1689945615545, archiveDir=hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/oldWALs, maxLogs=32 2023-07-21 13:20:18,060 DEBUG [RS-EventLoopGroup-5-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:33219,DS-1a78b274-4530-4d2f-8ed7-d62ddf857176,DISK] 2023-07-21 13:20:18,062 DEBUG [RS-EventLoopGroup-5-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:34567,DS-1f748205-1567-42fb-bfb1-399a5d715114,DISK] 2023-07-21 13:20:18,062 DEBUG [RS-EventLoopGroup-5-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:44461,DS-e277dd3e-c5f5-4531-aaa7-60b50737ae77,DISK] 2023-07-21 13:20:18,076 INFO [RS_OPEN_META-regionserver/jenkins-hbase16:0-0] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/WALs/jenkins-hbase16.apache.org,37511,1689945615545/jenkins-hbase16.apache.org%2C37511%2C1689945615545.meta.1689945618041.meta 2023-07-21 13:20:18,078 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase16:0-0] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:33219,DS-1a78b274-4530-4d2f-8ed7-d62ddf857176,DISK], DatanodeInfoWithStorage[127.0.0.1:34567,DS-1f748205-1567-42fb-bfb1-399a5d715114,DISK], DatanodeInfoWithStorage[127.0.0.1:44461,DS-e277dd3e-c5f5-4531-aaa7-60b50737ae77,DISK]] 2023-07-21 13:20:18,078 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-07-21 13:20:18,081 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase16:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-21 13:20:18,105 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2023-07-21 13:20:18,109 INFO [RS_OPEN_META-regionserver/jenkins-hbase16:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 
2023-07-21 13:20:18,115 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase16:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-07-21 13:20:18,115 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 13:20:18,116 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(7894): checking encryption for 1588230740 2023-07-21 13:20:18,116 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(7897): checking classloading for 1588230740 2023-07-21 13:20:18,119 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-21 13:20:18,121 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/hbase/meta/1588230740/info 2023-07-21 13:20:18,122 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/hbase/meta/1588230740/info 2023-07-21 13:20:18,123 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-21 13:20:18,125 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 13:20:18,125 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-21 13:20:18,127 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/hbase/meta/1588230740/rep_barrier 2023-07-21 13:20:18,127 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/hbase/meta/1588230740/rep_barrier 2023-07-21 13:20:18,127 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files 
[minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-21 13:20:18,128 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 13:20:18,129 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-21 13:20:18,131 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/hbase/meta/1588230740/table 2023-07-21 13:20:18,131 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/hbase/meta/1588230740/table 2023-07-21 13:20:18,132 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-21 13:20:18,133 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 13:20:18,134 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/hbase/meta/1588230740 2023-07-21 13:20:18,138 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/hbase/meta/1588230740 2023-07-21 13:20:18,141 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase16:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (16.0 M)) instead. 
2023-07-21 13:20:18,145 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-21 13:20:18,146 INFO [RS_OPEN_META-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=131072, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10150312640, jitterRate=-0.05467846989631653}}}, FlushLargeStoresPolicy{flushSizeLowerBound=16777216} 2023-07-21 13:20:18,147 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-21 13:20:18,157 INFO [RS_OPEN_META-regionserver/jenkins-hbase16:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:meta,,1.1588230740, pid=3, masterSystemTime=1689945618012 2023-07-21 13:20:18,177 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase16:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:meta,,1.1588230740 2023-07-21 13:20:18,178 INFO [RS_OPEN_META-regionserver/jenkins-hbase16:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 2023-07-21 13:20:18,179 INFO [PEWorker-5] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase16.apache.org,37511,1689945615545, state=OPEN 2023-07-21 13:20:18,184 DEBUG [Listener at localhost.localdomain/36547-EventThread] zookeeper.ZKWatcher(600): master:40019-0x1018809df7a0000, quorum=127.0.0.1:61652, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-21 13:20:18,184 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-21 13:20:18,189 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=3, resume processing ppid=2 2023-07-21 13:20:18,189 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=3, ppid=2, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase16.apache.org,37511,1689945615545 in 354 msec 2023-07-21 13:20:18,199 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=2, resume processing ppid=1 2023-07-21 13:20:18,199 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=2, ppid=1, state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN in 587 msec 2023-07-21 13:20:18,208 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=1, state=SUCCESS; InitMetaProcedure table=hbase:meta in 1.0460 sec 2023-07-21 13:20:18,209 INFO [master/jenkins-hbase16:0:becomeActiveMaster] master.HMaster(953): Wait for region servers to report in: status=null, state=RUNNING, startTime=1689945618209, completionTime=-1 2023-07-21 13:20:18,209 INFO [master/jenkins-hbase16:0:becomeActiveMaster] master.ServerManager(821): Finished waiting on RegionServer count=3; waited=0ms, expected min=3 server(s), max=3 server(s), master is running 2023-07-21 13:20:18,209 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] assignment.AssignmentManager(1517): Joining cluster... 
2023-07-21 13:20:18,282 DEBUG [hconnection-0x34d99ee6-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-21 13:20:18,285 INFO [RS-EventLoopGroup-4-3] ipc.ServerRpcConnection(540): Connection from 188.40.62.62:51606, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-21 13:20:18,305 INFO [master/jenkins-hbase16:0:becomeActiveMaster] assignment.AssignmentManager(1529): Number of RegionServers=3 2023-07-21 13:20:18,305 INFO [master/jenkins-hbase16:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$RegionInTransitionChore; timeout=60000, timestamp=1689945678305 2023-07-21 13:20:18,305 INFO [master/jenkins-hbase16:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$DeadServerMetricRegionChore; timeout=120000, timestamp=1689945738305 2023-07-21 13:20:18,305 INFO [master/jenkins-hbase16:0:becomeActiveMaster] assignment.AssignmentManager(1536): Joined the cluster in 96 msec 2023-07-21 13:20:18,346 INFO [master/jenkins-hbase16:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase16.apache.org,40019,1689945613483-ClusterStatusChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-21 13:20:18,346 INFO [master/jenkins-hbase16:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase16.apache.org,40019,1689945613483-BalancerChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-21 13:20:18,346 INFO [master/jenkins-hbase16:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase16.apache.org,40019,1689945613483-RegionNormalizerChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-21 13:20:18,348 INFO [master/jenkins-hbase16:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=CatalogJanitor-jenkins-hbase16:40019, period=300000, unit=MILLISECONDS is enabled. 2023-07-21 13:20:18,349 INFO [master/jenkins-hbase16:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HbckChore-, period=3600000, unit=MILLISECONDS is enabled. 2023-07-21 13:20:18,357 DEBUG [master/jenkins-hbase16:0.Chore.1] janitor.CatalogJanitor(175): 2023-07-21 13:20:18,364 INFO [master/jenkins-hbase16:0:becomeActiveMaster] master.TableNamespaceManager(92): Namespace table not found. Creating... 
2023-07-21 13:20:18,365 INFO [master/jenkins-hbase16:0:becomeActiveMaster] master.HMaster(2148): Client=null/null create 'hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-07-21 13:20:18,375 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:namespace 2023-07-21 13:20:18,377 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_PRE_OPERATION 2023-07-21 13:20:18,381 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-21 13:20:18,405 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/.tmp/data/hbase/namespace/6e3bc236743db613771f2e95c424ea61 2023-07-21 13:20:18,408 DEBUG [HFileArchiver-1] backup.HFileArchiver(153): Directory hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/.tmp/data/hbase/namespace/6e3bc236743db613771f2e95c424ea61 empty. 2023-07-21 13:20:18,409 DEBUG [HFileArchiver-1] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/.tmp/data/hbase/namespace/6e3bc236743db613771f2e95c424ea61 2023-07-21 13:20:18,409 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived hbase:namespace regions 2023-07-21 13:20:18,463 DEBUG [PEWorker-2] util.FSTableDescriptors(570): Wrote into hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/.tmp/data/hbase/namespace/.tabledesc/.tableinfo.0000000001 2023-07-21 13:20:18,465 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(7675): creating {ENCODED => 6e3bc236743db613771f2e95c424ea61, NAME => 'hbase:namespace,,1689945618365.6e3bc236743db613771f2e95c424ea61.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/.tmp 2023-07-21 13:20:18,486 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1689945618365.6e3bc236743db613771f2e95c424ea61.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 13:20:18,486 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1604): Closing 6e3bc236743db613771f2e95c424ea61, disabling compactions & flushes 2023-07-21 13:20:18,486 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1689945618365.6e3bc236743db613771f2e95c424ea61. 
2023-07-21 13:20:18,487 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1689945618365.6e3bc236743db613771f2e95c424ea61. 2023-07-21 13:20:18,487 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1689945618365.6e3bc236743db613771f2e95c424ea61. after waiting 0 ms 2023-07-21 13:20:18,487 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1689945618365.6e3bc236743db613771f2e95c424ea61. 2023-07-21 13:20:18,487 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1838): Closed hbase:namespace,,1689945618365.6e3bc236743db613771f2e95c424ea61. 2023-07-21 13:20:18,487 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1558): Region close journal for 6e3bc236743db613771f2e95c424ea61: 2023-07-21 13:20:18,492 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ADD_TO_META 2023-07-21 13:20:18,510 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:namespace,,1689945618365.6e3bc236743db613771f2e95c424ea61.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689945618496"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689945618496"}]},"ts":"1689945618496"} 2023-07-21 13:20:18,547 INFO [PEWorker-2] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-21 13:20:18,549 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-21 13:20:18,554 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689945618549"}]},"ts":"1689945618549"} 2023-07-21 13:20:18,559 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLING in hbase:meta 2023-07-21 13:20:18,576 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase16.apache.org=0} racks are {/default-rack=0} 2023-07-21 13:20:18,577 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-21 13:20:18,578 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-21 13:20:18,578 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-21 13:20:18,578 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-21 13:20:18,580 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=6e3bc236743db613771f2e95c424ea61, ASSIGN}] 2023-07-21 13:20:18,583 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=6e3bc236743db613771f2e95c424ea61, ASSIGN 2023-07-21 13:20:18,585 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, 
locked=true; TransitRegionStateProcedure table=hbase:namespace, region=6e3bc236743db613771f2e95c424ea61, ASSIGN; state=OFFLINE, location=jenkins-hbase16.apache.org,41329,1689945615760; forceNewPlan=false, retain=false 2023-07-21 13:20:18,740 INFO [jenkins-hbase16:40019] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 2023-07-21 13:20:18,741 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=6e3bc236743db613771f2e95c424ea61, regionState=OPENING, regionLocation=jenkins-hbase16.apache.org,41329,1689945615760 2023-07-21 13:20:18,741 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:namespace,,1689945618365.6e3bc236743db613771f2e95c424ea61.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689945618741"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689945618741"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689945618741"}]},"ts":"1689945618741"} 2023-07-21 13:20:18,748 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=6, ppid=5, state=RUNNABLE; OpenRegionProcedure 6e3bc236743db613771f2e95c424ea61, server=jenkins-hbase16.apache.org,41329,1689945615760}] 2023-07-21 13:20:18,902 DEBUG [RSProcedureDispatcher-pool-1] master.ServerManager(712): New admin connection to jenkins-hbase16.apache.org,41329,1689945615760 2023-07-21 13:20:18,903 DEBUG [RSProcedureDispatcher-pool-1] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-21 13:20:18,907 INFO [RS-EventLoopGroup-5-3] ipc.ServerRpcConnection(540): Connection from 188.40.62.62:47410, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-21 13:20:18,915 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase16:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1689945618365.6e3bc236743db613771f2e95c424ea61. 
2023-07-21 13:20:18,915 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 6e3bc236743db613771f2e95c424ea61, NAME => 'hbase:namespace,,1689945618365.6e3bc236743db613771f2e95c424ea61.', STARTKEY => '', ENDKEY => ''} 2023-07-21 13:20:18,916 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase16:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace 6e3bc236743db613771f2e95c424ea61 2023-07-21 13:20:18,916 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1689945618365.6e3bc236743db613771f2e95c424ea61.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 13:20:18,916 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(7894): checking encryption for 6e3bc236743db613771f2e95c424ea61 2023-07-21 13:20:18,916 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(7897): checking classloading for 6e3bc236743db613771f2e95c424ea61 2023-07-21 13:20:18,920 INFO [StoreOpener-6e3bc236743db613771f2e95c424ea61-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 6e3bc236743db613771f2e95c424ea61 2023-07-21 13:20:18,922 DEBUG [StoreOpener-6e3bc236743db613771f2e95c424ea61-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/hbase/namespace/6e3bc236743db613771f2e95c424ea61/info 2023-07-21 13:20:18,922 DEBUG [StoreOpener-6e3bc236743db613771f2e95c424ea61-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/hbase/namespace/6e3bc236743db613771f2e95c424ea61/info 2023-07-21 13:20:18,923 INFO [StoreOpener-6e3bc236743db613771f2e95c424ea61-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 6e3bc236743db613771f2e95c424ea61 columnFamilyName info 2023-07-21 13:20:18,924 INFO [StoreOpener-6e3bc236743db613771f2e95c424ea61-1] regionserver.HStore(310): Store=6e3bc236743db613771f2e95c424ea61/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 13:20:18,925 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/hbase/namespace/6e3bc236743db613771f2e95c424ea61 2023-07-21 13:20:18,926 DEBUG 
[RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/hbase/namespace/6e3bc236743db613771f2e95c424ea61 2023-07-21 13:20:18,930 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(1055): writing seq id for 6e3bc236743db613771f2e95c424ea61 2023-07-21 13:20:18,935 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase16:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/hbase/namespace/6e3bc236743db613771f2e95c424ea61/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-21 13:20:18,936 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(1072): Opened 6e3bc236743db613771f2e95c424ea61; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=131072, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9997278080, jitterRate=-0.06893092393875122}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 13:20:18,936 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(965): Region open journal for 6e3bc236743db613771f2e95c424ea61: 2023-07-21 13:20:18,938 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase16:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:namespace,,1689945618365.6e3bc236743db613771f2e95c424ea61., pid=6, masterSystemTime=1689945618902 2023-07-21 13:20:18,944 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase16:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:namespace,,1689945618365.6e3bc236743db613771f2e95c424ea61. 2023-07-21 13:20:18,944 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase16:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1689945618365.6e3bc236743db613771f2e95c424ea61. 
2023-07-21 13:20:18,947 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=6e3bc236743db613771f2e95c424ea61, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase16.apache.org,41329,1689945615760 2023-07-21 13:20:18,947 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1689945618365.6e3bc236743db613771f2e95c424ea61.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689945618946"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689945618946"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689945618946"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689945618946"}]},"ts":"1689945618946"} 2023-07-21 13:20:18,954 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=6, resume processing ppid=5 2023-07-21 13:20:18,954 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=6, ppid=5, state=SUCCESS; OpenRegionProcedure 6e3bc236743db613771f2e95c424ea61, server=jenkins-hbase16.apache.org,41329,1689945615760 in 202 msec 2023-07-21 13:20:18,958 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=5, resume processing ppid=4 2023-07-21 13:20:18,959 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=5, ppid=4, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=6e3bc236743db613771f2e95c424ea61, ASSIGN in 374 msec 2023-07-21 13:20:18,960 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-21 13:20:18,960 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689945618960"}]},"ts":"1689945618960"} 2023-07-21 13:20:18,963 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLED in hbase:meta 2023-07-21 13:20:19,002 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:40019-0x1018809df7a0000, quorum=127.0.0.1:61652, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/namespace 2023-07-21 13:20:19,002 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_POST_OPERATION 2023-07-21 13:20:19,008 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=4, state=SUCCESS; CreateTableProcedure table=hbase:namespace in 637 msec 2023-07-21 13:20:19,009 DEBUG [Listener at localhost.localdomain/36547-EventThread] zookeeper.ZKWatcher(600): master:40019-0x1018809df7a0000, quorum=127.0.0.1:61652, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/namespace 2023-07-21 13:20:19,009 DEBUG [Listener at localhost.localdomain/36547-EventThread] zookeeper.ZKWatcher(600): master:40019-0x1018809df7a0000, quorum=127.0.0.1:61652, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-21 13:20:19,037 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-21 13:20:19,039 INFO [RS-EventLoopGroup-5-1] ipc.ServerRpcConnection(540): Connection from 188.40.62.62:47424, 
version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-21 13:20:19,053 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=7, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=default 2023-07-21 13:20:19,155 DEBUG [Listener at localhost.localdomain/36547-EventThread] zookeeper.ZKWatcher(600): master:40019-0x1018809df7a0000, quorum=127.0.0.1:61652, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-21 13:20:19,173 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=7, state=SUCCESS; CreateNamespaceProcedure, namespace=default in 127 msec 2023-07-21 13:20:19,179 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=8, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=hbase 2023-07-21 13:20:19,200 DEBUG [Listener at localhost.localdomain/36547-EventThread] zookeeper.ZKWatcher(600): master:40019-0x1018809df7a0000, quorum=127.0.0.1:61652, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-21 13:20:19,214 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=8, state=SUCCESS; CreateNamespaceProcedure, namespace=hbase in 36 msec 2023-07-21 13:20:19,242 DEBUG [Listener at localhost.localdomain/36547-EventThread] zookeeper.ZKWatcher(600): master:40019-0x1018809df7a0000, quorum=127.0.0.1:61652, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/default 2023-07-21 13:20:19,259 DEBUG [Listener at localhost.localdomain/36547-EventThread] zookeeper.ZKWatcher(600): master:40019-0x1018809df7a0000, quorum=127.0.0.1:61652, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/hbase 2023-07-21 13:20:19,259 INFO [master/jenkins-hbase16:0:becomeActiveMaster] master.HMaster(1083): Master has completed initialization 3.265sec 2023-07-21 13:20:19,262 INFO [master/jenkins-hbase16:0:becomeActiveMaster] quotas.MasterQuotaManager(97): Quota support disabled 2023-07-21 13:20:19,264 INFO [master/jenkins-hbase16:0:becomeActiveMaster] slowlog.SlowLogMasterService(57): Slow/Large requests logging to system table hbase:slowlog is disabled. Quitting. 2023-07-21 13:20:19,264 INFO [master/jenkins-hbase16:0:becomeActiveMaster] zookeeper.ZKWatcher(269): not a secure deployment, proceeding 2023-07-21 13:20:19,267 INFO [master/jenkins-hbase16:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase16.apache.org,40019,1689945613483-ExpiredMobFileCleanerChore, period=86400, unit=SECONDS is enabled. 2023-07-21 13:20:19,268 INFO [master/jenkins-hbase16:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase16.apache.org,40019,1689945613483-MobCompactionChore, period=604800, unit=SECONDS is enabled. 
2023-07-21 13:20:19,276 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster] master.HMaster(1175): Balancer post startup initialization complete, took 0 seconds 2023-07-21 13:20:19,308 DEBUG [Listener at localhost.localdomain/36547] zookeeper.ReadOnlyZKClient(139): Connect 0x7ce7db44 to 127.0.0.1:61652 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-21 13:20:19,318 DEBUG [Listener at localhost.localdomain/36547] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@64a79bb3, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-21 13:20:19,334 DEBUG [hconnection-0x64d55ac7-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-21 13:20:19,346 INFO [RS-EventLoopGroup-4-1] ipc.ServerRpcConnection(540): Connection from 188.40.62.62:51616, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-21 13:20:19,359 INFO [Listener at localhost.localdomain/36547] hbase.HBaseTestingUtility(1145): Minicluster is up; activeMaster=jenkins-hbase16.apache.org,40019,1689945613483 2023-07-21 13:20:19,371 DEBUG [Listener at localhost.localdomain/36547] ipc.RpcConnection(124): Using SIMPLE authentication for service=MasterService, sasl=false 2023-07-21 13:20:19,374 INFO [RS-EventLoopGroup-1-2] ipc.ServerRpcConnection(540): Connection from 188.40.62.62:56606, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=MasterService 2023-07-21 13:20:19,392 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40019] master.HMaster$4(2112): Client=jenkins//188.40.62.62 create 'TestCP', {TABLE_ATTRIBUTES => {coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.example.WriteHeavyIncrementObserver|1073741823|'}}, {NAME => 'cf', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-21 13:20:19,395 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40019] procedure2.ProcedureExecutor(1029): Stored pid=9, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=TestCP 2023-07-21 13:20:19,399 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=TestCP execute state=CREATE_TABLE_PRE_OPERATION 2023-07-21 13:20:19,403 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=TestCP execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-21 13:20:19,408 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/.tmp/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f 2023-07-21 13:20:19,408 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40019] master.MasterRpcServices(700): Client=jenkins//188.40.62.62 procedure request for creating table: namespace: "default" qualifier: "TestCP" procId is: 9 2023-07-21 13:20:19,409 DEBUG [HFileArchiver-2] backup.HFileArchiver(153): Directory 
hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/.tmp/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f empty. 2023-07-21 13:20:19,412 DEBUG [HFileArchiver-2] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/.tmp/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f 2023-07-21 13:20:19,412 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived TestCP regions 2023-07-21 13:20:19,424 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40019] master.MasterRpcServices(1230): Checking to see if procedure is done pid=9 2023-07-21 13:20:19,457 DEBUG [PEWorker-3] util.FSTableDescriptors(570): Wrote into hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/.tmp/data/default/TestCP/.tabledesc/.tableinfo.0000000001 2023-07-21 13:20:19,459 INFO [RegionOpenAndInit-TestCP-pool-0] regionserver.HRegion(7675): creating {ENCODED => fb4efbfaac030a4093735b39d65fef7f, NAME => 'TestCP,,1689945619385.fb4efbfaac030a4093735b39d65fef7f.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='TestCP', {TABLE_ATTRIBUTES => {coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.example.WriteHeavyIncrementObserver|1073741823|'}}, {NAME => 'cf', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/.tmp 2023-07-21 13:20:19,491 DEBUG [RegionOpenAndInit-TestCP-pool-0] regionserver.HRegion(866): Instantiated TestCP,,1689945619385.fb4efbfaac030a4093735b39d65fef7f.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 13:20:19,491 DEBUG [RegionOpenAndInit-TestCP-pool-0] regionserver.HRegion(1604): Closing fb4efbfaac030a4093735b39d65fef7f, disabling compactions & flushes 2023-07-21 13:20:19,491 INFO [RegionOpenAndInit-TestCP-pool-0] regionserver.HRegion(1626): Closing region TestCP,,1689945619385.fb4efbfaac030a4093735b39d65fef7f. 2023-07-21 13:20:19,492 DEBUG [RegionOpenAndInit-TestCP-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on TestCP,,1689945619385.fb4efbfaac030a4093735b39d65fef7f. 2023-07-21 13:20:19,492 DEBUG [RegionOpenAndInit-TestCP-pool-0] regionserver.HRegion(1714): Acquired close lock on TestCP,,1689945619385.fb4efbfaac030a4093735b39d65fef7f. after waiting 0 ms 2023-07-21 13:20:19,492 DEBUG [RegionOpenAndInit-TestCP-pool-0] regionserver.HRegion(1724): Updates disabled for region TestCP,,1689945619385.fb4efbfaac030a4093735b39d65fef7f. 2023-07-21 13:20:19,492 INFO [RegionOpenAndInit-TestCP-pool-0] regionserver.HRegion(1838): Closed TestCP,,1689945619385.fb4efbfaac030a4093735b39d65fef7f. 
2023-07-21 13:20:19,492 DEBUG [RegionOpenAndInit-TestCP-pool-0] regionserver.HRegion(1558): Region close journal for fb4efbfaac030a4093735b39d65fef7f: 2023-07-21 13:20:19,496 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=TestCP execute state=CREATE_TABLE_ADD_TO_META 2023-07-21 13:20:19,499 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"TestCP,,1689945619385.fb4efbfaac030a4093735b39d65fef7f.","families":{"info":[{"qualifier":"regioninfo","vlen":40,"tag":[],"timestamp":"1689945619498"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689945619498"}]},"ts":"1689945619498"} 2023-07-21 13:20:19,502 INFO [PEWorker-3] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-21 13:20:19,504 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=TestCP execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-21 13:20:19,505 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"TestCP","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689945619504"}]},"ts":"1689945619504"} 2023-07-21 13:20:19,507 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=TestCP, state=ENABLING in hbase:meta 2023-07-21 13:20:19,526 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase16.apache.org=0} racks are {/default-rack=0} 2023-07-21 13:20:19,528 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-21 13:20:19,528 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-21 13:20:19,528 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-21 13:20:19,528 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-21 13:20:19,528 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=10, ppid=9, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=TestCP, region=fb4efbfaac030a4093735b39d65fef7f, ASSIGN}] 2023-07-21 13:20:19,531 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=10, ppid=9, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=TestCP, region=fb4efbfaac030a4093735b39d65fef7f, ASSIGN 2023-07-21 13:20:19,531 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40019] master.MasterRpcServices(1230): Checking to see if procedure is done pid=9 2023-07-21 13:20:19,536 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=10, ppid=9, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=TestCP, region=fb4efbfaac030a4093735b39d65fef7f, ASSIGN; state=OFFLINE, location=jenkins-hbase16.apache.org,39771,1689945615330; forceNewPlan=false, retain=false 2023-07-21 13:20:19,687 INFO [jenkins-hbase16:40019] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
2023-07-21 13:20:19,688 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=10 updating hbase:meta row=fb4efbfaac030a4093735b39d65fef7f, regionState=OPENING, regionLocation=jenkins-hbase16.apache.org,39771,1689945615330 2023-07-21 13:20:19,688 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"TestCP,,1689945619385.fb4efbfaac030a4093735b39d65fef7f.","families":{"info":[{"qualifier":"regioninfo","vlen":40,"tag":[],"timestamp":"1689945619688"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689945619688"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689945619688"}]},"ts":"1689945619688"} 2023-07-21 13:20:19,691 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=11, ppid=10, state=RUNNABLE; OpenRegionProcedure fb4efbfaac030a4093735b39d65fef7f, server=jenkins-hbase16.apache.org,39771,1689945615330}] 2023-07-21 13:20:19,734 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40019] master.MasterRpcServices(1230): Checking to see if procedure is done pid=9 2023-07-21 13:20:19,845 DEBUG [RSProcedureDispatcher-pool-2] master.ServerManager(712): New admin connection to jenkins-hbase16.apache.org,39771,1689945615330 2023-07-21 13:20:19,845 DEBUG [RSProcedureDispatcher-pool-2] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-21 13:20:19,848 INFO [RS-EventLoopGroup-3-2] ipc.ServerRpcConnection(540): Connection from 188.40.62.62:57158, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-21 13:20:19,854 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase16:0-0] handler.AssignRegionHandler(130): Open TestCP,,1689945619385.fb4efbfaac030a4093735b39d65fef7f. 2023-07-21 13:20:19,854 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => fb4efbfaac030a4093735b39d65fef7f, NAME => 'TestCP,,1689945619385.fb4efbfaac030a4093735b39d65fef7f.', STARTKEY => '', ENDKEY => ''} 2023-07-21 13:20:19,854 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase16:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.example.WriteHeavyIncrementObserver with path null and priority 1073741823 2023-07-21 13:20:19,858 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase16:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.example.WriteHeavyIncrementObserver from HTD of TestCP successfully. 
2023-07-21 13:20:19,859 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase16:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table TestCP fb4efbfaac030a4093735b39d65fef7f 2023-07-21 13:20:19,859 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(866): Instantiated TestCP,,1689945619385.fb4efbfaac030a4093735b39d65fef7f.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-21 13:20:19,859 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(7894): checking encryption for fb4efbfaac030a4093735b39d65fef7f 2023-07-21 13:20:19,860 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(7897): checking classloading for fb4efbfaac030a4093735b39d65fef7f 2023-07-21 13:20:19,862 INFO [StoreOpener-fb4efbfaac030a4093735b39d65fef7f-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family cf of region fb4efbfaac030a4093735b39d65fef7f 2023-07-21 13:20:19,865 DEBUG [StoreOpener-fb4efbfaac030a4093735b39d65fef7f-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/cf 2023-07-21 13:20:19,865 DEBUG [StoreOpener-fb4efbfaac030a4093735b39d65fef7f-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/cf 2023-07-21 13:20:19,865 INFO [StoreOpener-fb4efbfaac030a4093735b39d65fef7f-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region fb4efbfaac030a4093735b39d65fef7f columnFamilyName cf 2023-07-21 13:20:19,866 INFO [StoreOpener-fb4efbfaac030a4093735b39d65fef7f-1] regionserver.HStore(310): Store=fb4efbfaac030a4093735b39d65fef7f/cf, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-21 13:20:19,867 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f 2023-07-21 13:20:19,870 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f 2023-07-21 13:20:19,874 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(1055): 
writing seq id for fb4efbfaac030a4093735b39d65fef7f 2023-07-21 13:20:19,877 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase16:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-21 13:20:19,878 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(1072): Opened fb4efbfaac030a4093735b39d65fef7f; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=131072, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10517113280, jitterRate=-0.020517498254776}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-21 13:20:19,878 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(965): Region open journal for fb4efbfaac030a4093735b39d65fef7f: 2023-07-21 13:20:19,879 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase16:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for TestCP,,1689945619385.fb4efbfaac030a4093735b39d65fef7f., pid=11, masterSystemTime=1689945619845 2023-07-21 13:20:19,884 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase16:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for TestCP,,1689945619385.fb4efbfaac030a4093735b39d65fef7f. 2023-07-21 13:20:19,884 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase16:0-0] handler.AssignRegionHandler(158): Opened TestCP,,1689945619385.fb4efbfaac030a4093735b39d65fef7f. 2023-07-21 13:20:19,885 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=10 updating hbase:meta row=fb4efbfaac030a4093735b39d65fef7f, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase16.apache.org,39771,1689945615330 2023-07-21 13:20:19,886 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"TestCP,,1689945619385.fb4efbfaac030a4093735b39d65fef7f.","families":{"info":[{"qualifier":"regioninfo","vlen":40,"tag":[],"timestamp":"1689945619885"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689945619885"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689945619885"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689945619885"}]},"ts":"1689945619885"} 2023-07-21 13:20:19,892 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=11, resume processing ppid=10 2023-07-21 13:20:19,892 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=11, ppid=10, state=SUCCESS; OpenRegionProcedure fb4efbfaac030a4093735b39d65fef7f, server=jenkins-hbase16.apache.org,39771,1689945615330 in 198 msec 2023-07-21 13:20:19,895 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=10, resume processing ppid=9 2023-07-21 13:20:19,895 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=10, ppid=9, state=SUCCESS; TransitRegionStateProcedure table=TestCP, region=fb4efbfaac030a4093735b39d65fef7f, ASSIGN in 364 msec 2023-07-21 13:20:19,896 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=TestCP execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-21 13:20:19,897 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"TestCP","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689945619896"}]},"ts":"1689945619896"} 2023-07-21 13:20:19,899 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated 
tableName=TestCP, state=ENABLED in hbase:meta 2023-07-21 13:20:19,910 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=TestCP execute state=CREATE_TABLE_POST_OPERATION 2023-07-21 13:20:19,914 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=9, state=SUCCESS; CreateTableProcedure table=TestCP in 518 msec 2023-07-21 13:20:20,036 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40019] master.MasterRpcServices(1230): Checking to see if procedure is done pid=9 2023-07-21 13:20:20,036 INFO [Listener at localhost.localdomain/36547] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:TestCP, procId: 9 completed 2023-07-21 13:20:20,070 INFO [Listener at localhost.localdomain/36547] hbase.ResourceChecker(147): before: coprocessor.example.TestWriteHeavyIncrementObserver#test Thread=413, OpenFileDescriptor=724, MaxFileDescriptor=60000, SystemLoadAverage=426, ProcessCount=175, AvailableMemoryMB=6606 2023-07-21 13:20:20,081 DEBUG [increment-5] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-21 13:20:20,085 INFO [RS-EventLoopGroup-3-3] ipc.ServerRpcConnection(540): Connection from 188.40.62.62:57160, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-21 13:20:20,444 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing fb4efbfaac030a4093735b39d65fef7f 1/1 column families, dataSize=20.60 KB heapSize=64.34 KB 2023-07-21 13:20:20,444 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=39771] regionserver.HRegion(9158): Flush requested on fb4efbfaac030a4093735b39d65fef7f 2023-07-21 13:20:20,912 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=20.74 KB at sequenceid=299 (bloomFilter=true), to=hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/.tmp/cf/7cba18fd94164bb2a378395b4ce699fb 2023-07-21 13:20:21,102 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/.tmp/cf/7cba18fd94164bb2a378395b4ce699fb as hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/cf/7cba18fd94164bb2a378395b4ce699fb 2023-07-21 13:20:21,121 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/cf/7cba18fd94164bb2a378395b4ce699fb, entries=2, sequenceid=299, filesize=4.8 K 2023-07-21 13:20:21,138 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~20.74 KB/21240, heapSize ~64.77 KB/66320, currentSize=20.88 KB/21384 for fb4efbfaac030a4093735b39d65fef7f in 694ms, sequenceid=299, compaction requested=false 2023-07-21 13:20:21,140 DEBUG [MemStoreFlusher.0] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'TestCP' 2023-07-21 13:20:21,144 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for fb4efbfaac030a4093735b39d65fef7f: 2023-07-21 13:20:21,146 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39771] regionserver.HRegion(9158): Flush requested on 
fb4efbfaac030a4093735b39d65fef7f 2023-07-21 13:20:21,149 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing fb4efbfaac030a4093735b39d65fef7f 1/1 column families, dataSize=21.45 KB heapSize=66.97 KB 2023-07-21 13:20:21,376 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=21.66 KB at sequenceid=610 (bloomFilter=true), to=hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/.tmp/cf/9232723d51854d3290863c588c31958f 2023-07-21 13:20:21,421 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/.tmp/cf/9232723d51854d3290863c588c31958f as hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/cf/9232723d51854d3290863c588c31958f 2023-07-21 13:20:21,438 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/cf/9232723d51854d3290863c588c31958f, entries=2, sequenceid=610, filesize=4.8 K 2023-07-21 13:20:21,442 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~21.66 KB/22176, heapSize ~67.61 KB/69232, currentSize=11.32 KB/11592 for fb4efbfaac030a4093735b39d65fef7f in 293ms, sequenceid=610, compaction requested=false 2023-07-21 13:20:21,442 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for fb4efbfaac030a4093735b39d65fef7f: 2023-07-21 13:20:21,622 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39771] regionserver.HRegion(9158): Flush requested on fb4efbfaac030a4093735b39d65fef7f 2023-07-21 13:20:21,622 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing fb4efbfaac030a4093735b39d65fef7f 1/1 column families, dataSize=20.53 KB heapSize=64.13 KB 2023-07-21 13:20:21,795 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=20.67 KB at sequenceid=907 (bloomFilter=true), to=hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/.tmp/cf/2f9236475c084531be6ad588a5cf35dd 2023-07-21 13:20:21,856 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/.tmp/cf/2f9236475c084531be6ad588a5cf35dd as hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/cf/2f9236475c084531be6ad588a5cf35dd 2023-07-21 13:20:21,878 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/cf/2f9236475c084531be6ad588a5cf35dd, entries=2, sequenceid=907, filesize=4.8 K 2023-07-21 13:20:21,881 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~20.67 KB/21168, heapSize ~64.55 KB/66096, currentSize=11.74 KB/12024 for fb4efbfaac030a4093735b39d65fef7f in 259ms, sequenceid=907, compaction requested=true 2023-07-21 13:20:21,882 DEBUG [MemStoreFlusher.0] 
regionserver.HRegion(2446): Flush status journal for fb4efbfaac030a4093735b39d65fef7f: 2023-07-21 13:20:21,891 DEBUG [MemStoreFlusher.0] regionserver.CompactSplit(385): Small Compaction requested: system; Because: MemStoreFlusher.0; compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-07-21 13:20:21,896 DEBUG [RS:0;jenkins-hbase16:39771-shortCompactions-0] compactions.SortedCompactionPolicy(75): Selecting compaction from 3 store files, 0 compacting, 3 eligible, 16 blocking 2023-07-21 13:20:21,930 DEBUG [RS:0;jenkins-hbase16:39771-shortCompactions-0] compactions.ExploringCompactionPolicy(116): Exploring compaction algorithm has selected 3 files of size 14718 starting at candidate #0 after considering 1 permutations with 1 in ratio 2023-07-21 13:20:21,937 DEBUG [RS:0;jenkins-hbase16:39771-shortCompactions-0] regionserver.HStore(1912): fb4efbfaac030a4093735b39d65fef7f/cf is initiating minor compaction (all files) 2023-07-21 13:20:21,939 INFO [RS:0;jenkins-hbase16:39771-shortCompactions-0] regionserver.HRegion(2259): Starting compaction of fb4efbfaac030a4093735b39d65fef7f/cf in TestCP,,1689945619385.fb4efbfaac030a4093735b39d65fef7f. 2023-07-21 13:20:21,940 INFO [RS:0;jenkins-hbase16:39771-shortCompactions-0] regionserver.HStore(1468): Starting compaction of [hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/cf/7cba18fd94164bb2a378395b4ce699fb, hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/cf/9232723d51854d3290863c588c31958f, hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/cf/2f9236475c084531be6ad588a5cf35dd] into tmpdir=hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/.tmp, totalSize=14.4 K 2023-07-21 13:20:21,945 DEBUG [RS:0;jenkins-hbase16:39771-shortCompactions-0] compactions.Compactor(207): Compacting 7cba18fd94164bb2a378395b4ce699fb, keycount=2, bloomtype=ROW, size=4.8 K, encoding=NONE, compression=NONE, seqNum=299, earliestPutTs=1730504314969088 2023-07-21 13:20:21,946 DEBUG [RS:0;jenkins-hbase16:39771-shortCompactions-0] compactions.Compactor(207): Compacting 9232723d51854d3290863c588c31958f, keycount=2, bloomtype=ROW, size=4.8 K, encoding=NONE, compression=NONE, seqNum=610, earliestPutTs=1730504315335680 2023-07-21 13:20:21,953 DEBUG [RS:0;jenkins-hbase16:39771-shortCompactions-0] compactions.Compactor(207): Compacting 2f9236475c084531be6ad588a5cf35dd, keycount=2, bloomtype=ROW, size=4.8 K, encoding=NONE, compression=NONE, seqNum=907, earliestPutTs=1730504316059648 2023-07-21 13:20:22,025 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39771] regionserver.HRegion(9158): Flush requested on fb4efbfaac030a4093735b39d65fef7f 2023-07-21 13:20:22,025 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing fb4efbfaac030a4093735b39d65fef7f 1/1 column families, dataSize=20.53 KB heapSize=64.13 KB 2023-07-21 13:20:22,068 INFO [RS:0;jenkins-hbase16:39771-shortCompactions-0] throttle.PressureAwareThroughputController(145): fb4efbfaac030a4093735b39d65fef7f#cf#compaction#3 average throughput is 0.07 MB/second, slept 0 time(s) and total slept time is 0 ms. 
0 active operations remaining, total limit is 50.00 MB/second 2023-07-21 13:20:22,209 WARN [DataStreamer for file /user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/.tmp/cf/ae107388ed984168bdd65afbc6b15d27] hdfs.DataStreamer(982): Caught exception java.lang.InterruptedException at java.lang.Object.wait(Native Method) at java.lang.Thread.join(Thread.java:1257) at java.lang.Thread.join(Thread.java:1331) at org.apache.hadoop.hdfs.DataStreamer.closeResponder(DataStreamer.java:980) at org.apache.hadoop.hdfs.DataStreamer.endBlock(DataStreamer.java:630) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:807) 2023-07-21 13:20:22,210 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=20.95 KB at sequenceid=1208 (bloomFilter=true), to=hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/.tmp/cf/ae107388ed984168bdd65afbc6b15d27 2023-07-21 13:20:22,250 DEBUG [RS:0;jenkins-hbase16:39771-shortCompactions-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/.tmp/cf/56824420e26a46bc9fbbafb5ab8a5412 as hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/cf/56824420e26a46bc9fbbafb5ab8a5412 2023-07-21 13:20:22,250 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/.tmp/cf/ae107388ed984168bdd65afbc6b15d27 as hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/cf/ae107388ed984168bdd65afbc6b15d27 2023-07-21 13:20:22,299 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/cf/ae107388ed984168bdd65afbc6b15d27, entries=2, sequenceid=1208, filesize=4.8 K 2023-07-21 13:20:22,303 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~20.95 KB/21456, heapSize ~65.42 KB/66992, currentSize=9.42 KB/9648 for fb4efbfaac030a4093735b39d65fef7f in 278ms, sequenceid=1208, compaction requested=false 2023-07-21 13:20:22,303 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for fb4efbfaac030a4093735b39d65fef7f: 2023-07-21 13:20:22,330 INFO [RS:0;jenkins-hbase16:39771-shortCompactions-0] regionserver.HStore(1652): Completed compaction of 3 (all) file(s) in fb4efbfaac030a4093735b39d65fef7f/cf of fb4efbfaac030a4093735b39d65fef7f into 56824420e26a46bc9fbbafb5ab8a5412(size=4.8 K), total size for store is 9.6 K. This selection was in queue for 0sec, and took 0sec to execute. 
2023-07-21 13:20:22,330 DEBUG [RS:0;jenkins-hbase16:39771-shortCompactions-0] regionserver.HRegion(2289): Compaction status journal for fb4efbfaac030a4093735b39d65fef7f: 2023-07-21 13:20:22,331 INFO [RS:0;jenkins-hbase16:39771-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(627): Completed compaction region=TestCP,,1689945619385.fb4efbfaac030a4093735b39d65fef7f., storeName=fb4efbfaac030a4093735b39d65fef7f/cf, priority=13, startTime=1689945621884; duration=0sec 2023-07-21 13:20:22,336 DEBUG [RS:0;jenkins-hbase16:39771-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(659): Status compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-07-21 13:20:22,435 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39771] regionserver.HRegion(9158): Flush requested on fb4efbfaac030a4093735b39d65fef7f 2023-07-21 13:20:22,435 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing fb4efbfaac030a4093735b39d65fef7f 1/1 column families, dataSize=20.60 KB heapSize=64.34 KB 2023-07-21 13:20:22,524 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=20.81 KB at sequenceid=1508 (bloomFilter=true), to=hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/.tmp/cf/ebefd65f210b43769839bc9a6b75f310 2023-07-21 13:20:22,558 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/.tmp/cf/ebefd65f210b43769839bc9a6b75f310 as hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/cf/ebefd65f210b43769839bc9a6b75f310 2023-07-21 13:20:22,577 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/cf/ebefd65f210b43769839bc9a6b75f310, entries=2, sequenceid=1508, filesize=4.8 K 2023-07-21 13:20:22,579 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~20.81 KB/21312, heapSize ~64.98 KB/66544, currentSize=10.13 KB/10368 for fb4efbfaac030a4093735b39d65fef7f in 143ms, sequenceid=1508, compaction requested=true 2023-07-21 13:20:22,579 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for fb4efbfaac030a4093735b39d65fef7f: 2023-07-21 13:20:22,580 DEBUG [MemStoreFlusher.0] regionserver.CompactSplit(385): Small Compaction requested: system; Because: MemStoreFlusher.0; compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-07-21 13:20:22,580 DEBUG [RS:0;jenkins-hbase16:39771-shortCompactions-0] compactions.SortedCompactionPolicy(75): Selecting compaction from 3 store files, 0 compacting, 3 eligible, 16 blocking 2023-07-21 13:20:22,583 DEBUG [RS:0;jenkins-hbase16:39771-shortCompactions-0] compactions.ExploringCompactionPolicy(116): Exploring compaction algorithm has selected 3 files of size 14759 starting at candidate #0 after considering 1 permutations with 1 in ratio 2023-07-21 13:20:22,583 DEBUG [RS:0;jenkins-hbase16:39771-shortCompactions-0] regionserver.HStore(1912): fb4efbfaac030a4093735b39d65fef7f/cf is initiating minor compaction (all files) 2023-07-21 13:20:22,583 INFO [RS:0;jenkins-hbase16:39771-shortCompactions-0] regionserver.HRegion(2259): Starting compaction of 
fb4efbfaac030a4093735b39d65fef7f/cf in TestCP,,1689945619385.fb4efbfaac030a4093735b39d65fef7f. 2023-07-21 13:20:22,583 INFO [RS:0;jenkins-hbase16:39771-shortCompactions-0] regionserver.HStore(1468): Starting compaction of [hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/cf/56824420e26a46bc9fbbafb5ab8a5412, hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/cf/ae107388ed984168bdd65afbc6b15d27, hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/cf/ebefd65f210b43769839bc9a6b75f310] into tmpdir=hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/.tmp, totalSize=14.4 K 2023-07-21 13:20:22,585 DEBUG [RS:0;jenkins-hbase16:39771-shortCompactions-0] compactions.Compactor(207): Compacting 56824420e26a46bc9fbbafb5ab8a5412, keycount=2, bloomtype=ROW, size=4.8 K, encoding=NONE, compression=NONE, seqNum=907, earliestPutTs=1730504314969088 2023-07-21 13:20:22,586 DEBUG [RS:0;jenkins-hbase16:39771-shortCompactions-0] compactions.Compactor(207): Compacting ae107388ed984168bdd65afbc6b15d27, keycount=2, bloomtype=ROW, size=4.8 K, encoding=NONE, compression=NONE, seqNum=1208, earliestPutTs=1730504316540928 2023-07-21 13:20:22,587 DEBUG [RS:0;jenkins-hbase16:39771-shortCompactions-0] compactions.Compactor(207): Compacting ebefd65f210b43769839bc9a6b75f310, keycount=2, bloomtype=ROW, size=4.8 K, encoding=NONE, compression=NONE, seqNum=1508, earliestPutTs=1730504316990464 2023-07-21 13:20:22,635 INFO [RS:0;jenkins-hbase16:39771-shortCompactions-0] throttle.PressureAwareThroughputController(145): fb4efbfaac030a4093735b39d65fef7f#cf#compaction#6 average throughput is 0.01 MB/second, slept 0 time(s) and total slept time is 0 ms. 0 active operations remaining, total limit is 50.00 MB/second 2023-07-21 13:20:22,680 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39771] regionserver.HRegion(9158): Flush requested on fb4efbfaac030a4093735b39d65fef7f 2023-07-21 13:20:22,680 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing fb4efbfaac030a4093735b39d65fef7f 1/1 column families, dataSize=20.53 KB heapSize=64.13 KB 2023-07-21 13:20:22,759 DEBUG [RS:0;jenkins-hbase16:39771-shortCompactions-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/.tmp/cf/31ca0432c3404a3a8d991ecfc854c344 as hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/cf/31ca0432c3404a3a8d991ecfc854c344 2023-07-21 13:20:22,796 INFO [RS:0;jenkins-hbase16:39771-shortCompactions-0] regionserver.HStore(1652): Completed compaction of 3 (all) file(s) in fb4efbfaac030a4093735b39d65fef7f/cf of fb4efbfaac030a4093735b39d65fef7f into 31ca0432c3404a3a8d991ecfc854c344(size=4.9 K), total size for store is 4.9 K. This selection was in queue for 0sec, and took 0sec to execute. 
2023-07-21 13:20:22,796 DEBUG [RS:0;jenkins-hbase16:39771-shortCompactions-0] regionserver.HRegion(2289): Compaction status journal for fb4efbfaac030a4093735b39d65fef7f: 2023-07-21 13:20:22,796 INFO [RS:0;jenkins-hbase16:39771-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(627): Completed compaction region=TestCP,,1689945619385.fb4efbfaac030a4093735b39d65fef7f., storeName=fb4efbfaac030a4093735b39d65fef7f/cf, priority=13, startTime=1689945622579; duration=0sec 2023-07-21 13:20:22,797 DEBUG [RS:0;jenkins-hbase16:39771-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(659): Status compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-07-21 13:20:22,838 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=21.16 KB at sequenceid=1812 (bloomFilter=true), to=hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/.tmp/cf/9babb9a93a7e48fdbf018cdfd7737e7a 2023-07-21 13:20:22,855 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/.tmp/cf/9babb9a93a7e48fdbf018cdfd7737e7a as hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/cf/9babb9a93a7e48fdbf018cdfd7737e7a 2023-07-21 13:20:22,881 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/cf/9babb9a93a7e48fdbf018cdfd7737e7a, entries=2, sequenceid=1812, filesize=4.8 K 2023-07-21 13:20:22,888 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~21.16 KB/21672, heapSize ~66.08 KB/67664, currentSize=8.09 KB/8280 for fb4efbfaac030a4093735b39d65fef7f in 208ms, sequenceid=1812, compaction requested=false 2023-07-21 13:20:22,888 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for fb4efbfaac030a4093735b39d65fef7f: 2023-07-21 13:20:23,005 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=39771] regionserver.HRegion(9158): Flush requested on fb4efbfaac030a4093735b39d65fef7f 2023-07-21 13:20:23,005 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing fb4efbfaac030a4093735b39d65fef7f 1/1 column families, dataSize=20.53 KB heapSize=64.13 KB 2023-07-21 13:20:23,109 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=20.67 KB at sequenceid=2110 (bloomFilter=true), to=hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/.tmp/cf/440161ac16fd4376b15f16a565b5bbf6 2023-07-21 13:20:23,122 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/.tmp/cf/440161ac16fd4376b15f16a565b5bbf6 as hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/cf/440161ac16fd4376b15f16a565b5bbf6 2023-07-21 13:20:23,207 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added 
hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/cf/440161ac16fd4376b15f16a565b5bbf6, entries=2, sequenceid=2110, filesize=4.8 K 2023-07-21 13:20:23,208 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~20.67 KB/21168, heapSize ~64.55 KB/66096, currentSize=12.59 KB/12888 for fb4efbfaac030a4093735b39d65fef7f in 203ms, sequenceid=2110, compaction requested=true 2023-07-21 13:20:23,208 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for fb4efbfaac030a4093735b39d65fef7f: 2023-07-21 13:20:23,208 DEBUG [MemStoreFlusher.0] regionserver.CompactSplit(385): Small Compaction requested: system; Because: MemStoreFlusher.0; compactionQueue=(longCompactions=0:shortCompactions=1), splitQueue=0 2023-07-21 13:20:23,209 DEBUG [RS:0;jenkins-hbase16:39771-shortCompactions-0] compactions.SortedCompactionPolicy(75): Selecting compaction from 3 store files, 0 compacting, 3 eligible, 16 blocking 2023-07-21 13:20:23,229 DEBUG [RS:0;jenkins-hbase16:39771-shortCompactions-0] compactions.ExploringCompactionPolicy(116): Exploring compaction algorithm has selected 3 files of size 14862 starting at candidate #0 after considering 1 permutations with 1 in ratio 2023-07-21 13:20:23,231 DEBUG [RS:0;jenkins-hbase16:39771-shortCompactions-0] regionserver.HStore(1912): fb4efbfaac030a4093735b39d65fef7f/cf is initiating minor compaction (all files) 2023-07-21 13:20:23,231 INFO [RS:0;jenkins-hbase16:39771-shortCompactions-0] regionserver.HRegion(2259): Starting compaction of fb4efbfaac030a4093735b39d65fef7f/cf in TestCP,,1689945619385.fb4efbfaac030a4093735b39d65fef7f. 2023-07-21 13:20:23,232 INFO [RS:0;jenkins-hbase16:39771-shortCompactions-0] regionserver.HStore(1468): Starting compaction of [hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/cf/31ca0432c3404a3a8d991ecfc854c344, hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/cf/9babb9a93a7e48fdbf018cdfd7737e7a, hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/cf/440161ac16fd4376b15f16a565b5bbf6] into tmpdir=hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/.tmp, totalSize=14.5 K 2023-07-21 13:20:23,234 DEBUG [RS:0;jenkins-hbase16:39771-shortCompactions-0] compactions.Compactor(207): Compacting 31ca0432c3404a3a8d991ecfc854c344, keycount=2, bloomtype=ROW, size=4.9 K, encoding=NONE, compression=NONE, seqNum=1508, earliestPutTs=1730504314969088 2023-07-21 13:20:23,236 DEBUG [RS:0;jenkins-hbase16:39771-shortCompactions-0] compactions.Compactor(207): Compacting 9babb9a93a7e48fdbf018cdfd7737e7a, keycount=2, bloomtype=ROW, size=4.8 K, encoding=NONE, compression=NONE, seqNum=1812, earliestPutTs=1730504317373442 2023-07-21 13:20:23,240 DEBUG [RS:0;jenkins-hbase16:39771-shortCompactions-0] compactions.Compactor(207): Compacting 440161ac16fd4376b15f16a565b5bbf6, keycount=2, bloomtype=ROW, size=4.8 K, encoding=NONE, compression=NONE, seqNum=2110, earliestPutTs=1730504317634560 2023-07-21 13:20:23,289 INFO [RS:0;jenkins-hbase16:39771-shortCompactions-0] throttle.PressureAwareThroughputController(145): fb4efbfaac030a4093735b39d65fef7f#cf#compaction#9 
average throughput is 0.07 MB/second, slept 0 time(s) and total slept time is 0 ms. 0 active operations remaining, total limit is 50.00 MB/second 2023-07-21 13:20:23,324 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39771] regionserver.HRegion(9158): Flush requested on fb4efbfaac030a4093735b39d65fef7f 2023-07-21 13:20:23,326 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing fb4efbfaac030a4093735b39d65fef7f 1/1 column families, dataSize=20.60 KB heapSize=64.34 KB 2023-07-21 13:20:23,540 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=20.60 KB at sequenceid=2406 (bloomFilter=true), to=hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/.tmp/cf/bee41ff7fa614b239582b6473b3bb4ec 2023-07-21 13:20:23,555 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/.tmp/cf/bee41ff7fa614b239582b6473b3bb4ec as hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/cf/bee41ff7fa614b239582b6473b3bb4ec 2023-07-21 13:20:23,576 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/cf/bee41ff7fa614b239582b6473b3bb4ec, entries=2, sequenceid=2406, filesize=4.8 K 2023-07-21 13:20:23,578 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~20.60 KB/21096, heapSize ~64.33 KB/65872, currentSize=14.63 KB/14976 for fb4efbfaac030a4093735b39d65fef7f in 254ms, sequenceid=2406, compaction requested=false 2023-07-21 13:20:23,578 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for fb4efbfaac030a4093735b39d65fef7f: 2023-07-21 13:20:23,648 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=39771] regionserver.HRegion(9158): Flush requested on fb4efbfaac030a4093735b39d65fef7f 2023-07-21 13:20:23,649 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing fb4efbfaac030a4093735b39d65fef7f 1/1 column families, dataSize=20.53 KB heapSize=64.13 KB 2023-07-21 13:20:23,755 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-hbase.properties,hadoop-metrics2.properties 2023-07-21 13:20:23,773 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=20.74 KB at sequenceid=2704 (bloomFilter=true), to=hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/.tmp/cf/dfde4798c32947fda314a53e92572118 2023-07-21 13:20:23,794 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/.tmp/cf/dfde4798c32947fda314a53e92572118 as hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/cf/dfde4798c32947fda314a53e92572118 2023-07-21 13:20:23,813 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added 
hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/cf/dfde4798c32947fda314a53e92572118, entries=2, sequenceid=2704, filesize=4.8 K 2023-07-21 13:20:23,816 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~20.74 KB/21240, heapSize ~64.77 KB/66320, currentSize=7.88 KB/8064 for fb4efbfaac030a4093735b39d65fef7f in 168ms, sequenceid=2704, compaction requested=false 2023-07-21 13:20:23,816 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for fb4efbfaac030a4093735b39d65fef7f: 2023-07-21 13:20:23,888 DEBUG [RS:0;jenkins-hbase16:39771-shortCompactions-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/.tmp/cf/e7cb1064c9fb46aa870bb6eef80dff11 as hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/cf/e7cb1064c9fb46aa870bb6eef80dff11 2023-07-21 13:20:23,913 INFO [RS:0;jenkins-hbase16:39771-shortCompactions-0] regionserver.HStore(1652): Completed compaction of 3 (all) file(s) in fb4efbfaac030a4093735b39d65fef7f/cf of fb4efbfaac030a4093735b39d65fef7f into e7cb1064c9fb46aa870bb6eef80dff11(size=5.0 K), total size for store is 14.6 K. This selection was in queue for 0sec, and took 0sec to execute. 2023-07-21 13:20:23,913 DEBUG [RS:0;jenkins-hbase16:39771-shortCompactions-0] regionserver.HRegion(2289): Compaction status journal for fb4efbfaac030a4093735b39d65fef7f: 2023-07-21 13:20:23,913 INFO [RS:0;jenkins-hbase16:39771-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(627): Completed compaction region=TestCP,,1689945619385.fb4efbfaac030a4093735b39d65fef7f., storeName=fb4efbfaac030a4093735b39d65fef7f/cf, priority=13, startTime=1689945623208; duration=0sec 2023-07-21 13:20:23,913 DEBUG [RS:0;jenkins-hbase16:39771-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(659): Status compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-07-21 13:20:23,958 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39771] regionserver.HRegion(9158): Flush requested on fb4efbfaac030a4093735b39d65fef7f 2023-07-21 13:20:23,958 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing fb4efbfaac030a4093735b39d65fef7f 1/1 column families, dataSize=20.53 KB heapSize=64.13 KB 2023-07-21 13:20:23,999 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(130): Registering adapter for the MetricRegistry: RegionServer,sub=Coprocessor.Region.CP_org.apache.hadoop.hbase.coprocessor.example.WriteHeavyIncrementObserver 2023-07-21 13:20:23,999 INFO [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(134): Registering RegionServer,sub=Coprocessor.Region.CP_org.apache.hadoop.hbase.coprocessor.example.WriteHeavyIncrementObserver Metrics about HBase RegionObservers 2023-07-21 13:20:24,000 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(130): Registering adapter for the MetricRegistry: RegionServer,sub=Coprocessor.Region.CP_org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-21 13:20:24,000 INFO [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(134): Registering RegionServer,sub=Coprocessor.Region.CP_org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint Metrics about HBase RegionObservers 2023-07-21 13:20:24,002 DEBUG [HBase-Metrics2-1] 
regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:meta' 2023-07-21 13:20:24,003 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:namespace' 2023-07-21 13:20:24,062 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=20.74 KB at sequenceid=3003 (bloomFilter=true), to=hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/.tmp/cf/b4a207a79cbe4d5196d6931d26a4a033 2023-07-21 13:20:24,088 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/.tmp/cf/b4a207a79cbe4d5196d6931d26a4a033 as hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/cf/b4a207a79cbe4d5196d6931d26a4a033 2023-07-21 13:20:24,105 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/cf/b4a207a79cbe4d5196d6931d26a4a033, entries=2, sequenceid=3003, filesize=4.8 K 2023-07-21 13:20:24,115 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~20.74 KB/21240, heapSize ~64.77 KB/66320, currentSize=6.54 KB/6696 for fb4efbfaac030a4093735b39d65fef7f in 153ms, sequenceid=3003, compaction requested=true 2023-07-21 13:20:24,118 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for fb4efbfaac030a4093735b39d65fef7f: 2023-07-21 13:20:24,118 DEBUG [MemStoreFlusher.0] regionserver.CompactSplit(385): Small Compaction requested: system; Because: MemStoreFlusher.0; compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-07-21 13:20:24,118 DEBUG [RS:0;jenkins-hbase16:39771-shortCompactions-0] compactions.SortedCompactionPolicy(75): Selecting compaction from 4 store files, 0 compacting, 4 eligible, 16 blocking 2023-07-21 13:20:24,122 DEBUG [RS:0;jenkins-hbase16:39771-shortCompactions-0] compactions.ExploringCompactionPolicy(116): Exploring compaction algorithm has selected 4 files of size 19870 starting at candidate #0 after considering 3 permutations with 3 in ratio 2023-07-21 13:20:24,123 DEBUG [RS:0;jenkins-hbase16:39771-shortCompactions-0] regionserver.HStore(1912): fb4efbfaac030a4093735b39d65fef7f/cf is initiating minor compaction (all files) 2023-07-21 13:20:24,123 INFO [RS:0;jenkins-hbase16:39771-shortCompactions-0] regionserver.HRegion(2259): Starting compaction of fb4efbfaac030a4093735b39d65fef7f/cf in TestCP,,1689945619385.fb4efbfaac030a4093735b39d65fef7f. 
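
Editor's note: the metrics registration above names the coprocessor under test, org.apache.hadoop.hbase.coprocessor.example.WriteHeavyIncrementObserver, which (roughly) turns client increments into plain puts and aggregates them at read, flush, and compaction time. The test's own table setup is not visible in this log; the sketch below is a hedged reconstruction of how such a table could be declared with the standard client API, reusing only the TestCP and cf names that do appear above.

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.TableDescriptor;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
import org.apache.hadoop.hbase.coprocessor.example.WriteHeavyIncrementObserver;
import org.apache.hadoop.hbase.util.Bytes;

public class CreateTestCpTable {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
         Admin admin = conn.getAdmin()) {
      // Table and family names ("TestCP", "cf") are the ones visible in the log;
      // registering the example observer by class name is one of several ways to attach it.
      TableDescriptor td = TableDescriptorBuilder.newBuilder(TableName.valueOf("TestCP"))
          .setCoprocessor(WriteHeavyIncrementObserver.class.getName())
          .setColumnFamily(ColumnFamilyDescriptorBuilder.of(Bytes.toBytes("cf")))
          .build();
      admin.createTable(td);
    }
  }
}
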
2023-07-21 13:20:24,123 INFO [RS:0;jenkins-hbase16:39771-shortCompactions-0] regionserver.HStore(1468): Starting compaction of [hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/cf/e7cb1064c9fb46aa870bb6eef80dff11, hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/cf/bee41ff7fa614b239582b6473b3bb4ec, hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/cf/dfde4798c32947fda314a53e92572118, hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/cf/b4a207a79cbe4d5196d6931d26a4a033] into tmpdir=hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/.tmp, totalSize=19.4 K 2023-07-21 13:20:24,124 DEBUG [RS:0;jenkins-hbase16:39771-shortCompactions-0] compactions.Compactor(207): Compacting e7cb1064c9fb46aa870bb6eef80dff11, keycount=2, bloomtype=ROW, size=5.0 K, encoding=NONE, compression=NONE, seqNum=2110, earliestPutTs=1730504314969088 2023-07-21 13:20:24,127 DEBUG [RS:0;jenkins-hbase16:39771-shortCompactions-0] compactions.Compactor(207): Compacting bee41ff7fa614b239582b6473b3bb4ec, keycount=2, bloomtype=ROW, size=4.8 K, encoding=NONE, compression=NONE, seqNum=2406, earliestPutTs=1730504317957121 2023-07-21 13:20:24,129 DEBUG [RS:0;jenkins-hbase16:39771-shortCompactions-0] compactions.Compactor(207): Compacting dfde4798c32947fda314a53e92572118, keycount=2, bloomtype=ROW, size=4.8 K, encoding=NONE, compression=NONE, seqNum=2704, earliestPutTs=1730504318285824 2023-07-21 13:20:24,132 DEBUG [RS:0;jenkins-hbase16:39771-shortCompactions-0] compactions.Compactor(207): Compacting b4a207a79cbe4d5196d6931d26a4a033, keycount=2, bloomtype=ROW, size=4.8 K, encoding=NONE, compression=NONE, seqNum=3003, earliestPutTs=1730504318616576 2023-07-21 13:20:24,196 INFO [RS:0;jenkins-hbase16:39771-shortCompactions-0] throttle.PressureAwareThroughputController(145): fb4efbfaac030a4093735b39d65fef7f#cf#compaction#13 average throughput is 0.07 MB/second, slept 0 time(s) and total slept time is 0 ms. 
0 active operations remaining, total limit is 50.00 MB/second 2023-07-21 13:20:24,299 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39771] regionserver.HRegion(9158): Flush requested on fb4efbfaac030a4093735b39d65fef7f 2023-07-21 13:20:24,300 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing fb4efbfaac030a4093735b39d65fef7f 1/1 column families, dataSize=20.53 KB heapSize=64.13 KB 2023-07-21 13:20:24,318 DEBUG [RS:0;jenkins-hbase16:39771-shortCompactions-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/.tmp/cf/efa86ce2921b40d2a0c159caeab8e7bf as hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/cf/efa86ce2921b40d2a0c159caeab8e7bf 2023-07-21 13:20:24,349 INFO [RS:0;jenkins-hbase16:39771-shortCompactions-0] regionserver.HStore(1652): Completed compaction of 4 (all) file(s) in fb4efbfaac030a4093735b39d65fef7f/cf of fb4efbfaac030a4093735b39d65fef7f into efa86ce2921b40d2a0c159caeab8e7bf(size=5.2 K), total size for store is 5.2 K. This selection was in queue for 0sec, and took 0sec to execute. 2023-07-21 13:20:24,349 DEBUG [RS:0;jenkins-hbase16:39771-shortCompactions-0] regionserver.HRegion(2289): Compaction status journal for fb4efbfaac030a4093735b39d65fef7f: 2023-07-21 13:20:24,349 INFO [RS:0;jenkins-hbase16:39771-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(627): Completed compaction region=TestCP,,1689945619385.fb4efbfaac030a4093735b39d65fef7f., storeName=fb4efbfaac030a4093735b39d65fef7f/cf, priority=12, startTime=1689945624118; duration=0sec 2023-07-21 13:20:24,349 DEBUG [RS:0;jenkins-hbase16:39771-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(659): Status compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-07-21 13:20:24,432 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=20.67 KB at sequenceid=3300 (bloomFilter=true), to=hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/.tmp/cf/d3d5dfb7432f494ba86637731dc2f773 2023-07-21 13:20:24,448 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/.tmp/cf/d3d5dfb7432f494ba86637731dc2f773 as hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/cf/d3d5dfb7432f494ba86637731dc2f773 2023-07-21 13:20:24,461 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/cf/d3d5dfb7432f494ba86637731dc2f773, entries=2, sequenceid=3300, filesize=4.8 K 2023-07-21 13:20:24,462 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~20.67 KB/21168, heapSize ~64.55 KB/66096, currentSize=9.77 KB/10008 for fb4efbfaac030a4093735b39d65fef7f in 163ms, sequenceid=3300, compaction requested=false 2023-07-21 13:20:24,462 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for fb4efbfaac030a4093735b39d65fef7f: 2023-07-21 13:20:24,591 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39771] regionserver.HRegion(9158): Flush requested on fb4efbfaac030a4093735b39d65fef7f 2023-07-21 13:20:24,591 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing fb4efbfaac030a4093735b39d65fef7f 1/1 column families, dataSize=20.53 KB heapSize=64.13 KB 2023-07-21 13:20:24,716 WARN [DataStreamer for file /user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/.tmp/cf/a1ea101ff83440b2b2bbdb89b402f29b] hdfs.DataStreamer(982): Caught exception java.lang.InterruptedException at java.lang.Object.wait(Native Method) at java.lang.Thread.join(Thread.java:1257) at java.lang.Thread.join(Thread.java:1331) at org.apache.hadoop.hdfs.DataStreamer.closeResponder(DataStreamer.java:980) at org.apache.hadoop.hdfs.DataStreamer.endBlock(DataStreamer.java:630) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:807) 2023-07-21 13:20:24,717 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=20.67 KB at sequenceid=3598 (bloomFilter=true), to=hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/.tmp/cf/a1ea101ff83440b2b2bbdb89b402f29b 2023-07-21 13:20:24,732 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/.tmp/cf/a1ea101ff83440b2b2bbdb89b402f29b as hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/cf/a1ea101ff83440b2b2bbdb89b402f29b 2023-07-21 13:20:24,764 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/cf/a1ea101ff83440b2b2bbdb89b402f29b, entries=2, sequenceid=3598, filesize=4.8 K 2023-07-21 13:20:24,765 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~20.67 KB/21168, heapSize ~64.55 KB/66096, currentSize=17.72 KB/18144 for fb4efbfaac030a4093735b39d65fef7f in 173ms, sequenceid=3598, compaction requested=true 2023-07-21 13:20:24,765 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for fb4efbfaac030a4093735b39d65fef7f: 2023-07-21 13:20:24,765 DEBUG [MemStoreFlusher.0] regionserver.CompactSplit(385): Small Compaction requested: system; Because: MemStoreFlusher.0; compactionQueue=(longCompactions=0:shortCompactions=1), splitQueue=0 2023-07-21 13:20:24,765 DEBUG [RS:0;jenkins-hbase16:39771-longCompactions-0] compactions.SortedCompactionPolicy(75): Selecting compaction from 3 store files, 0 compacting, 3 eligible, 16 blocking 2023-07-21 13:20:24,769 DEBUG [RS:0;jenkins-hbase16:39771-longCompactions-0] compactions.ExploringCompactionPolicy(116): Exploring compaction algorithm has selected 3 files of size 15100 starting at candidate #0 after considering 1 permutations with 1 in ratio 2023-07-21 13:20:24,769 DEBUG [RS:0;jenkins-hbase16:39771-longCompactions-0] regionserver.HStore(1912): fb4efbfaac030a4093735b39d65fef7f/cf is initiating minor compaction (all files) 2023-07-21 13:20:24,769 INFO [RS:0;jenkins-hbase16:39771-longCompactions-0] regionserver.HRegion(2259): Starting compaction of fb4efbfaac030a4093735b39d65fef7f/cf in TestCP,,1689945619385.fb4efbfaac030a4093735b39d65fef7f. 
2023-07-21 13:20:24,770 INFO [RS:0;jenkins-hbase16:39771-longCompactions-0] regionserver.HStore(1468): Starting compaction of [hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/cf/efa86ce2921b40d2a0c159caeab8e7bf, hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/cf/d3d5dfb7432f494ba86637731dc2f773, hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/cf/a1ea101ff83440b2b2bbdb89b402f29b] into tmpdir=hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/.tmp, totalSize=14.7 K 2023-07-21 13:20:24,771 DEBUG [RS:0;jenkins-hbase16:39771-longCompactions-0] compactions.Compactor(207): Compacting efa86ce2921b40d2a0c159caeab8e7bf, keycount=2, bloomtype=ROW, size=5.2 K, encoding=NONE, compression=NONE, seqNum=3003, earliestPutTs=1730504314969088 2023-07-21 13:20:24,772 DEBUG [RS:0;jenkins-hbase16:39771-longCompactions-0] compactions.Compactor(207): Compacting d3d5dfb7432f494ba86637731dc2f773, keycount=2, bloomtype=ROW, size=4.8 K, encoding=NONE, compression=NONE, seqNum=3300, earliestPutTs=1730504318932993 2023-07-21 13:20:24,774 DEBUG [RS:0;jenkins-hbase16:39771-longCompactions-0] compactions.Compactor(207): Compacting a1ea101ff83440b2b2bbdb89b402f29b, keycount=2, bloomtype=ROW, size=4.8 K, encoding=NONE, compression=NONE, seqNum=3598, earliestPutTs=1730504319284224 2023-07-21 13:20:24,796 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39771] regionserver.HRegion(9158): Flush requested on fb4efbfaac030a4093735b39d65fef7f 2023-07-21 13:20:24,797 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing fb4efbfaac030a4093735b39d65fef7f 1/1 column families, dataSize=20.53 KB heapSize=64.13 KB 2023-07-21 13:20:24,867 INFO [RS:0;jenkins-hbase16:39771-longCompactions-0] throttle.PressureAwareThroughputController(145): fb4efbfaac030a4093735b39d65fef7f#cf#compaction#17 average throughput is 0.07 MB/second, slept 0 time(s) and total slept time is 0 ms. 
0 active operations remaining, total limit is 50.00 MB/second 2023-07-21 13:20:24,929 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=21.66 KB at sequenceid=3909 (bloomFilter=true), to=hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/.tmp/cf/fe2e0fbb980142e5ad2c6fd2a3a8ece8 2023-07-21 13:20:24,960 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/.tmp/cf/fe2e0fbb980142e5ad2c6fd2a3a8ece8 as hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/cf/fe2e0fbb980142e5ad2c6fd2a3a8ece8 2023-07-21 13:20:24,978 DEBUG [RS:0;jenkins-hbase16:39771-longCompactions-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/.tmp/cf/8ffabce69082428cb41c0ec323104c64 as hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/cf/8ffabce69082428cb41c0ec323104c64 2023-07-21 13:20:24,986 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/cf/fe2e0fbb980142e5ad2c6fd2a3a8ece8, entries=2, sequenceid=3909, filesize=4.8 K 2023-07-21 13:20:24,990 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~21.66 KB/22176, heapSize ~67.61 KB/69232, currentSize=8.23 KB/8424 for fb4efbfaac030a4093735b39d65fef7f in 194ms, sequenceid=3909, compaction requested=false 2023-07-21 13:20:24,995 INFO [RS:0;jenkins-hbase16:39771-longCompactions-0] regionserver.HStore(1652): Completed compaction of 3 (all) file(s) in fb4efbfaac030a4093735b39d65fef7f/cf of fb4efbfaac030a4093735b39d65fef7f into 8ffabce69082428cb41c0ec323104c64(size=5.3 K), total size for store is 10.1 K. This selection was in queue for 0sec, and took 0sec to execute. 
2023-07-21 13:20:24,995 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for fb4efbfaac030a4093735b39d65fef7f: 2023-07-21 13:20:24,995 DEBUG [RS:0;jenkins-hbase16:39771-longCompactions-0] regionserver.HRegion(2289): Compaction status journal for fb4efbfaac030a4093735b39d65fef7f: 2023-07-21 13:20:24,995 INFO [RS:0;jenkins-hbase16:39771-longCompactions-0] regionserver.CompactSplit$CompactionRunner(627): Completed compaction region=TestCP,,1689945619385.fb4efbfaac030a4093735b39d65fef7f., storeName=fb4efbfaac030a4093735b39d65fef7f/cf, priority=13, startTime=1689945624765; duration=0sec 2023-07-21 13:20:24,995 DEBUG [RS:0;jenkins-hbase16:39771-longCompactions-0] regionserver.CompactSplit$CompactionRunner(659): Status compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-07-21 13:20:25,117 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing fb4efbfaac030a4093735b39d65fef7f 1/1 column families, dataSize=20.60 KB heapSize=64.34 KB 2023-07-21 13:20:25,117 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39771] regionserver.HRegion(9158): Flush requested on fb4efbfaac030a4093735b39d65fef7f 2023-07-21 13:20:25,220 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=20.74 KB at sequenceid=4208 (bloomFilter=true), to=hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/.tmp/cf/d943f185764d4a4aac95b337b4e51f03 2023-07-21 13:20:25,248 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/.tmp/cf/d943f185764d4a4aac95b337b4e51f03 as hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/cf/d943f185764d4a4aac95b337b4e51f03 2023-07-21 13:20:25,261 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/cf/d943f185764d4a4aac95b337b4e51f03, entries=2, sequenceid=4208, filesize=4.8 K 2023-07-21 13:20:25,263 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~20.74 KB/21240, heapSize ~64.77 KB/66320, currentSize=9.07 KB/9288 for fb4efbfaac030a4093735b39d65fef7f in 146ms, sequenceid=4208, compaction requested=true 2023-07-21 13:20:25,263 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for fb4efbfaac030a4093735b39d65fef7f: 2023-07-21 13:20:25,263 DEBUG [MemStoreFlusher.0] regionserver.CompactSplit(385): Small Compaction requested: system; Because: MemStoreFlusher.0; compactionQueue=(longCompactions=0:shortCompactions=1), splitQueue=0 2023-07-21 13:20:25,263 DEBUG [RS:0;jenkins-hbase16:39771-longCompactions-0] compactions.SortedCompactionPolicy(75): Selecting compaction from 3 store files, 0 compacting, 3 eligible, 16 blocking 2023-07-21 13:20:25,266 DEBUG [RS:0;jenkins-hbase16:39771-longCompactions-0] compactions.ExploringCompactionPolicy(116): Exploring compaction algorithm has selected 3 files of size 15202 starting at candidate #0 after considering 1 permutations with 1 in ratio 2023-07-21 13:20:25,266 DEBUG [RS:0;jenkins-hbase16:39771-longCompactions-0] regionserver.HStore(1912): fb4efbfaac030a4093735b39d65fef7f/cf is initiating minor compaction (all files) 
2023-07-21 13:20:25,266 INFO [RS:0;jenkins-hbase16:39771-longCompactions-0] regionserver.HRegion(2259): Starting compaction of fb4efbfaac030a4093735b39d65fef7f/cf in TestCP,,1689945619385.fb4efbfaac030a4093735b39d65fef7f. 2023-07-21 13:20:25,266 INFO [RS:0;jenkins-hbase16:39771-longCompactions-0] regionserver.HStore(1468): Starting compaction of [hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/cf/8ffabce69082428cb41c0ec323104c64, hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/cf/fe2e0fbb980142e5ad2c6fd2a3a8ece8, hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/cf/d943f185764d4a4aac95b337b4e51f03] into tmpdir=hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/.tmp, totalSize=14.8 K 2023-07-21 13:20:25,267 DEBUG [RS:0;jenkins-hbase16:39771-longCompactions-0] compactions.Compactor(207): Compacting 8ffabce69082428cb41c0ec323104c64, keycount=2, bloomtype=ROW, size=5.3 K, encoding=NONE, compression=NONE, seqNum=3598, earliestPutTs=1730504314969088 2023-07-21 13:20:25,268 DEBUG [RS:0;jenkins-hbase16:39771-longCompactions-0] compactions.Compactor(207): Compacting fe2e0fbb980142e5ad2c6fd2a3a8ece8, keycount=2, bloomtype=ROW, size=4.8 K, encoding=NONE, compression=NONE, seqNum=3909, earliestPutTs=1730504319582208 2023-07-21 13:20:25,269 DEBUG [RS:0;jenkins-hbase16:39771-longCompactions-0] compactions.Compactor(207): Compacting d943f185764d4a4aac95b337b4e51f03, keycount=2, bloomtype=ROW, size=4.8 K, encoding=NONE, compression=NONE, seqNum=4208, earliestPutTs=1730504319806464 2023-07-21 13:20:25,317 INFO [RS:0;jenkins-hbase16:39771-longCompactions-0] throttle.PressureAwareThroughputController(145): fb4efbfaac030a4093735b39d65fef7f#cf#compaction#19 average throughput is 0.07 MB/second, slept 0 time(s) and total slept time is 0 ms. 0 active operations remaining, total limit is 50.00 MB/second 2023-07-21 13:20:25,374 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=39771] regionserver.HRegion(9158): Flush requested on fb4efbfaac030a4093735b39d65fef7f 2023-07-21 13:20:25,375 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing fb4efbfaac030a4093735b39d65fef7f 1/1 column families, dataSize=20.60 KB heapSize=64.34 KB 2023-07-21 13:20:25,467 DEBUG [RS:0;jenkins-hbase16:39771-longCompactions-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/.tmp/cf/7760da9d233741828978c2e5a70167fa as hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/cf/7760da9d233741828978c2e5a70167fa 2023-07-21 13:20:25,480 INFO [RS:0;jenkins-hbase16:39771-longCompactions-0] regionserver.HStore(1652): Completed compaction of 3 (all) file(s) in fb4efbfaac030a4093735b39d65fef7f/cf of fb4efbfaac030a4093735b39d65fef7f into 7760da9d233741828978c2e5a70167fa(size=5.4 K), total size for store is 5.4 K. This selection was in queue for 0sec, and took 0sec to execute. 
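
Editor's note: the pattern repeating through this section, a ~20 KB flush every few hundred milliseconds followed by a minor compaction once three or four small files accumulate, is what a steady stream of increments against one column family produces. The test's actual load generator is not shown in this log; a simplified, hedged equivalent using the public client API would look like the snippet below (the row key and qualifier are invented for illustration, only the family name comes from the log).

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Increment;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class IncrementLoad {
  public static void main(String[] args) throws Exception {
    byte[] cf = Bytes.toBytes("cf");   // family name from the log
    byte[] cq = Bytes.toBytes("cq");   // qualifier invented for illustration
    byte[] row = Bytes.toBytes("row"); // likewise
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
         Table table = conn.getTable(TableName.valueOf("TestCP"))) {
      for (int i = 0; i < 10_000; i++) {
        // Each increment lands in the memstore; once enough accumulate, the region
        // server flushes a small HFile like the ~20 KB ones logged above.
        table.increment(new Increment(row).addColumn(cf, cq, 1L));
      }
    }
  }
}
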
2023-07-21 13:20:25,480 DEBUG [RS:0;jenkins-hbase16:39771-longCompactions-0] regionserver.HRegion(2289): Compaction status journal for fb4efbfaac030a4093735b39d65fef7f: 2023-07-21 13:20:25,480 INFO [RS:0;jenkins-hbase16:39771-longCompactions-0] regionserver.CompactSplit$CompactionRunner(627): Completed compaction region=TestCP,,1689945619385.fb4efbfaac030a4093735b39d65fef7f., storeName=fb4efbfaac030a4093735b39d65fef7f/cf, priority=13, startTime=1689945625263; duration=0sec 2023-07-21 13:20:25,481 DEBUG [RS:0;jenkins-hbase16:39771-longCompactions-0] regionserver.CompactSplit$CompactionRunner(659): Status compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-07-21 13:20:25,560 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=20.95 KB at sequenceid=4509 (bloomFilter=true), to=hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/.tmp/cf/9cef2f83fb7a44bfa31bc0f65733cc20 2023-07-21 13:20:25,573 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/.tmp/cf/9cef2f83fb7a44bfa31bc0f65733cc20 as hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/cf/9cef2f83fb7a44bfa31bc0f65733cc20 2023-07-21 13:20:25,589 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/cf/9cef2f83fb7a44bfa31bc0f65733cc20, entries=2, sequenceid=4509, filesize=4.8 K 2023-07-21 13:20:25,590 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~20.95 KB/21456, heapSize ~65.42 KB/66992, currentSize=13.36 KB/13680 for fb4efbfaac030a4093735b39d65fef7f in 215ms, sequenceid=4509, compaction requested=false 2023-07-21 13:20:25,590 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for fb4efbfaac030a4093735b39d65fef7f: 2023-07-21 13:20:25,647 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39771] regionserver.HRegion(9158): Flush requested on fb4efbfaac030a4093735b39d65fef7f 2023-07-21 13:20:25,648 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing fb4efbfaac030a4093735b39d65fef7f 1/1 column families, dataSize=20.53 KB heapSize=64.13 KB 2023-07-21 13:20:25,800 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=20.60 KB at sequenceid=4806 (bloomFilter=true), to=hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/.tmp/cf/c84c1a4eae0349c59de3d4323dfdf7b9 2023-07-21 13:20:25,811 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/.tmp/cf/c84c1a4eae0349c59de3d4323dfdf7b9 as hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/cf/c84c1a4eae0349c59de3d4323dfdf7b9 2023-07-21 13:20:25,835 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added 
hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/cf/c84c1a4eae0349c59de3d4323dfdf7b9, entries=2, sequenceid=4806, filesize=4.8 K 2023-07-21 13:20:25,838 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~20.60 KB/21096, heapSize ~64.33 KB/65872, currentSize=13.22 KB/13536 for fb4efbfaac030a4093735b39d65fef7f in 191ms, sequenceid=4806, compaction requested=true 2023-07-21 13:20:25,838 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for fb4efbfaac030a4093735b39d65fef7f: 2023-07-21 13:20:25,838 DEBUG [MemStoreFlusher.0] regionserver.CompactSplit(385): Small Compaction requested: system; Because: MemStoreFlusher.0; compactionQueue=(longCompactions=0:shortCompactions=1), splitQueue=0 2023-07-21 13:20:25,839 DEBUG [RS:0;jenkins-hbase16:39771-shortCompactions-0] compactions.SortedCompactionPolicy(75): Selecting compaction from 3 store files, 0 compacting, 3 eligible, 16 blocking 2023-07-21 13:20:25,841 DEBUG [RS:0;jenkins-hbase16:39771-shortCompactions-0] compactions.ExploringCompactionPolicy(116): Exploring compaction algorithm has selected 3 files of size 15304 starting at candidate #0 after considering 1 permutations with 1 in ratio 2023-07-21 13:20:25,841 DEBUG [RS:0;jenkins-hbase16:39771-shortCompactions-0] regionserver.HStore(1912): fb4efbfaac030a4093735b39d65fef7f/cf is initiating minor compaction (all files) 2023-07-21 13:20:25,841 INFO [RS:0;jenkins-hbase16:39771-shortCompactions-0] regionserver.HRegion(2259): Starting compaction of fb4efbfaac030a4093735b39d65fef7f/cf in TestCP,,1689945619385.fb4efbfaac030a4093735b39d65fef7f. 2023-07-21 13:20:25,842 INFO [RS:0;jenkins-hbase16:39771-shortCompactions-0] regionserver.HStore(1468): Starting compaction of [hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/cf/7760da9d233741828978c2e5a70167fa, hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/cf/9cef2f83fb7a44bfa31bc0f65733cc20, hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/cf/c84c1a4eae0349c59de3d4323dfdf7b9] into tmpdir=hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/.tmp, totalSize=14.9 K 2023-07-21 13:20:25,843 DEBUG [RS:0;jenkins-hbase16:39771-shortCompactions-0] compactions.Compactor(207): Compacting 7760da9d233741828978c2e5a70167fa, keycount=2, bloomtype=ROW, size=5.4 K, encoding=NONE, compression=NONE, seqNum=4208, earliestPutTs=1730504314969088 2023-07-21 13:20:25,844 DEBUG [RS:0;jenkins-hbase16:39771-shortCompactions-0] compactions.Compactor(207): Compacting 9cef2f83fb7a44bfa31bc0f65733cc20, keycount=2, bloomtype=ROW, size=4.8 K, encoding=NONE, compression=NONE, seqNum=4509, earliestPutTs=1730504320119808 2023-07-21 13:20:25,845 DEBUG [RS:0;jenkins-hbase16:39771-shortCompactions-0] compactions.Compactor(207): Compacting c84c1a4eae0349c59de3d4323dfdf7b9, keycount=2, bloomtype=ROW, size=4.8 K, encoding=NONE, compression=NONE, seqNum=4806, earliestPutTs=1730504320393216 2023-07-21 13:20:25,899 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39771] regionserver.HRegion(9158): Flush requested on fb4efbfaac030a4093735b39d65fef7f 
2023-07-21 13:20:25,902 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing fb4efbfaac030a4093735b39d65fef7f 1/1 column families, dataSize=20.53 KB heapSize=64.13 KB 2023-07-21 13:20:25,917 INFO [RS:0;jenkins-hbase16:39771-shortCompactions-0] throttle.PressureAwareThroughputController(145): fb4efbfaac030a4093735b39d65fef7f#cf#compaction#22 average throughput is unlimited, slept 0 time(s) and total slept time is 0 ms. 0 active operations remaining, total limit is 50.00 MB/second 2023-07-21 13:20:26,069 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=20.74 KB at sequenceid=5104 (bloomFilter=true), to=hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/.tmp/cf/dc51974fff404c2fae5ce073457d1320 2023-07-21 13:20:26,090 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/.tmp/cf/dc51974fff404c2fae5ce073457d1320 as hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/cf/dc51974fff404c2fae5ce073457d1320 2023-07-21 13:20:26,091 DEBUG [RS:0;jenkins-hbase16:39771-shortCompactions-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/.tmp/cf/cddd36e1473c46d683f8844240be0b3c as hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/cf/cddd36e1473c46d683f8844240be0b3c 2023-07-21 13:20:26,104 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/cf/dc51974fff404c2fae5ce073457d1320, entries=2, sequenceid=5104, filesize=4.8 K 2023-07-21 13:20:26,107 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~20.74 KB/21240, heapSize ~64.77 KB/66320, currentSize=17.30 KB/17712 for fb4efbfaac030a4093735b39d65fef7f in 205ms, sequenceid=5104, compaction requested=false 2023-07-21 13:20:26,108 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for fb4efbfaac030a4093735b39d65fef7f: 2023-07-21 13:20:26,109 INFO [RS:0;jenkins-hbase16:39771-shortCompactions-0] regionserver.HStore(1652): Completed compaction of 3 (all) file(s) in fb4efbfaac030a4093735b39d65fef7f/cf of fb4efbfaac030a4093735b39d65fef7f into cddd36e1473c46d683f8844240be0b3c(size=5.5 K), total size for store is 10.3 K. This selection was in queue for 0sec, and took 0sec to execute. 
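
Editor's note: the PressureAwareThroughputController lines above report a 50.00 MB/second compaction throughput ceiling that this tiny workload never approaches ("average throughput is unlimited, slept 0 time(s)"). On a real cluster that ceiling is configuration-driven; the sketch below shows how the bounds might be set, but the key names are assumptions recalled from PressureAwareCompactionThroughputController and should be verified against the HBase version in use.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class CompactionThroughputConf {
  // Key names assumed from PressureAwareCompactionThroughputController; verify before use.
  public static Configuration build() {
    Configuration conf = HBaseConfiguration.create();
    conf.setLong("hbase.hstore.compaction.throughput.higher.bound", 100L * 1024 * 1024);
    conf.setLong("hbase.hstore.compaction.throughput.lower.bound", 50L * 1024 * 1024);
    return conf;
  }
}
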
2023-07-21 13:20:26,109 DEBUG [RS:0;jenkins-hbase16:39771-shortCompactions-0] regionserver.HRegion(2289): Compaction status journal for fb4efbfaac030a4093735b39d65fef7f: 2023-07-21 13:20:26,109 INFO [RS:0;jenkins-hbase16:39771-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(627): Completed compaction region=TestCP,,1689945619385.fb4efbfaac030a4093735b39d65fef7f., storeName=fb4efbfaac030a4093735b39d65fef7f/cf, priority=13, startTime=1689945625838; duration=0sec 2023-07-21 13:20:26,110 DEBUG [RS:0;jenkins-hbase16:39771-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(659): Status compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-07-21 13:20:26,135 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39771] regionserver.HRegion(9158): Flush requested on fb4efbfaac030a4093735b39d65fef7f 2023-07-21 13:20:26,135 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing fb4efbfaac030a4093735b39d65fef7f 1/1 column families, dataSize=20.53 KB heapSize=64.13 KB 2023-07-21 13:20:26,231 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=20.60 KB at sequenceid=5401 (bloomFilter=true), to=hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/.tmp/cf/139ca2e94b51424786a1d03ec9510cac 2023-07-21 13:20:26,250 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/.tmp/cf/139ca2e94b51424786a1d03ec9510cac as hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/cf/139ca2e94b51424786a1d03ec9510cac 2023-07-21 13:20:26,261 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/cf/139ca2e94b51424786a1d03ec9510cac, entries=2, sequenceid=5401, filesize=4.8 K 2023-07-21 13:20:26,267 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~20.60 KB/21096, heapSize ~64.33 KB/65872, currentSize=13.71 KB/14040 for fb4efbfaac030a4093735b39d65fef7f in 131ms, sequenceid=5401, compaction requested=true 2023-07-21 13:20:26,267 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for fb4efbfaac030a4093735b39d65fef7f: 2023-07-21 13:20:26,267 DEBUG [MemStoreFlusher.0] regionserver.CompactSplit(385): Small Compaction requested: system; Because: MemStoreFlusher.0; compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-07-21 13:20:26,267 DEBUG [RS:0;jenkins-hbase16:39771-shortCompactions-0] compactions.SortedCompactionPolicy(75): Selecting compaction from 3 store files, 0 compacting, 3 eligible, 16 blocking 2023-07-21 13:20:26,270 DEBUG [RS:0;jenkins-hbase16:39771-shortCompactions-0] compactions.ExploringCompactionPolicy(116): Exploring compaction algorithm has selected 3 files of size 15406 starting at candidate #0 after considering 1 permutations with 1 in ratio 2023-07-21 13:20:26,270 DEBUG [RS:0;jenkins-hbase16:39771-shortCompactions-0] regionserver.HStore(1912): fb4efbfaac030a4093735b39d65fef7f/cf is initiating minor compaction (all files) 2023-07-21 13:20:26,270 INFO [RS:0;jenkins-hbase16:39771-shortCompactions-0] regionserver.HRegion(2259): Starting compaction of 
fb4efbfaac030a4093735b39d65fef7f/cf in TestCP,,1689945619385.fb4efbfaac030a4093735b39d65fef7f. 2023-07-21 13:20:26,270 INFO [RS:0;jenkins-hbase16:39771-shortCompactions-0] regionserver.HStore(1468): Starting compaction of [hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/cf/cddd36e1473c46d683f8844240be0b3c, hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/cf/dc51974fff404c2fae5ce073457d1320, hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/cf/139ca2e94b51424786a1d03ec9510cac] into tmpdir=hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/.tmp, totalSize=15.0 K 2023-07-21 13:20:26,271 DEBUG [RS:0;jenkins-hbase16:39771-shortCompactions-0] compactions.Compactor(207): Compacting cddd36e1473c46d683f8844240be0b3c, keycount=2, bloomtype=ROW, size=5.5 K, encoding=NONE, compression=NONE, seqNum=4806, earliestPutTs=1730504314969088 2023-07-21 13:20:26,273 DEBUG [RS:0;jenkins-hbase16:39771-shortCompactions-0] compactions.Compactor(207): Compacting dc51974fff404c2fae5ce073457d1320, keycount=2, bloomtype=ROW, size=4.8 K, encoding=NONE, compression=NONE, seqNum=5104, earliestPutTs=1730504320663552 2023-07-21 13:20:26,274 DEBUG [RS:0;jenkins-hbase16:39771-shortCompactions-0] compactions.Compactor(207): Compacting 139ca2e94b51424786a1d03ec9510cac, keycount=2, bloomtype=ROW, size=4.8 K, encoding=NONE, compression=NONE, seqNum=5401, earliestPutTs=1730504320924672 2023-07-21 13:20:26,329 INFO [RS:0;jenkins-hbase16:39771-shortCompactions-0] throttle.PressureAwareThroughputController(145): fb4efbfaac030a4093735b39d65fef7f#cf#compaction#25 average throughput is 0.07 MB/second, slept 0 time(s) and total slept time is 0 ms. 0 active operations remaining, total limit is 50.00 MB/second 2023-07-21 13:20:26,345 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=39771] regionserver.HRegion(9158): Flush requested on fb4efbfaac030a4093735b39d65fef7f 2023-07-21 13:20:26,345 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing fb4efbfaac030a4093735b39d65fef7f 1/1 column families, dataSize=20.53 KB heapSize=64.13 KB 2023-07-21 13:20:26,469 DEBUG [RS:0;jenkins-hbase16:39771-shortCompactions-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/.tmp/cf/eda583307daf41458d4f1454e02faf92 as hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/cf/eda583307daf41458d4f1454e02faf92 2023-07-21 13:20:26,489 INFO [RS:0;jenkins-hbase16:39771-shortCompactions-0] regionserver.HStore(1652): Completed compaction of 3 (all) file(s) in fb4efbfaac030a4093735b39d65fef7f/cf of fb4efbfaac030a4093735b39d65fef7f into eda583307daf41458d4f1454e02faf92(size=5.6 K), total size for store is 5.6 K. This selection was in queue for 0sec, and took 0sec to execute. 
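
Editor's note: shortly after this compaction, the RPC handlers start rejecting increments with RegionTooBusyException, "Over memstore limit=256.0 K" (see the warnings that follow). Writes are arriving faster than flushes can drain the memstore, so HRegion.checkResources refuses the mutation once the region memstore exceeds its blocking limit, which is derived from the configured flush size and block multiplier; the 256 K figure implies this test runs with a flush size far below the 128 MB production default. The stock client normally retries this exception on its own; the sketch below only makes that retry-with-backoff behaviour explicit and is illustrative, not what the test itself does.

import java.io.IOException;

import org.apache.hadoop.hbase.RegionTooBusyException;
import org.apache.hadoop.hbase.client.Increment;
import org.apache.hadoop.hbase.client.Table;

public final class BusyRetry {
  // Retry a single increment with exponential backoff while the region reports it is
  // over its memstore blocking limit.
  static void incrementWithBackoff(Table table, byte[] row, byte[] cf, byte[] cq)
      throws IOException, InterruptedException {
    long backoffMs = 50;
    for (int attempt = 0; attempt < 10; attempt++) {
      try {
        table.increment(new Increment(row).addColumn(cf, cq, 1L));
        return;
      } catch (RegionTooBusyException e) {
        Thread.sleep(backoffMs);                    // let flushes/compactions drain the memstore
        backoffMs = Math.min(backoffMs * 2, 2_000); // cap the backoff at two seconds
      }
    }
    throw new IOException("region still too busy after retries");
  }
}
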
2023-07-21 13:20:26,489 DEBUG [RS:0;jenkins-hbase16:39771-shortCompactions-0] regionserver.HRegion(2289): Compaction status journal for fb4efbfaac030a4093735b39d65fef7f: 2023-07-21 13:20:26,489 INFO [RS:0;jenkins-hbase16:39771-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(627): Completed compaction region=TestCP,,1689945619385.fb4efbfaac030a4093735b39d65fef7f., storeName=fb4efbfaac030a4093735b39d65fef7f/cf, priority=13, startTime=1689945626267; duration=0sec 2023-07-21 13:20:26,490 DEBUG [RS:0;jenkins-hbase16:39771-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(659): Status compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-07-21 13:20:26,941 WARN [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=39771] regionserver.HRegion(4965): Region is too busy due to exceeding memstore size limit. org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=256.0 K, regionName=fb4efbfaac030a4093735b39d65fef7f, server=jenkins-hbase16.apache.org,39771,1689945615330 at org.apache.hadoop.hbase.regionserver.HRegion.checkResources(HRegion.java:4963) at org.apache.hadoop.hbase.regionserver.HRegion.increment(HRegion.java:8417) at org.apache.hadoop.hbase.regionserver.RSRpcServices.increment(RSRpcServices.java:713) at org.apache.hadoop.hbase.regionserver.RSRpcServices.mutate(RSRpcServices.java:2966) at org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:44994) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 13:20:26,941 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=39771] ipc.CallRunner(144): callId: 6520 service: ClientService methodName: Mutate size: 198 connection: 188.40.62.62:57160 deadline: 1689945686939, exception=org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=256.0 K, regionName=fb4efbfaac030a4093735b39d65fef7f, server=jenkins-hbase16.apache.org,39771,1689945615330 2023-07-21 13:20:26,942 WARN [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=39771] regionserver.HRegion(4965): Region is too busy due to exceeding memstore size limit. 
org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=256.0 K, regionName=fb4efbfaac030a4093735b39d65fef7f, server=jenkins-hbase16.apache.org,39771,1689945615330 at org.apache.hadoop.hbase.regionserver.HRegion.checkResources(HRegion.java:4963) at org.apache.hadoop.hbase.regionserver.HRegion.increment(HRegion.java:8417) at org.apache.hadoop.hbase.regionserver.RSRpcServices.increment(RSRpcServices.java:713) at org.apache.hadoop.hbase.regionserver.RSRpcServices.mutate(RSRpcServices.java:2966) at org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:44994) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 13:20:26,942 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=39771] ipc.CallRunner(144): callId: 6521 service: ClientService methodName: Mutate size: 199 connection: 188.40.62.62:57160 deadline: 1689945686939, exception=org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=256.0 K, regionName=fb4efbfaac030a4093735b39d65fef7f, server=jenkins-hbase16.apache.org,39771,1689945615330 2023-07-21 13:20:26,942 WARN [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=39771] regionserver.HRegion(4965): Region is too busy due to exceeding memstore size limit. org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=256.0 K, regionName=fb4efbfaac030a4093735b39d65fef7f, server=jenkins-hbase16.apache.org,39771,1689945615330 at org.apache.hadoop.hbase.regionserver.HRegion.checkResources(HRegion.java:4963) at org.apache.hadoop.hbase.regionserver.HRegion.increment(HRegion.java:8417) at org.apache.hadoop.hbase.regionserver.RSRpcServices.increment(RSRpcServices.java:713) at org.apache.hadoop.hbase.regionserver.RSRpcServices.mutate(RSRpcServices.java:2966) at org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:44994) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 13:20:26,942 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=39771] ipc.CallRunner(144): callId: 6522 service: ClientService methodName: Mutate size: 198 connection: 188.40.62.62:57160 deadline: 1689945686939, exception=org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=256.0 K, regionName=fb4efbfaac030a4093735b39d65fef7f, server=jenkins-hbase16.apache.org,39771,1689945615330 2023-07-21 13:20:26,942 WARN [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=39771] regionserver.HRegion(4965): Region is too busy due to exceeding memstore size limit. 
org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=256.0 K, regionName=fb4efbfaac030a4093735b39d65fef7f, server=jenkins-hbase16.apache.org,39771,1689945615330 at org.apache.hadoop.hbase.regionserver.HRegion.checkResources(HRegion.java:4963) at org.apache.hadoop.hbase.regionserver.HRegion.increment(HRegion.java:8417) at org.apache.hadoop.hbase.regionserver.RSRpcServices.increment(RSRpcServices.java:713) at org.apache.hadoop.hbase.regionserver.RSRpcServices.mutate(RSRpcServices.java:2966) at org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:44994) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 13:20:26,943 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=39771] ipc.CallRunner(144): callId: 6523 service: ClientService methodName: Mutate size: 198 connection: 188.40.62.62:57160 deadline: 1689945686942, exception=org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=256.0 K, regionName=fb4efbfaac030a4093735b39d65fef7f, server=jenkins-hbase16.apache.org,39771,1689945615330 2023-07-21 13:20:26,943 WARN [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=39771] regionserver.HRegion(4965): Region is too busy due to exceeding memstore size limit. org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=256.0 K, regionName=fb4efbfaac030a4093735b39d65fef7f, server=jenkins-hbase16.apache.org,39771,1689945615330 at org.apache.hadoop.hbase.regionserver.HRegion.checkResources(HRegion.java:4963) at org.apache.hadoop.hbase.regionserver.HRegion.increment(HRegion.java:8417) at org.apache.hadoop.hbase.regionserver.RSRpcServices.increment(RSRpcServices.java:713) at org.apache.hadoop.hbase.regionserver.RSRpcServices.mutate(RSRpcServices.java:2966) at org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:44994) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 13:20:26,943 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=39771] ipc.CallRunner(144): callId: 6524 service: ClientService methodName: Mutate size: 199 connection: 188.40.62.62:57160 deadline: 1689945686942, exception=org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=256.0 K, regionName=fb4efbfaac030a4093735b39d65fef7f, server=jenkins-hbase16.apache.org,39771,1689945615330 2023-07-21 13:20:26,943 WARN [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=39771] regionserver.HRegion(4965): Region is too busy due to exceeding memstore size limit. 
org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=256.0 K, regionName=fb4efbfaac030a4093735b39d65fef7f, server=jenkins-hbase16.apache.org,39771,1689945615330 at org.apache.hadoop.hbase.regionserver.HRegion.checkResources(HRegion.java:4963) at org.apache.hadoop.hbase.regionserver.HRegion.increment(HRegion.java:8417) at org.apache.hadoop.hbase.regionserver.RSRpcServices.increment(RSRpcServices.java:713) at org.apache.hadoop.hbase.regionserver.RSRpcServices.mutate(RSRpcServices.java:2966) at org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:44994) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 13:20:26,943 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=39771] ipc.CallRunner(144): callId: 6525 service: ClientService methodName: Mutate size: 198 connection: 188.40.62.62:57160 deadline: 1689945686942, exception=org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=256.0 K, regionName=fb4efbfaac030a4093735b39d65fef7f, server=jenkins-hbase16.apache.org,39771,1689945615330 2023-07-21 13:20:26,944 WARN [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=39771] regionserver.HRegion(4965): Region is too busy due to exceeding memstore size limit. org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=256.0 K, regionName=fb4efbfaac030a4093735b39d65fef7f, server=jenkins-hbase16.apache.org,39771,1689945615330 at org.apache.hadoop.hbase.regionserver.HRegion.checkResources(HRegion.java:4963) at org.apache.hadoop.hbase.regionserver.HRegion.increment(HRegion.java:8417) at org.apache.hadoop.hbase.regionserver.RSRpcServices.increment(RSRpcServices.java:713) at org.apache.hadoop.hbase.regionserver.RSRpcServices.mutate(RSRpcServices.java:2966) at org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:44994) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 13:20:26,944 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=39771] ipc.CallRunner(144): callId: 6526 service: ClientService methodName: Mutate size: 199 connection: 188.40.62.62:57160 deadline: 1689945686942, exception=org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=256.0 K, regionName=fb4efbfaac030a4093735b39d65fef7f, server=jenkins-hbase16.apache.org,39771,1689945615330 2023-07-21 13:20:26,945 WARN [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39771] regionserver.HRegion(4965): Region is too busy due to exceeding memstore size limit. 
org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=256.0 K, regionName=fb4efbfaac030a4093735b39d65fef7f, server=jenkins-hbase16.apache.org,39771,1689945615330 at org.apache.hadoop.hbase.regionserver.HRegion.checkResources(HRegion.java:4963) at org.apache.hadoop.hbase.regionserver.HRegion.increment(HRegion.java:8417) at org.apache.hadoop.hbase.regionserver.RSRpcServices.increment(RSRpcServices.java:713) at org.apache.hadoop.hbase.regionserver.RSRpcServices.mutate(RSRpcServices.java:2966) at org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:44994) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 13:20:26,946 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39771] ipc.CallRunner(144): callId: 6527 service: ClientService methodName: Mutate size: 199 connection: 188.40.62.62:57160 deadline: 1689945686945, exception=org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=256.0 K, regionName=fb4efbfaac030a4093735b39d65fef7f, server=jenkins-hbase16.apache.org,39771,1689945615330 2023-07-21 13:20:26,951 WARN [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39771] regionserver.HRegion(4965): Region is too busy due to exceeding memstore size limit. org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=256.0 K, regionName=fb4efbfaac030a4093735b39d65fef7f, server=jenkins-hbase16.apache.org,39771,1689945615330 at org.apache.hadoop.hbase.regionserver.HRegion.checkResources(HRegion.java:4963) at org.apache.hadoop.hbase.regionserver.HRegion.increment(HRegion.java:8417) at org.apache.hadoop.hbase.regionserver.RSRpcServices.increment(RSRpcServices.java:713) at org.apache.hadoop.hbase.regionserver.RSRpcServices.mutate(RSRpcServices.java:2966) at org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:44994) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 13:20:26,951 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39771] regionserver.HRegion(4965): Region is too busy due to exceeding memstore size limit. 
org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=256.0 K, regionName=fb4efbfaac030a4093735b39d65fef7f, server=jenkins-hbase16.apache.org,39771,1689945615330 at org.apache.hadoop.hbase.regionserver.HRegion.checkResources(HRegion.java:4963) at org.apache.hadoop.hbase.regionserver.HRegion.increment(HRegion.java:8417) at org.apache.hadoop.hbase.regionserver.RSRpcServices.increment(RSRpcServices.java:713) at org.apache.hadoop.hbase.regionserver.RSRpcServices.mutate(RSRpcServices.java:2966) at org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:44994) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 13:20:26,952 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39771] ipc.CallRunner(144): callId: 6529 service: ClientService methodName: Mutate size: 198 connection: 188.40.62.62:57160 deadline: 1689945686951, exception=org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=256.0 K, regionName=fb4efbfaac030a4093735b39d65fef7f, server=jenkins-hbase16.apache.org,39771,1689945615330 2023-07-21 13:20:26,951 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39771] ipc.CallRunner(144): callId: 6528 service: ClientService methodName: Mutate size: 198 connection: 188.40.62.62:57160 deadline: 1689945686951, exception=org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=256.0 K, regionName=fb4efbfaac030a4093735b39d65fef7f, server=jenkins-hbase16.apache.org,39771,1689945615330 2023-07-21 13:20:26,980 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=20.53 KB at sequenceid=5696 (bloomFilter=true), to=hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/.tmp/cf/b6251e10f595433691d0f05b40d6e069 2023-07-21 13:20:26,995 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/.tmp/cf/b6251e10f595433691d0f05b40d6e069 as hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/cf/b6251e10f595433691d0f05b40d6e069 2023-07-21 13:20:27,004 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/cf/b6251e10f595433691d0f05b40d6e069, entries=2, sequenceid=5696, filesize=4.8 K 2023-07-21 13:20:27,005 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~20.53 KB/21024, heapSize ~64.11 KB/65648, currentSize=61.66 KB/63144 for fb4efbfaac030a4093735b39d65fef7f in 660ms, sequenceid=5696, compaction requested=false 2023-07-21 13:20:27,006 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for fb4efbfaac030a4093735b39d65fef7f: 2023-07-21 13:20:27,051 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39771] regionserver.HRegion(9158): Flush requested on fb4efbfaac030a4093735b39d65fef7f 2023-07-21 13:20:27,051 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 
fb4efbfaac030a4093735b39d65fef7f 1/1 column families, dataSize=61.73 KB heapSize=192.31 KB 2023-07-21 13:20:27,215 WARN [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=39771] regionserver.HRegion(4965): Region is too busy due to exceeding memstore size limit. org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=256.0 K, regionName=fb4efbfaac030a4093735b39d65fef7f, server=jenkins-hbase16.apache.org,39771,1689945615330 at org.apache.hadoop.hbase.regionserver.HRegion.checkResources(HRegion.java:4963) at org.apache.hadoop.hbase.regionserver.HRegion.increment(HRegion.java:8417) at org.apache.hadoop.hbase.regionserver.RSRpcServices.increment(RSRpcServices.java:713) at org.apache.hadoop.hbase.regionserver.RSRpcServices.mutate(RSRpcServices.java:2966) at org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:44994) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 13:20:27,215 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39771] regionserver.HRegion(4965): Region is too busy due to exceeding memstore size limit. org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=256.0 K, regionName=fb4efbfaac030a4093735b39d65fef7f, server=jenkins-hbase16.apache.org,39771,1689945615330 at org.apache.hadoop.hbase.regionserver.HRegion.checkResources(HRegion.java:4963) at org.apache.hadoop.hbase.regionserver.HRegion.increment(HRegion.java:8417) at org.apache.hadoop.hbase.regionserver.RSRpcServices.increment(RSRpcServices.java:713) at org.apache.hadoop.hbase.regionserver.RSRpcServices.mutate(RSRpcServices.java:2966) at org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:44994) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 13:20:27,216 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39771] ipc.CallRunner(144): callId: 6835 service: ClientService methodName: Mutate size: 199 connection: 188.40.62.62:57160 deadline: 1689945687214, exception=org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=256.0 K, regionName=fb4efbfaac030a4093735b39d65fef7f, server=jenkins-hbase16.apache.org,39771,1689945615330 2023-07-21 13:20:27,216 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39771] regionserver.HRegion(4965): Region is too busy due to exceeding memstore size limit. 
org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=256.0 K, regionName=fb4efbfaac030a4093735b39d65fef7f, server=jenkins-hbase16.apache.org,39771,1689945615330 at org.apache.hadoop.hbase.regionserver.HRegion.checkResources(HRegion.java:4963) at org.apache.hadoop.hbase.regionserver.HRegion.increment(HRegion.java:8417) at org.apache.hadoop.hbase.regionserver.RSRpcServices.increment(RSRpcServices.java:713) at org.apache.hadoop.hbase.regionserver.RSRpcServices.mutate(RSRpcServices.java:2966) at org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:44994) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 13:20:27,216 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39771] ipc.CallRunner(144): callId: 6837 service: ClientService methodName: Mutate size: 198 connection: 188.40.62.62:57160 deadline: 1689945687214, exception=org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=256.0 K, regionName=fb4efbfaac030a4093735b39d65fef7f, server=jenkins-hbase16.apache.org,39771,1689945615330 2023-07-21 13:20:27,216 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39771] regionserver.HRegion(4965): Region is too busy due to exceeding memstore size limit. org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=256.0 K, regionName=fb4efbfaac030a4093735b39d65fef7f, server=jenkins-hbase16.apache.org,39771,1689945615330 at org.apache.hadoop.hbase.regionserver.HRegion.checkResources(HRegion.java:4963) at org.apache.hadoop.hbase.regionserver.HRegion.increment(HRegion.java:8417) at org.apache.hadoop.hbase.regionserver.RSRpcServices.increment(RSRpcServices.java:713) at org.apache.hadoop.hbase.regionserver.RSRpcServices.mutate(RSRpcServices.java:2966) at org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:44994) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 13:20:27,217 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39771] ipc.CallRunner(144): callId: 6838 service: ClientService methodName: Mutate size: 198 connection: 188.40.62.62:57160 deadline: 1689945687214, exception=org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=256.0 K, regionName=fb4efbfaac030a4093735b39d65fef7f, server=jenkins-hbase16.apache.org,39771,1689945615330 2023-07-21 13:20:27,217 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39771] regionserver.HRegion(4965): Region is too busy due to exceeding memstore size limit. 
org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=256.0 K, regionName=fb4efbfaac030a4093735b39d65fef7f, server=jenkins-hbase16.apache.org,39771,1689945615330 at org.apache.hadoop.hbase.regionserver.HRegion.checkResources(HRegion.java:4963) at org.apache.hadoop.hbase.regionserver.HRegion.increment(HRegion.java:8417) at org.apache.hadoop.hbase.regionserver.RSRpcServices.increment(RSRpcServices.java:713) at org.apache.hadoop.hbase.regionserver.RSRpcServices.mutate(RSRpcServices.java:2966) at org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:44994) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 13:20:27,217 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39771] ipc.CallRunner(144): callId: 6839 service: ClientService methodName: Mutate size: 198 connection: 188.40.62.62:57160 deadline: 1689945687214, exception=org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=256.0 K, regionName=fb4efbfaac030a4093735b39d65fef7f, server=jenkins-hbase16.apache.org,39771,1689945615330 2023-07-21 13:20:27,215 WARN [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39771] regionserver.HRegion(4965): Region is too busy due to exceeding memstore size limit. org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=256.0 K, regionName=fb4efbfaac030a4093735b39d65fef7f, server=jenkins-hbase16.apache.org,39771,1689945615330 at org.apache.hadoop.hbase.regionserver.HRegion.checkResources(HRegion.java:4963) at org.apache.hadoop.hbase.regionserver.HRegion.increment(HRegion.java:8417) at org.apache.hadoop.hbase.regionserver.RSRpcServices.increment(RSRpcServices.java:713) at org.apache.hadoop.hbase.regionserver.RSRpcServices.mutate(RSRpcServices.java:2966) at org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:44994) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 13:20:27,216 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=39771] ipc.CallRunner(144): callId: 6836 service: ClientService methodName: Mutate size: 198 connection: 188.40.62.62:57160 deadline: 1689945687214, exception=org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=256.0 K, regionName=fb4efbfaac030a4093735b39d65fef7f, server=jenkins-hbase16.apache.org,39771,1689945615330 2023-07-21 13:20:27,219 WARN [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=39771] regionserver.HRegion(4965): Region is too busy due to exceeding memstore size limit. 
org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=256.0 K, regionName=fb4efbfaac030a4093735b39d65fef7f, server=jenkins-hbase16.apache.org,39771,1689945615330 at org.apache.hadoop.hbase.regionserver.HRegion.checkResources(HRegion.java:4963) at org.apache.hadoop.hbase.regionserver.HRegion.increment(HRegion.java:8417) at org.apache.hadoop.hbase.regionserver.RSRpcServices.increment(RSRpcServices.java:713) at org.apache.hadoop.hbase.regionserver.RSRpcServices.mutate(RSRpcServices.java:2966) at org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:44994) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 13:20:27,218 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39771] ipc.CallRunner(144): callId: 6834 service: ClientService methodName: Mutate size: 199 connection: 188.40.62.62:57160 deadline: 1689945687214, exception=org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=256.0 K, regionName=fb4efbfaac030a4093735b39d65fef7f, server=jenkins-hbase16.apache.org,39771,1689945615330 2023-07-21 13:20:27,219 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39771] regionserver.HRegion(4965): Region is too busy due to exceeding memstore size limit. org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=256.0 K, regionName=fb4efbfaac030a4093735b39d65fef7f, server=jenkins-hbase16.apache.org,39771,1689945615330 at org.apache.hadoop.hbase.regionserver.HRegion.checkResources(HRegion.java:4963) at org.apache.hadoop.hbase.regionserver.HRegion.increment(HRegion.java:8417) at org.apache.hadoop.hbase.regionserver.RSRpcServices.increment(RSRpcServices.java:713) at org.apache.hadoop.hbase.regionserver.RSRpcServices.mutate(RSRpcServices.java:2966) at org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:44994) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 13:20:27,219 WARN [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39771] regionserver.HRegion(4965): Region is too busy due to exceeding memstore size limit. 
org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=256.0 K, regionName=fb4efbfaac030a4093735b39d65fef7f, server=jenkins-hbase16.apache.org,39771,1689945615330 at org.apache.hadoop.hbase.regionserver.HRegion.checkResources(HRegion.java:4963) at org.apache.hadoop.hbase.regionserver.HRegion.increment(HRegion.java:8417) at org.apache.hadoop.hbase.regionserver.RSRpcServices.increment(RSRpcServices.java:713) at org.apache.hadoop.hbase.regionserver.RSRpcServices.mutate(RSRpcServices.java:2966) at org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:44994) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 13:20:27,219 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39771] ipc.CallRunner(144): callId: 6842 service: ClientService methodName: Mutate size: 198 connection: 188.40.62.62:57160 deadline: 1689945687218, exception=org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=256.0 K, regionName=fb4efbfaac030a4093735b39d65fef7f, server=jenkins-hbase16.apache.org,39771,1689945615330 2023-07-21 13:20:27,219 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=39771] ipc.CallRunner(144): callId: 6841 service: ClientService methodName: Mutate size: 198 connection: 188.40.62.62:57160 deadline: 1689945687218, exception=org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=256.0 K, regionName=fb4efbfaac030a4093735b39d65fef7f, server=jenkins-hbase16.apache.org,39771,1689945615330 2023-07-21 13:20:27,219 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39771] ipc.CallRunner(144): callId: 6840 service: ClientService methodName: Mutate size: 199 connection: 188.40.62.62:57160 deadline: 1689945687218, exception=org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=256.0 K, regionName=fb4efbfaac030a4093735b39d65fef7f, server=jenkins-hbase16.apache.org,39771,1689945615330 2023-07-21 13:20:27,224 WARN [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=39771] regionserver.HRegion(4965): Region is too busy due to exceeding memstore size limit. 
org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=256.0 K, regionName=fb4efbfaac030a4093735b39d65fef7f, server=jenkins-hbase16.apache.org,39771,1689945615330 at org.apache.hadoop.hbase.regionserver.HRegion.checkResources(HRegion.java:4963) at org.apache.hadoop.hbase.regionserver.HRegion.increment(HRegion.java:8417) at org.apache.hadoop.hbase.regionserver.RSRpcServices.increment(RSRpcServices.java:713) at org.apache.hadoop.hbase.regionserver.RSRpcServices.mutate(RSRpcServices.java:2966) at org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:44994) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 13:20:27,227 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=39771] ipc.CallRunner(144): callId: 6843 service: ClientService methodName: Mutate size: 198 connection: 188.40.62.62:57160 deadline: 1689945687224, exception=org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=256.0 K, regionName=fb4efbfaac030a4093735b39d65fef7f, server=jenkins-hbase16.apache.org,39771,1689945615330 2023-07-21 13:20:27,323 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39771] regionserver.HRegion(4965): Region is too busy due to exceeding memstore size limit. org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=256.0 K, regionName=fb4efbfaac030a4093735b39d65fef7f, server=jenkins-hbase16.apache.org,39771,1689945615330 at org.apache.hadoop.hbase.regionserver.HRegion.checkResources(HRegion.java:4963) at org.apache.hadoop.hbase.regionserver.HRegion.increment(HRegion.java:8417) at org.apache.hadoop.hbase.regionserver.RSRpcServices.increment(RSRpcServices.java:713) at org.apache.hadoop.hbase.regionserver.RSRpcServices.mutate(RSRpcServices.java:2966) at org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:44994) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 13:20:27,324 WARN [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39771] regionserver.HRegion(4965): Region is too busy due to exceeding memstore size limit. 
org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=256.0 K, regionName=fb4efbfaac030a4093735b39d65fef7f, server=jenkins-hbase16.apache.org,39771,1689945615330 at org.apache.hadoop.hbase.regionserver.HRegion.checkResources(HRegion.java:4963) at org.apache.hadoop.hbase.regionserver.HRegion.increment(HRegion.java:8417) at org.apache.hadoop.hbase.regionserver.RSRpcServices.increment(RSRpcServices.java:713) at org.apache.hadoop.hbase.regionserver.RSRpcServices.mutate(RSRpcServices.java:2966) at org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:44994) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 13:20:27,324 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39771] ipc.CallRunner(144): callId: 6852 service: ClientService methodName: Mutate size: 198 connection: 188.40.62.62:57160 deadline: 1689945687323, exception=org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=256.0 K, regionName=fb4efbfaac030a4093735b39d65fef7f, server=jenkins-hbase16.apache.org,39771,1689945615330 2023-07-21 13:20:27,324 WARN [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39771] regionserver.HRegion(4965): Region is too busy due to exceeding memstore size limit. org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=256.0 K, regionName=fb4efbfaac030a4093735b39d65fef7f, server=jenkins-hbase16.apache.org,39771,1689945615330 at org.apache.hadoop.hbase.regionserver.HRegion.checkResources(HRegion.java:4963) at org.apache.hadoop.hbase.regionserver.HRegion.increment(HRegion.java:8417) at org.apache.hadoop.hbase.regionserver.RSRpcServices.increment(RSRpcServices.java:713) at org.apache.hadoop.hbase.regionserver.RSRpcServices.mutate(RSRpcServices.java:2966) at org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:44994) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 13:20:27,324 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39771] ipc.CallRunner(144): callId: 6853 service: ClientService methodName: Mutate size: 199 connection: 188.40.62.62:57160 deadline: 1689945687323, exception=org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=256.0 K, regionName=fb4efbfaac030a4093735b39d65fef7f, server=jenkins-hbase16.apache.org,39771,1689945615330 2023-07-21 13:20:27,324 WARN [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39771] regionserver.HRegion(4965): Region is too busy due to exceeding memstore size limit. 
org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=256.0 K, regionName=fb4efbfaac030a4093735b39d65fef7f, server=jenkins-hbase16.apache.org,39771,1689945615330 at org.apache.hadoop.hbase.regionserver.HRegion.checkResources(HRegion.java:4963) at org.apache.hadoop.hbase.regionserver.HRegion.increment(HRegion.java:8417) at org.apache.hadoop.hbase.regionserver.RSRpcServices.increment(RSRpcServices.java:713) at org.apache.hadoop.hbase.regionserver.RSRpcServices.mutate(RSRpcServices.java:2966) at org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:44994) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 13:20:27,325 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39771] ipc.CallRunner(144): callId: 6854 service: ClientService methodName: Mutate size: 199 connection: 188.40.62.62:57160 deadline: 1689945687324, exception=org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=256.0 K, regionName=fb4efbfaac030a4093735b39d65fef7f, server=jenkins-hbase16.apache.org,39771,1689945615330 2023-07-21 13:20:27,325 WARN [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39771] regionserver.HRegion(4965): Region is too busy due to exceeding memstore size limit. org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=256.0 K, regionName=fb4efbfaac030a4093735b39d65fef7f, server=jenkins-hbase16.apache.org,39771,1689945615330 at org.apache.hadoop.hbase.regionserver.HRegion.checkResources(HRegion.java:4963) at org.apache.hadoop.hbase.regionserver.HRegion.increment(HRegion.java:8417) at org.apache.hadoop.hbase.regionserver.RSRpcServices.increment(RSRpcServices.java:713) at org.apache.hadoop.hbase.regionserver.RSRpcServices.mutate(RSRpcServices.java:2966) at org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:44994) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 13:20:27,325 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39771] ipc.CallRunner(144): callId: 6855 service: ClientService methodName: Mutate size: 198 connection: 188.40.62.62:57160 deadline: 1689945687324, exception=org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=256.0 K, regionName=fb4efbfaac030a4093735b39d65fef7f, server=jenkins-hbase16.apache.org,39771,1689945615330 2023-07-21 13:20:27,323 WARN [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=39771] regionserver.HRegion(4965): Region is too busy due to exceeding memstore size limit. 
org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=256.0 K, regionName=fb4efbfaac030a4093735b39d65fef7f, server=jenkins-hbase16.apache.org,39771,1689945615330 at org.apache.hadoop.hbase.regionserver.HRegion.checkResources(HRegion.java:4963) at org.apache.hadoop.hbase.regionserver.HRegion.increment(HRegion.java:8417) at org.apache.hadoop.hbase.regionserver.RSRpcServices.increment(RSRpcServices.java:713) at org.apache.hadoop.hbase.regionserver.RSRpcServices.mutate(RSRpcServices.java:2966) at org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:44994) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 13:20:27,324 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39771] ipc.CallRunner(144): callId: 6850 service: ClientService methodName: Mutate size: 198 connection: 188.40.62.62:57160 deadline: 1689945687323, exception=org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=256.0 K, regionName=fb4efbfaac030a4093735b39d65fef7f, server=jenkins-hbase16.apache.org,39771,1689945615330 2023-07-21 13:20:27,325 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=39771] ipc.CallRunner(144): callId: 6851 service: ClientService methodName: Mutate size: 198 connection: 188.40.62.62:57160 deadline: 1689945687323, exception=org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=256.0 K, regionName=fb4efbfaac030a4093735b39d65fef7f, server=jenkins-hbase16.apache.org,39771,1689945615330 2023-07-21 13:20:27,326 WARN [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=39771] regionserver.HRegion(4965): Region is too busy due to exceeding memstore size limit. org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=256.0 K, regionName=fb4efbfaac030a4093735b39d65fef7f, server=jenkins-hbase16.apache.org,39771,1689945615330 at org.apache.hadoop.hbase.regionserver.HRegion.checkResources(HRegion.java:4963) at org.apache.hadoop.hbase.regionserver.HRegion.increment(HRegion.java:8417) at org.apache.hadoop.hbase.regionserver.RSRpcServices.increment(RSRpcServices.java:713) at org.apache.hadoop.hbase.regionserver.RSRpcServices.mutate(RSRpcServices.java:2966) at org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:44994) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 13:20:27,326 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39771] regionserver.HRegion(4965): Region is too busy due to exceeding memstore size limit. 
org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=256.0 K, regionName=fb4efbfaac030a4093735b39d65fef7f, server=jenkins-hbase16.apache.org,39771,1689945615330 at org.apache.hadoop.hbase.regionserver.HRegion.checkResources(HRegion.java:4963) at org.apache.hadoop.hbase.regionserver.HRegion.increment(HRegion.java:8417) at org.apache.hadoop.hbase.regionserver.RSRpcServices.increment(RSRpcServices.java:713) at org.apache.hadoop.hbase.regionserver.RSRpcServices.mutate(RSRpcServices.java:2966) at org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:44994) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 13:20:27,327 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39771] ipc.CallRunner(144): callId: 6859 service: ClientService methodName: Mutate size: 198 connection: 188.40.62.62:57160 deadline: 1689945687326, exception=org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=256.0 K, regionName=fb4efbfaac030a4093735b39d65fef7f, server=jenkins-hbase16.apache.org,39771,1689945615330 2023-07-21 13:20:27,326 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=39771] ipc.CallRunner(144): callId: 6858 service: ClientService methodName: Mutate size: 198 connection: 188.40.62.62:57160 deadline: 1689945687326, exception=org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=256.0 K, regionName=fb4efbfaac030a4093735b39d65fef7f, server=jenkins-hbase16.apache.org,39771,1689945615330 2023-07-21 13:20:27,333 WARN [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=39771] regionserver.HRegion(4965): Region is too busy due to exceeding memstore size limit. org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=256.0 K, regionName=fb4efbfaac030a4093735b39d65fef7f, server=jenkins-hbase16.apache.org,39771,1689945615330 at org.apache.hadoop.hbase.regionserver.HRegion.checkResources(HRegion.java:4963) at org.apache.hadoop.hbase.regionserver.HRegion.increment(HRegion.java:8417) at org.apache.hadoop.hbase.regionserver.RSRpcServices.increment(RSRpcServices.java:713) at org.apache.hadoop.hbase.regionserver.RSRpcServices.mutate(RSRpcServices.java:2966) at org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:44994) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 13:20:27,333 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=39771] ipc.CallRunner(144): callId: 6863 service: ClientService methodName: Mutate size: 198 connection: 188.40.62.62:57160 deadline: 1689945687333, exception=org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=256.0 K, regionName=fb4efbfaac030a4093735b39d65fef7f, server=jenkins-hbase16.apache.org,39771,1689945615330 2023-07-21 13:20:27,334 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39771] regionserver.HRegion(4965): Region is too busy due to exceeding memstore size limit. 
org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=256.0 K, regionName=fb4efbfaac030a4093735b39d65fef7f, server=jenkins-hbase16.apache.org,39771,1689945615330 at org.apache.hadoop.hbase.regionserver.HRegion.checkResources(HRegion.java:4963) at org.apache.hadoop.hbase.regionserver.HRegion.increment(HRegion.java:8417) at org.apache.hadoop.hbase.regionserver.RSRpcServices.increment(RSRpcServices.java:713) at org.apache.hadoop.hbase.regionserver.RSRpcServices.mutate(RSRpcServices.java:2966) at org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:44994) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 13:20:27,334 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39771] ipc.CallRunner(144): callId: 6862 service: ClientService methodName: Mutate size: 199 connection: 188.40.62.62:57160 deadline: 1689945687333, exception=org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=256.0 K, regionName=fb4efbfaac030a4093735b39d65fef7f, server=jenkins-hbase16.apache.org,39771,1689945615330 2023-07-21 13:20:27,529 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39771] regionserver.HRegion(4965): Region is too busy due to exceeding memstore size limit. org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=256.0 K, regionName=fb4efbfaac030a4093735b39d65fef7f, server=jenkins-hbase16.apache.org,39771,1689945615330 at org.apache.hadoop.hbase.regionserver.HRegion.checkResources(HRegion.java:4963) at org.apache.hadoop.hbase.regionserver.HRegion.increment(HRegion.java:8417) at org.apache.hadoop.hbase.regionserver.RSRpcServices.increment(RSRpcServices.java:713) at org.apache.hadoop.hbase.regionserver.RSRpcServices.mutate(RSRpcServices.java:2966) at org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:44994) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 13:20:27,529 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39771] ipc.CallRunner(144): callId: 6869 service: ClientService methodName: Mutate size: 198 connection: 188.40.62.62:57160 deadline: 1689945687529, exception=org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=256.0 K, regionName=fb4efbfaac030a4093735b39d65fef7f, server=jenkins-hbase16.apache.org,39771,1689945615330 2023-07-21 13:20:27,530 WARN [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=39771] regionserver.HRegion(4965): Region is too busy due to exceeding memstore size limit. 
org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=256.0 K, regionName=fb4efbfaac030a4093735b39d65fef7f, server=jenkins-hbase16.apache.org,39771,1689945615330 at org.apache.hadoop.hbase.regionserver.HRegion.checkResources(HRegion.java:4963) at org.apache.hadoop.hbase.regionserver.HRegion.increment(HRegion.java:8417) at org.apache.hadoop.hbase.regionserver.RSRpcServices.increment(RSRpcServices.java:713) at org.apache.hadoop.hbase.regionserver.RSRpcServices.mutate(RSRpcServices.java:2966) at org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:44994) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 13:20:27,530 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=39771] ipc.CallRunner(144): callId: 6870 service: ClientService methodName: Mutate size: 199 connection: 188.40.62.62:57160 deadline: 1689945687529, exception=org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=256.0 K, regionName=fb4efbfaac030a4093735b39d65fef7f, server=jenkins-hbase16.apache.org,39771,1689945615330 2023-07-21 13:20:27,532 WARN [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39771] regionserver.HRegion(4965): Region is too busy due to exceeding memstore size limit. org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=256.0 K, regionName=fb4efbfaac030a4093735b39d65fef7f, server=jenkins-hbase16.apache.org,39771,1689945615330 at org.apache.hadoop.hbase.regionserver.HRegion.checkResources(HRegion.java:4963) at org.apache.hadoop.hbase.regionserver.HRegion.increment(HRegion.java:8417) at org.apache.hadoop.hbase.regionserver.RSRpcServices.increment(RSRpcServices.java:713) at org.apache.hadoop.hbase.regionserver.RSRpcServices.mutate(RSRpcServices.java:2966) at org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:44994) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 13:20:27,532 WARN [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=39771] regionserver.HRegion(4965): Region is too busy due to exceeding memstore size limit. 
org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=256.0 K, regionName=fb4efbfaac030a4093735b39d65fef7f, server=jenkins-hbase16.apache.org,39771,1689945615330 at org.apache.hadoop.hbase.regionserver.HRegion.checkResources(HRegion.java:4963) at org.apache.hadoop.hbase.regionserver.HRegion.increment(HRegion.java:8417) at org.apache.hadoop.hbase.regionserver.RSRpcServices.increment(RSRpcServices.java:713) at org.apache.hadoop.hbase.regionserver.RSRpcServices.mutate(RSRpcServices.java:2966) at org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:44994) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 13:20:27,532 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39771] ipc.CallRunner(144): callId: 6874 service: ClientService methodName: Mutate size: 198 connection: 188.40.62.62:57160 deadline: 1689945687531, exception=org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=256.0 K, regionName=fb4efbfaac030a4093735b39d65fef7f, server=jenkins-hbase16.apache.org,39771,1689945615330 2023-07-21 13:20:27,532 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39771] regionserver.HRegion(4965): Region is too busy due to exceeding memstore size limit. org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=256.0 K, regionName=fb4efbfaac030a4093735b39d65fef7f, server=jenkins-hbase16.apache.org,39771,1689945615330 at org.apache.hadoop.hbase.regionserver.HRegion.checkResources(HRegion.java:4963) at org.apache.hadoop.hbase.regionserver.HRegion.increment(HRegion.java:8417) at org.apache.hadoop.hbase.regionserver.RSRpcServices.increment(RSRpcServices.java:713) at org.apache.hadoop.hbase.regionserver.RSRpcServices.mutate(RSRpcServices.java:2966) at org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:44994) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 13:20:27,532 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39771] ipc.CallRunner(144): callId: 6872 service: ClientService methodName: Mutate size: 198 connection: 188.40.62.62:57160 deadline: 1689945687531, exception=org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=256.0 K, regionName=fb4efbfaac030a4093735b39d65fef7f, server=jenkins-hbase16.apache.org,39771,1689945615330 2023-07-21 13:20:27,532 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=39771] ipc.CallRunner(144): callId: 6871 service: ClientService methodName: Mutate size: 198 connection: 188.40.62.62:57160 deadline: 1689945687531, exception=org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=256.0 K, regionName=fb4efbfaac030a4093735b39d65fef7f, server=jenkins-hbase16.apache.org,39771,1689945615330 2023-07-21 13:20:27,534 WARN [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=39771] regionserver.HRegion(4965): Region is too busy due to exceeding memstore size limit. 
org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=256.0 K, regionName=fb4efbfaac030a4093735b39d65fef7f, server=jenkins-hbase16.apache.org,39771,1689945615330 at org.apache.hadoop.hbase.regionserver.HRegion.checkResources(HRegion.java:4963) at org.apache.hadoop.hbase.regionserver.HRegion.increment(HRegion.java:8417) at org.apache.hadoop.hbase.regionserver.RSRpcServices.increment(RSRpcServices.java:713) at org.apache.hadoop.hbase.regionserver.RSRpcServices.mutate(RSRpcServices.java:2966) at org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:44994) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 13:20:27,535 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=39771] ipc.CallRunner(144): callId: 6877 service: ClientService methodName: Mutate size: 198 connection: 188.40.62.62:57160 deadline: 1689945687534, exception=org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=256.0 K, regionName=fb4efbfaac030a4093735b39d65fef7f, server=jenkins-hbase16.apache.org,39771,1689945615330 2023-07-21 13:20:27,535 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39771] regionserver.HRegion(4965): Region is too busy due to exceeding memstore size limit. org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=256.0 K, regionName=fb4efbfaac030a4093735b39d65fef7f, server=jenkins-hbase16.apache.org,39771,1689945615330 at org.apache.hadoop.hbase.regionserver.HRegion.checkResources(HRegion.java:4963) at org.apache.hadoop.hbase.regionserver.HRegion.increment(HRegion.java:8417) at org.apache.hadoop.hbase.regionserver.RSRpcServices.increment(RSRpcServices.java:713) at org.apache.hadoop.hbase.regionserver.RSRpcServices.mutate(RSRpcServices.java:2966) at org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:44994) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 13:20:27,535 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39771] ipc.CallRunner(144): callId: 6878 service: ClientService methodName: Mutate size: 199 connection: 188.40.62.62:57160 deadline: 1689945687534, exception=org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=256.0 K, regionName=fb4efbfaac030a4093735b39d65fef7f, server=jenkins-hbase16.apache.org,39771,1689945615330 2023-07-21 13:20:27,536 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39771] regionserver.HRegion(4965): Region is too busy due to exceeding memstore size limit. 
org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=256.0 K, regionName=fb4efbfaac030a4093735b39d65fef7f, server=jenkins-hbase16.apache.org,39771,1689945615330 at org.apache.hadoop.hbase.regionserver.HRegion.checkResources(HRegion.java:4963) at org.apache.hadoop.hbase.regionserver.HRegion.increment(HRegion.java:8417) at org.apache.hadoop.hbase.regionserver.RSRpcServices.increment(RSRpcServices.java:713) at org.apache.hadoop.hbase.regionserver.RSRpcServices.mutate(RSRpcServices.java:2966) at org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:44994) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 13:20:27,536 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39771] ipc.CallRunner(144): callId: 6880 service: ClientService methodName: Mutate size: 198 connection: 188.40.62.62:57160 deadline: 1689945687536, exception=org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=256.0 K, regionName=fb4efbfaac030a4093735b39d65fef7f, server=jenkins-hbase16.apache.org,39771,1689945615330 2023-07-21 13:20:27,537 WARN [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=39771] regionserver.HRegion(4965): Region is too busy due to exceeding memstore size limit. org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=256.0 K, regionName=fb4efbfaac030a4093735b39d65fef7f, server=jenkins-hbase16.apache.org,39771,1689945615330 at org.apache.hadoop.hbase.regionserver.HRegion.checkResources(HRegion.java:4963) at org.apache.hadoop.hbase.regionserver.HRegion.increment(HRegion.java:8417) at org.apache.hadoop.hbase.regionserver.RSRpcServices.increment(RSRpcServices.java:713) at org.apache.hadoop.hbase.regionserver.RSRpcServices.mutate(RSRpcServices.java:2966) at org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:44994) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 13:20:27,537 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=39771] ipc.CallRunner(144): callId: 6881 service: ClientService methodName: Mutate size: 198 connection: 188.40.62.62:57160 deadline: 1689945687536, exception=org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=256.0 K, regionName=fb4efbfaac030a4093735b39d65fef7f, server=jenkins-hbase16.apache.org,39771,1689945615330 2023-07-21 13:20:27,537 WARN [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=39771] regionserver.HRegion(4965): Region is too busy due to exceeding memstore size limit. 
org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=256.0 K, regionName=fb4efbfaac030a4093735b39d65fef7f, server=jenkins-hbase16.apache.org,39771,1689945615330 at org.apache.hadoop.hbase.regionserver.HRegion.checkResources(HRegion.java:4963) at org.apache.hadoop.hbase.regionserver.HRegion.increment(HRegion.java:8417) at org.apache.hadoop.hbase.regionserver.RSRpcServices.increment(RSRpcServices.java:713) at org.apache.hadoop.hbase.regionserver.RSRpcServices.mutate(RSRpcServices.java:2966) at org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:44994) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-21 13:20:27,540 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=39771] ipc.CallRunner(144): callId: 6883 service: ClientService methodName: Mutate size: 199 connection: 188.40.62.62:57160 deadline: 1689945687537, exception=org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=256.0 K, regionName=fb4efbfaac030a4093735b39d65fef7f, server=jenkins-hbase16.apache.org,39771,1689945615330 2023-07-21 13:20:27,572 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=61.88 KB at sequenceid=6580 (bloomFilter=true), to=hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/.tmp/cf/71a0db2a51184dbc9e6da966d2435874 2023-07-21 13:20:27,604 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/.tmp/cf/71a0db2a51184dbc9e6da966d2435874 as hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/cf/71a0db2a51184dbc9e6da966d2435874 2023-07-21 13:20:27,614 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/cf/71a0db2a51184dbc9e6da966d2435874, entries=2, sequenceid=6580, filesize=4.8 K 2023-07-21 13:20:27,623 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~61.88 KB/63360, heapSize ~192.73 KB/197360, currentSize=20.46 KB/20952 for fb4efbfaac030a4093735b39d65fef7f in 572ms, sequenceid=6580, compaction requested=true 2023-07-21 13:20:27,623 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for fb4efbfaac030a4093735b39d65fef7f: 2023-07-21 13:20:27,624 DEBUG [MemStoreFlusher.0] regionserver.CompactSplit(385): Small Compaction requested: system; Because: MemStoreFlusher.0; compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-07-21 13:20:27,624 DEBUG [RS:0;jenkins-hbase16:39771-shortCompactions-0] compactions.SortedCompactionPolicy(75): Selecting compaction from 3 store files, 0 compacting, 3 eligible, 16 blocking 2023-07-21 13:20:27,626 DEBUG [RS:0;jenkins-hbase16:39771-shortCompactions-0] compactions.ExploringCompactionPolicy(116): Exploring compaction algorithm has selected 3 files of size 15508 starting at candidate #0 after considering 1 permutations with 1 in ratio 
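The RegionTooBusyException entries above are the region server's back-pressure mechanism: HRegion.checkResources rejects new mutations once the region's memstore passes its blocking limit (256.0 K here), and the client is expected to back off and retry until a flush such as the ones at 13:20:26,980 and 13:20:27,572 drains the memstore. Below is a minimal client-side sketch of that pattern, not part of the test itself: it assumes the TestCP table and cf column family seen in this log, uses a hypothetical row key and qualifier, and notes that whether RegionTooBusyException surfaces directly or wrapped in a retries-exhausted exception depends on the client's retry settings.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.RegionTooBusyException;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class IncrementWithBackoff {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Table table = conn.getTable(TableName.valueOf("TestCP"))) {
      byte[] row = Bytes.toBytes("row-0");       // hypothetical row key
      byte[] family = Bytes.toBytes("cf");       // column family used by this test
      byte[] qualifier = Bytes.toBytes("cq");    // hypothetical qualifier
      for (int attempt = 1; attempt <= 10; attempt++) {
        try {
          table.incrementColumnValue(row, family, qualifier, 1L);
          break; // increment accepted
        } catch (RegionTooBusyException e) {
          // The region is over its blocking memstore limit; wait for a flush and retry.
          // Depending on hbase.client.retries.number this may instead arrive wrapped in a
          // retries-exhausted exception rather than being caught here directly.
          Thread.sleep(100L * attempt);
        }
      }
    }
  }
}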
2023-07-21 13:20:27,626 DEBUG [RS:0;jenkins-hbase16:39771-shortCompactions-0] regionserver.HStore(1912): fb4efbfaac030a4093735b39d65fef7f/cf is initiating minor compaction (all files) 2023-07-21 13:20:27,627 INFO [RS:0;jenkins-hbase16:39771-shortCompactions-0] regionserver.HRegion(2259): Starting compaction of fb4efbfaac030a4093735b39d65fef7f/cf in TestCP,,1689945619385.fb4efbfaac030a4093735b39d65fef7f. 2023-07-21 13:20:27,627 INFO [RS:0;jenkins-hbase16:39771-shortCompactions-0] regionserver.HStore(1468): Starting compaction of [hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/cf/eda583307daf41458d4f1454e02faf92, hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/cf/b6251e10f595433691d0f05b40d6e069, hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/cf/71a0db2a51184dbc9e6da966d2435874] into tmpdir=hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/.tmp, totalSize=15.1 K 2023-07-21 13:20:27,628 DEBUG [RS:0;jenkins-hbase16:39771-shortCompactions-0] compactions.Compactor(207): Compacting eda583307daf41458d4f1454e02faf92, keycount=2, bloomtype=ROW, size=5.6 K, encoding=NONE, compression=NONE, seqNum=5401, earliestPutTs=1730504314969088 2023-07-21 13:20:27,629 DEBUG [RS:0;jenkins-hbase16:39771-shortCompactions-0] compactions.Compactor(207): Compacting b6251e10f595433691d0f05b40d6e069, keycount=2, bloomtype=ROW, size=4.8 K, encoding=NONE, compression=NONE, seqNum=5696, earliestPutTs=1730504321162240 2023-07-21 13:20:27,629 DEBUG [RS:0;jenkins-hbase16:39771-shortCompactions-0] compactions.Compactor(207): Compacting 71a0db2a51184dbc9e6da966d2435874, keycount=2, bloomtype=ROW, size=4.8 K, encoding=NONE, compression=NONE, seqNum=6580, earliestPutTs=1730504321391616 2023-07-21 13:20:27,674 INFO [RS:0;jenkins-hbase16:39771-shortCompactions-0] throttle.PressureAwareThroughputController(145): fb4efbfaac030a4093735b39d65fef7f#cf#compaction#28 average throughput is unlimited, slept 0 time(s) and total slept time is 0 ms. 0 active operations remaining, total limit is 50.00 MB/second 2023-07-21 13:20:27,783 DEBUG [RS:0;jenkins-hbase16:39771-shortCompactions-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/.tmp/cf/acbd5e9fb4d84e4f97525ab93d158302 as hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/cf/acbd5e9fb4d84e4f97525ab93d158302 2023-07-21 13:20:27,798 INFO [RS:0;jenkins-hbase16:39771-shortCompactions-0] regionserver.HStore(1652): Completed compaction of 3 (all) file(s) in fb4efbfaac030a4093735b39d65fef7f/cf of fb4efbfaac030a4093735b39d65fef7f into acbd5e9fb4d84e4f97525ab93d158302(size=5.7 K), total size for store is 5.7 K. This selection was in queue for 0sec, and took 0sec to execute. 
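After the third flush the store holds three HFiles, which matches the default minor-compaction minimum of three eligible files, so the ExploringCompactionPolicy selects all three (15.1 K total) and rewrites them into a single 5.7 K file. The sketch below shows how a compaction of the test table could be requested explicitly through the Admin API; it is an illustration rather than what the test does, and hbase.hstore.compactionThreshold is mentioned only as the server-side knob that makes three files eligible.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class RequestCompaction {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Admin admin = conn.getAdmin()) {
      // Ask the region servers to compact all regions of the test table. The server still
      // chooses which store files to merge, exactly as the ExploringCompactionPolicy entries
      // above show (3 eligible files rewritten into one).
      admin.compact(TableName.valueOf("TestCP"));
      // admin.majorCompact(TableName.valueOf("TestCP")) would request a full rewrite instead.
    }
  }
}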
2023-07-21 13:20:27,798 DEBUG [RS:0;jenkins-hbase16:39771-shortCompactions-0] regionserver.HRegion(2289): Compaction status journal for fb4efbfaac030a4093735b39d65fef7f: 2023-07-21 13:20:27,798 INFO [RS:0;jenkins-hbase16:39771-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(627): Completed compaction region=TestCP,,1689945619385.fb4efbfaac030a4093735b39d65fef7f., storeName=fb4efbfaac030a4093735b39d65fef7f/cf, priority=13, startTime=1689945627624; duration=0sec 2023-07-21 13:20:27,798 DEBUG [RS:0;jenkins-hbase16:39771-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(659): Status compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-07-21 13:20:27,843 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=39771] regionserver.HRegion(9158): Flush requested on fb4efbfaac030a4093735b39d65fef7f 2023-07-21 13:20:27,845 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing fb4efbfaac030a4093735b39d65fef7f 1/1 column families, dataSize=20.53 KB heapSize=64.13 KB 2023-07-21 13:20:27,951 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=20.53 KB at sequenceid=6876 (bloomFilter=true), to=hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/.tmp/cf/86a0974c89e54a279e8c5c6b0ea229b7 2023-07-21 13:20:27,965 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/.tmp/cf/86a0974c89e54a279e8c5c6b0ea229b7 as hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/cf/86a0974c89e54a279e8c5c6b0ea229b7 2023-07-21 13:20:27,975 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/cf/86a0974c89e54a279e8c5c6b0ea229b7, entries=2, sequenceid=6876, filesize=4.8 K 2023-07-21 13:20:27,977 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~20.53 KB/21024, heapSize ~64.11 KB/65648, currentSize=13.43 KB/13752 for fb4efbfaac030a4093735b39d65fef7f in 132ms, sequenceid=6876, compaction requested=false 2023-07-21 13:20:27,977 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for fb4efbfaac030a4093735b39d65fef7f: 2023-07-21 13:20:28,025 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39771] regionserver.HRegion(9158): Flush requested on fb4efbfaac030a4093735b39d65fef7f 2023-07-21 13:20:28,025 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing fb4efbfaac030a4093735b39d65fef7f 1/1 column families, dataSize=20.67 KB heapSize=64.56 KB 2023-07-21 13:20:28,160 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=20.74 KB at sequenceid=7174 (bloomFilter=true), to=hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/.tmp/cf/ec59e00d608a42f58171564dd66c94e5 2023-07-21 13:20:28,170 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/.tmp/cf/ec59e00d608a42f58171564dd66c94e5 
as hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/cf/ec59e00d608a42f58171564dd66c94e5 2023-07-21 13:20:28,180 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/cf/ec59e00d608a42f58171564dd66c94e5, entries=2, sequenceid=7174, filesize=4.8 K 2023-07-21 13:20:28,189 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~20.74 KB/21240, heapSize ~64.77 KB/66320, currentSize=17.86 KB/18288 for fb4efbfaac030a4093735b39d65fef7f in 164ms, sequenceid=7174, compaction requested=true 2023-07-21 13:20:28,190 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for fb4efbfaac030a4093735b39d65fef7f: 2023-07-21 13:20:28,190 DEBUG [MemStoreFlusher.0] regionserver.CompactSplit(385): Small Compaction requested: system; Because: MemStoreFlusher.0; compactionQueue=(longCompactions=0:shortCompactions=1), splitQueue=0 2023-07-21 13:20:28,190 DEBUG [RS:0;jenkins-hbase16:39771-longCompactions-0] compactions.SortedCompactionPolicy(75): Selecting compaction from 3 store files, 0 compacting, 3 eligible, 16 blocking 2023-07-21 13:20:28,192 DEBUG [RS:0;jenkins-hbase16:39771-longCompactions-0] compactions.ExploringCompactionPolicy(116): Exploring compaction algorithm has selected 3 files of size 15610 starting at candidate #0 after considering 1 permutations with 1 in ratio 2023-07-21 13:20:28,192 DEBUG [RS:0;jenkins-hbase16:39771-longCompactions-0] regionserver.HStore(1912): fb4efbfaac030a4093735b39d65fef7f/cf is initiating minor compaction (all files) 2023-07-21 13:20:28,193 INFO [RS:0;jenkins-hbase16:39771-longCompactions-0] regionserver.HRegion(2259): Starting compaction of fb4efbfaac030a4093735b39d65fef7f/cf in TestCP,,1689945619385.fb4efbfaac030a4093735b39d65fef7f. 
2023-07-21 13:20:28,193 INFO [RS:0;jenkins-hbase16:39771-longCompactions-0] regionserver.HStore(1468): Starting compaction of [hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/cf/acbd5e9fb4d84e4f97525ab93d158302, hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/cf/86a0974c89e54a279e8c5c6b0ea229b7, hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/cf/ec59e00d608a42f58171564dd66c94e5] into tmpdir=hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/.tmp, totalSize=15.2 K 2023-07-21 13:20:28,194 DEBUG [RS:0;jenkins-hbase16:39771-longCompactions-0] compactions.Compactor(207): Compacting acbd5e9fb4d84e4f97525ab93d158302, keycount=2, bloomtype=ROW, size=5.7 K, encoding=NONE, compression=NONE, seqNum=6580, earliestPutTs=1730504314969088 2023-07-21 13:20:28,194 DEBUG [RS:0;jenkins-hbase16:39771-longCompactions-0] compactions.Compactor(207): Compacting 86a0974c89e54a279e8c5c6b0ea229b7, keycount=2, bloomtype=ROW, size=4.8 K, encoding=NONE, compression=NONE, seqNum=6876, earliestPutTs=1730504322100225 2023-07-21 13:20:28,195 DEBUG [RS:0;jenkins-hbase16:39771-longCompactions-0] compactions.Compactor(207): Compacting ec59e00d608a42f58171564dd66c94e5, keycount=2, bloomtype=ROW, size=4.8 K, encoding=NONE, compression=NONE, seqNum=7174, earliestPutTs=1730504322914304 2023-07-21 13:20:28,204 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39771] regionserver.HRegion(9158): Flush requested on fb4efbfaac030a4093735b39d65fef7f 2023-07-21 13:20:28,206 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing fb4efbfaac030a4093735b39d65fef7f 1/1 column families, dataSize=20.60 KB heapSize=64.34 KB 2023-07-21 13:20:28,251 INFO [RS:0;jenkins-hbase16:39771-longCompactions-0] throttle.PressureAwareThroughputController(145): fb4efbfaac030a4093735b39d65fef7f#cf#compaction#32 average throughput is 0.07 MB/second, slept 0 time(s) and total slept time is 0 ms. 0 active operations remaining, total limit is 50.00 MB/second 2023-07-21 13:20:28,383 DEBUG [RS:0;jenkins-hbase16:39771-longCompactions-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/.tmp/cf/99db378d19d543489b27f5a87f5d7132 as hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/cf/99db378d19d543489b27f5a87f5d7132 2023-07-21 13:20:28,394 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=20.74 KB at sequenceid=7472 (bloomFilter=true), to=hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/.tmp/cf/b17f60c594b84f0489b24c7d615f6dc3 2023-07-21 13:20:28,394 INFO [RS:0;jenkins-hbase16:39771-longCompactions-0] regionserver.HStore(1652): Completed compaction of 3 (all) file(s) in fb4efbfaac030a4093735b39d65fef7f/cf of fb4efbfaac030a4093735b39d65fef7f into 99db378d19d543489b27f5a87f5d7132(size=5.8 K), total size for store is 5.8 K. This selection was in queue for 0sec, and took 0sec to execute. 
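Each "Exploring compaction algorithm has selected 3 files..." record above is the store picking a set of HFiles whose sizes are close enough to be worth merging: loosely, a file only stays in a candidate selection if it is no larger than the combined size of the other candidates times the configured ratio. The snippet below illustrates that size-ratio test on roughly the three file sizes from this run (5.7 K + 4.8 K + 4.8 K ≈ 15.2 K); it is a simplified illustration of the idea, not HBase's ExploringCompactionPolicy source, and the 1.2 ratio is the usual default rather than a value read from the test.

```java
import java.util.List;

// Simplified illustration of the size-ratio test used when picking files to
// compact: a file stays in the selection only if it is not "too much bigger"
// than the rest of the selection combined. This is not the HBase source.
public class RatioCheckSketch {
    static boolean withinRatio(List<Long> fileSizes, double ratio) {
        long total = fileSizes.stream().mapToLong(Long::longValue).sum();
        for (long size : fileSizes) {
            // Compare each file against the sum of the *other* files, scaled by the ratio.
            if (size > (total - size) * ratio) {
                return false;
            }
        }
        return true;
    }

    public static void main(String[] args) {
        // Approximate sizes (bytes) of the three files selected above: ~5.7 K, 4.8 K, 4.8 K.
        List<Long> selection = List.of(5_800L, 4_900L, 4_900L);
        System.out.println("eligible = " + withinRatio(selection, 1.2));
    }
}
```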
2023-07-21 13:20:28,394 DEBUG [RS:0;jenkins-hbase16:39771-longCompactions-0] regionserver.HRegion(2289): Compaction status journal for fb4efbfaac030a4093735b39d65fef7f: 2023-07-21 13:20:28,394 INFO [RS:0;jenkins-hbase16:39771-longCompactions-0] regionserver.CompactSplit$CompactionRunner(627): Completed compaction region=TestCP,,1689945619385.fb4efbfaac030a4093735b39d65fef7f., storeName=fb4efbfaac030a4093735b39d65fef7f/cf, priority=13, startTime=1689945628190; duration=0sec 2023-07-21 13:20:28,395 DEBUG [RS:0;jenkins-hbase16:39771-longCompactions-0] regionserver.CompactSplit$CompactionRunner(659): Status compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-07-21 13:20:28,404 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/.tmp/cf/b17f60c594b84f0489b24c7d615f6dc3 as hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/cf/b17f60c594b84f0489b24c7d615f6dc3 2023-07-21 13:20:28,415 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/cf/b17f60c594b84f0489b24c7d615f6dc3, entries=2, sequenceid=7472, filesize=4.8 K 2023-07-21 13:20:28,417 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~20.74 KB/21240, heapSize ~64.77 KB/66320, currentSize=25.38 KB/25992 for fb4efbfaac030a4093735b39d65fef7f in 211ms, sequenceid=7472, compaction requested=false 2023-07-21 13:20:28,417 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for fb4efbfaac030a4093735b39d65fef7f: 2023-07-21 13:20:28,417 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39771] regionserver.HRegion(9158): Flush requested on fb4efbfaac030a4093735b39d65fef7f 2023-07-21 13:20:28,417 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing fb4efbfaac030a4093735b39d65fef7f 1/1 column families, dataSize=25.45 KB heapSize=79.44 KB 2023-07-21 13:20:28,548 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=25.66 KB at sequenceid=7841 (bloomFilter=true), to=hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/.tmp/cf/633d571cc1c64a989b1e08364b2d8383 2023-07-21 13:20:28,564 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/.tmp/cf/633d571cc1c64a989b1e08364b2d8383 as hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/cf/633d571cc1c64a989b1e08364b2d8383 2023-07-21 13:20:28,587 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/cf/633d571cc1c64a989b1e08364b2d8383, entries=2, sequenceid=7841, filesize=4.8 K 2023-07-21 13:20:28,588 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~25.66 KB/26280, heapSize ~80.08 KB/82000, currentSize=23.48 KB/24048 for 
fb4efbfaac030a4093735b39d65fef7f in 171ms, sequenceid=7841, compaction requested=true 2023-07-21 13:20:28,588 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for fb4efbfaac030a4093735b39d65fef7f: 2023-07-21 13:20:28,589 DEBUG [MemStoreFlusher.0] regionserver.CompactSplit(385): Small Compaction requested: system; Because: MemStoreFlusher.0; compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-07-21 13:20:28,589 DEBUG [RS:0;jenkins-hbase16:39771-shortCompactions-0] compactions.SortedCompactionPolicy(75): Selecting compaction from 3 store files, 0 compacting, 3 eligible, 16 blocking 2023-07-21 13:20:28,589 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39771] regionserver.HRegion(9158): Flush requested on fb4efbfaac030a4093735b39d65fef7f 2023-07-21 13:20:28,590 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing fb4efbfaac030a4093735b39d65fef7f 1/1 column families, dataSize=23.70 KB heapSize=73.97 KB 2023-07-21 13:20:28,591 DEBUG [RS:0;jenkins-hbase16:39771-shortCompactions-0] compactions.ExploringCompactionPolicy(116): Exploring compaction algorithm has selected 3 files of size 15712 starting at candidate #0 after considering 1 permutations with 1 in ratio 2023-07-21 13:20:28,591 DEBUG [RS:0;jenkins-hbase16:39771-shortCompactions-0] regionserver.HStore(1912): fb4efbfaac030a4093735b39d65fef7f/cf is initiating minor compaction (all files) 2023-07-21 13:20:28,591 INFO [RS:0;jenkins-hbase16:39771-shortCompactions-0] regionserver.HRegion(2259): Starting compaction of fb4efbfaac030a4093735b39d65fef7f/cf in TestCP,,1689945619385.fb4efbfaac030a4093735b39d65fef7f. 2023-07-21 13:20:28,591 INFO [RS:0;jenkins-hbase16:39771-shortCompactions-0] regionserver.HStore(1468): Starting compaction of [hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/cf/99db378d19d543489b27f5a87f5d7132, hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/cf/b17f60c594b84f0489b24c7d615f6dc3, hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/cf/633d571cc1c64a989b1e08364b2d8383] into tmpdir=hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/.tmp, totalSize=15.3 K 2023-07-21 13:20:28,592 DEBUG [RS:0;jenkins-hbase16:39771-shortCompactions-0] compactions.Compactor(207): Compacting 99db378d19d543489b27f5a87f5d7132, keycount=2, bloomtype=ROW, size=5.8 K, encoding=NONE, compression=NONE, seqNum=7174, earliestPutTs=1730504314969088 2023-07-21 13:20:28,593 DEBUG [RS:0;jenkins-hbase16:39771-shortCompactions-0] compactions.Compactor(207): Compacting b17f60c594b84f0489b24c7d615f6dc3, keycount=2, bloomtype=ROW, size=4.8 K, encoding=NONE, compression=NONE, seqNum=7472, earliestPutTs=1730504323098624 2023-07-21 13:20:28,593 DEBUG [RS:0;jenkins-hbase16:39771-shortCompactions-0] compactions.Compactor(207): Compacting 633d571cc1c64a989b1e08364b2d8383, keycount=2, bloomtype=ROW, size=4.8 K, encoding=NONE, compression=NONE, seqNum=7841, earliestPutTs=1730504323286016 2023-07-21 13:20:28,657 INFO [RS:0;jenkins-hbase16:39771-shortCompactions-0] throttle.PressureAwareThroughputController(145): fb4efbfaac030a4093735b39d65fef7f#cf#compaction#35 average throughput is unlimited, slept 0 time(s) and total 
slept time is 0 ms. 0 active operations remaining, total limit is 50.00 MB/second 2023-07-21 13:20:28,731 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=23.91 KB at sequenceid=8184 (bloomFilter=true), to=hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/.tmp/cf/2bb0b746af1443ffabfc9755f446c6ed 2023-07-21 13:20:28,759 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/.tmp/cf/2bb0b746af1443ffabfc9755f446c6ed as hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/cf/2bb0b746af1443ffabfc9755f446c6ed 2023-07-21 13:20:28,771 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/cf/2bb0b746af1443ffabfc9755f446c6ed, entries=2, sequenceid=8184, filesize=4.8 K 2023-07-21 13:20:28,774 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~23.91 KB/24480, heapSize ~74.61 KB/76400, currentSize=14.77 KB/15120 for fb4efbfaac030a4093735b39d65fef7f in 184ms, sequenceid=8184, compaction requested=false 2023-07-21 13:20:28,774 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for fb4efbfaac030a4093735b39d65fef7f: 2023-07-21 13:20:28,815 DEBUG [RS:0;jenkins-hbase16:39771-shortCompactions-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/.tmp/cf/e88f6f1b7b9040d1af8141b69a9ea155 as hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/cf/e88f6f1b7b9040d1af8141b69a9ea155 2023-07-21 13:20:28,837 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=39771] regionserver.HRegion(9158): Flush requested on fb4efbfaac030a4093735b39d65fef7f 2023-07-21 13:20:28,837 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing fb4efbfaac030a4093735b39d65fef7f 1/1 column families, dataSize=20.53 KB heapSize=64.13 KB 2023-07-21 13:20:28,843 INFO [RS:0;jenkins-hbase16:39771-shortCompactions-0] regionserver.HStore(1652): Completed compaction of 3 (all) file(s) in fb4efbfaac030a4093735b39d65fef7f/cf of fb4efbfaac030a4093735b39d65fef7f into e88f6f1b7b9040d1af8141b69a9ea155(size=5.9 K), total size for store is 10.7 K. This selection was in queue for 0sec, and took 0sec to execute. 
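The PressureAwareThroughputController records report how much each compaction writer was throttled; with this little data it never has to sleep, and the total limit stays at 50 MB/s (the lower bound the throughput tuner reports near the end of this excerpt). Conceptually the controller meters bytes written and sleeps whenever the running rate would exceed the current limit. Below is a stripped-down sketch of that metering idea, offered only as an illustration of the concept rather than HBase's implementation:

```java
// Minimal sketch of rate-limiting a stream of writes to a bytes-per-second
// budget, in the spirit of the throughput controller messages in the log.
// This is an illustration of the concept, not HBase's implementation.
public class ThroughputLimiterSketch {
    private final double bytesPerSecond;
    private final long windowStartNanos = System.nanoTime();
    private long bytesInWindow = 0;

    ThroughputLimiterSketch(double bytesPerSecond) {
        this.bytesPerSecond = bytesPerSecond;
    }

    /** Record a write of {@code bytes} and sleep if we are ahead of the budget. */
    void control(long bytes) throws InterruptedException {
        bytesInWindow += bytes;
        double elapsedSec = (System.nanoTime() - windowStartNanos) / 1e9;
        double allowed = bytesPerSecond * elapsedSec;
        if (bytesInWindow > allowed) {
            // Sleep just long enough for the budget to catch up with what we wrote.
            long sleepMs = (long) (((bytesInWindow - allowed) / bytesPerSecond) * 1000);
            Thread.sleep(Math.max(1, sleepMs));
        }
    }

    public static void main(String[] args) throws InterruptedException {
        // 50 MB/s, matching the "total limit is 50.00 MB/second" lines above.
        ThroughputLimiterSketch limiter = new ThroughputLimiterSketch(50.0 * 1024 * 1024);
        for (int i = 0; i < 10; i++) {
            limiter.control(1024 * 1024); // pretend we just wrote 1 MB of compaction output
        }
    }
}
```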
2023-07-21 13:20:28,843 DEBUG [RS:0;jenkins-hbase16:39771-shortCompactions-0] regionserver.HRegion(2289): Compaction status journal for fb4efbfaac030a4093735b39d65fef7f: 2023-07-21 13:20:28,844 INFO [RS:0;jenkins-hbase16:39771-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(627): Completed compaction region=TestCP,,1689945619385.fb4efbfaac030a4093735b39d65fef7f., storeName=fb4efbfaac030a4093735b39d65fef7f/cf, priority=13, startTime=1689945628589; duration=0sec 2023-07-21 13:20:28,844 DEBUG [RS:0;jenkins-hbase16:39771-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(659): Status compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-07-21 13:20:28,952 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=20.60 KB at sequenceid=8480 (bloomFilter=true), to=hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/.tmp/cf/10bd1f3834a74c5dadc616dea8956bc9 2023-07-21 13:20:28,964 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/.tmp/cf/10bd1f3834a74c5dadc616dea8956bc9 as hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/cf/10bd1f3834a74c5dadc616dea8956bc9 2023-07-21 13:20:28,974 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/cf/10bd1f3834a74c5dadc616dea8956bc9, entries=2, sequenceid=8480, filesize=4.8 K 2023-07-21 13:20:28,976 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~20.60 KB/21096, heapSize ~64.33 KB/65872, currentSize=17.58 KB/18000 for fb4efbfaac030a4093735b39d65fef7f in 139ms, sequenceid=8480, compaction requested=true 2023-07-21 13:20:28,976 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for fb4efbfaac030a4093735b39d65fef7f: 2023-07-21 13:20:28,976 DEBUG [MemStoreFlusher.0] regionserver.CompactSplit(385): Small Compaction requested: system; Because: MemStoreFlusher.0; compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-07-21 13:20:28,977 DEBUG [RS:0;jenkins-hbase16:39771-shortCompactions-0] compactions.SortedCompactionPolicy(75): Selecting compaction from 3 store files, 0 compacting, 3 eligible, 16 blocking 2023-07-21 13:20:28,979 DEBUG [RS:0;jenkins-hbase16:39771-shortCompactions-0] compactions.ExploringCompactionPolicy(116): Exploring compaction algorithm has selected 3 files of size 15814 starting at candidate #0 after considering 1 permutations with 1 in ratio 2023-07-21 13:20:28,979 DEBUG [RS:0;jenkins-hbase16:39771-shortCompactions-0] regionserver.HStore(1912): fb4efbfaac030a4093735b39d65fef7f/cf is initiating minor compaction (all files) 2023-07-21 13:20:28,979 INFO [RS:0;jenkins-hbase16:39771-shortCompactions-0] regionserver.HRegion(2259): Starting compaction of fb4efbfaac030a4093735b39d65fef7f/cf in TestCP,,1689945619385.fb4efbfaac030a4093735b39d65fef7f. 
2023-07-21 13:20:28,979 INFO [RS:0;jenkins-hbase16:39771-shortCompactions-0] regionserver.HStore(1468): Starting compaction of [hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/cf/e88f6f1b7b9040d1af8141b69a9ea155, hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/cf/2bb0b746af1443ffabfc9755f446c6ed, hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/cf/10bd1f3834a74c5dadc616dea8956bc9] into tmpdir=hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/.tmp, totalSize=15.4 K 2023-07-21 13:20:28,980 DEBUG [RS:0;jenkins-hbase16:39771-shortCompactions-0] compactions.Compactor(207): Compacting e88f6f1b7b9040d1af8141b69a9ea155, keycount=2, bloomtype=ROW, size=5.9 K, encoding=NONE, compression=NONE, seqNum=7841, earliestPutTs=1730504314969088 2023-07-21 13:20:28,981 DEBUG [RS:0;jenkins-hbase16:39771-shortCompactions-0] compactions.Compactor(207): Compacting 2bb0b746af1443ffabfc9755f446c6ed, keycount=2, bloomtype=ROW, size=4.8 K, encoding=NONE, compression=NONE, seqNum=8184, earliestPutTs=1730504323500032 2023-07-21 13:20:28,982 DEBUG [RS:0;jenkins-hbase16:39771-shortCompactions-0] compactions.Compactor(207): Compacting 10bd1f3834a74c5dadc616dea8956bc9, keycount=2, bloomtype=ROW, size=4.8 K, encoding=NONE, compression=NONE, seqNum=8480, earliestPutTs=1730504323676160 2023-07-21 13:20:28,997 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39771] regionserver.HRegion(9158): Flush requested on fb4efbfaac030a4093735b39d65fef7f 2023-07-21 13:20:28,997 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing fb4efbfaac030a4093735b39d65fef7f 1/1 column families, dataSize=20.60 KB heapSize=64.34 KB 2023-07-21 13:20:29,012 INFO [RS:0;jenkins-hbase16:39771-shortCompactions-0] throttle.PressureAwareThroughputController(145): fb4efbfaac030a4093735b39d65fef7f#cf#compaction#38 average throughput is unlimited, slept 0 time(s) and total slept time is 0 ms. 
0 active operations remaining, total limit is 50.00 MB/second 2023-07-21 13:20:29,107 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=22.15 KB at sequenceid=8799 (bloomFilter=true), to=hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/.tmp/cf/21171fc95ec6488cb987b2f4b88ff23f 2023-07-21 13:20:29,118 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/.tmp/cf/21171fc95ec6488cb987b2f4b88ff23f as hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/cf/21171fc95ec6488cb987b2f4b88ff23f 2023-07-21 13:20:29,147 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/cf/21171fc95ec6488cb987b2f4b88ff23f, entries=2, sequenceid=8799, filesize=4.8 K 2023-07-21 13:20:29,150 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~22.15 KB/22680, heapSize ~69.14 KB/70800, currentSize=18.91 KB/19368 for fb4efbfaac030a4093735b39d65fef7f in 152ms, sequenceid=8799, compaction requested=false 2023-07-21 13:20:29,150 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for fb4efbfaac030a4093735b39d65fef7f: 2023-07-21 13:20:29,168 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39771] regionserver.HRegion(9158): Flush requested on fb4efbfaac030a4093735b39d65fef7f 2023-07-21 13:20:29,169 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing fb4efbfaac030a4093735b39d65fef7f 1/1 column families, dataSize=20.53 KB heapSize=64.13 KB 2023-07-21 13:20:29,282 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=20.74 KB at sequenceid=9097 (bloomFilter=true), to=hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/.tmp/cf/a3148d6e2c394ec09084bc393fdf3466 2023-07-21 13:20:29,307 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/.tmp/cf/a3148d6e2c394ec09084bc393fdf3466 as hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/cf/a3148d6e2c394ec09084bc393fdf3466 2023-07-21 13:20:29,321 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/cf/a3148d6e2c394ec09084bc393fdf3466, entries=2, sequenceid=9097, filesize=4.8 K 2023-07-21 13:20:29,325 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~20.74 KB/21240, heapSize ~64.77 KB/66320, currentSize=14.63 KB/14976 for fb4efbfaac030a4093735b39d65fef7f in 157ms, sequenceid=9097, compaction requested=false 2023-07-21 13:20:29,325 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for fb4efbfaac030a4093735b39d65fef7f: 2023-07-21 13:20:29,372 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39771] regionserver.HRegion(9158): Flush requested on fb4efbfaac030a4093735b39d65fef7f 2023-07-21 13:20:29,372 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing fb4efbfaac030a4093735b39d65fef7f 1/1 column families, dataSize=20.53 KB heapSize=64.13 KB 2023-07-21 13:20:29,425 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=20.74 KB at sequenceid=9395 (bloomFilter=true), to=hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/.tmp/cf/14269a023a0c489f96ea37312c01d58e 2023-07-21 13:20:29,437 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/.tmp/cf/14269a023a0c489f96ea37312c01d58e as hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/cf/14269a023a0c489f96ea37312c01d58e 2023-07-21 13:20:29,447 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/cf/14269a023a0c489f96ea37312c01d58e, entries=2, sequenceid=9395, filesize=4.8 K 2023-07-21 13:20:29,450 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~20.74 KB/21240, heapSize ~64.77 KB/66320, currentSize=10.20 KB/10440 for fb4efbfaac030a4093735b39d65fef7f in 78ms, sequenceid=9395, compaction requested=true 2023-07-21 13:20:29,451 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for fb4efbfaac030a4093735b39d65fef7f: 2023-07-21 13:20:29,451 DEBUG [RS:0;jenkins-hbase16:39771-longCompactions-0] compactions.SortedCompactionPolicy(75): Selecting compaction from 6 store files, 3 compacting, 3 eligible, 16 blocking 2023-07-21 13:20:29,451 DEBUG [MemStoreFlusher.0] regionserver.CompactSplit(385): Small Compaction requested: system; Because: MemStoreFlusher.0; compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-07-21 13:20:29,453 DEBUG [RS:0;jenkins-hbase16:39771-longCompactions-0] compactions.ExploringCompactionPolicy(116): Exploring compaction algorithm has selected 3 files of size 14718 starting at candidate #0 after considering 1 permutations with 1 in ratio 2023-07-21 13:20:29,453 DEBUG [RS:0;jenkins-hbase16:39771-longCompactions-0] regionserver.HStore(1912): fb4efbfaac030a4093735b39d65fef7f/cf is initiating minor compaction 2023-07-21 13:20:29,453 INFO [RS:0;jenkins-hbase16:39771-longCompactions-0] regionserver.HRegion(2259): Starting compaction of fb4efbfaac030a4093735b39d65fef7f/cf in TestCP,,1689945619385.fb4efbfaac030a4093735b39d65fef7f. 
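The "Selecting compaction from 6 store files, 3 compacting, 3 eligible, 16 blocking" record above reflects the two thresholds driving this steady flush-and-compact churn: a minor compaction becomes eligible once the store reaches the minimum file count (three here), and writes would be stalled if the store ever accumulated 16 files. A hedged configuration sketch naming those settings (the values shown are the common defaults, assumed rather than read from this test run):

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

// Illustrative sketch of the store-file thresholds behind the
// "3 eligible, 16 blocking" compaction-selection lines. Values are the usual
// defaults and are assumptions, not values read from this test run.
public class CompactionThresholds {
    public static void main(String[] args) {
        Configuration conf = HBaseConfiguration.create();
        // Minimum number of store files before a minor compaction is considered.
        conf.setInt("hbase.hstore.compactionThreshold", 3);
        // Store-file count at which HBase blocks updates until compaction catches up.
        conf.setInt("hbase.hstore.blockingStoreFiles", 16);

        System.out.println("min files to compact = "
            + conf.getInt("hbase.hstore.compactionThreshold", 3));
        System.out.println("blocking store files = "
            + conf.getInt("hbase.hstore.blockingStoreFiles", 16));
    }
}
```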
2023-07-21 13:20:29,453 INFO [RS:0;jenkins-hbase16:39771-longCompactions-0] regionserver.HStore(1468): Starting compaction of [hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/cf/21171fc95ec6488cb987b2f4b88ff23f, hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/cf/a3148d6e2c394ec09084bc393fdf3466, hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/cf/14269a023a0c489f96ea37312c01d58e] into tmpdir=hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/.tmp, totalSize=14.4 K 2023-07-21 13:20:29,453 DEBUG [RS:0;jenkins-hbase16:39771-longCompactions-0] compactions.Compactor(207): Compacting 21171fc95ec6488cb987b2f4b88ff23f, keycount=2, bloomtype=ROW, size=4.8 K, encoding=NONE, compression=NONE, seqNum=8799 2023-07-21 13:20:29,453 DEBUG [RS:0;jenkins-hbase16:39771-longCompactions-0] compactions.Compactor(207): Compacting a3148d6e2c394ec09084bc393fdf3466, keycount=2, bloomtype=ROW, size=4.8 K, encoding=NONE, compression=NONE, seqNum=9097 2023-07-21 13:20:29,454 DEBUG [RS:0;jenkins-hbase16:39771-longCompactions-0] compactions.Compactor(207): Compacting 14269a023a0c489f96ea37312c01d58e, keycount=2, bloomtype=ROW, size=4.8 K, encoding=NONE, compression=NONE, seqNum=9395 2023-07-21 13:20:29,472 INFO [RS:0;jenkins-hbase16:39771-longCompactions-0] throttle.PressureAwareThroughputController(145): fb4efbfaac030a4093735b39d65fef7f#cf#compaction#41 average throughput is 0.07 MB/second, slept 0 time(s) and total slept time is 0 ms. 0 active operations remaining, total limit is 50.00 MB/second 2023-07-21 13:20:29,510 DEBUG [RS:0;jenkins-hbase16:39771-longCompactions-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/.tmp/cf/dbe6874b655a4f26b437aae38bfee93d as hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/cf/dbe6874b655a4f26b437aae38bfee93d 2023-07-21 13:20:29,514 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39771] regionserver.HRegion(9158): Flush requested on fb4efbfaac030a4093735b39d65fef7f 2023-07-21 13:20:29,516 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing fb4efbfaac030a4093735b39d65fef7f 1/1 column families, dataSize=20.60 KB heapSize=64.34 KB 2023-07-21 13:20:29,527 INFO [RS:0;jenkins-hbase16:39771-longCompactions-0] regionserver.HStore(1652): Completed compaction of 3 file(s) in fb4efbfaac030a4093735b39d65fef7f/cf of fb4efbfaac030a4093735b39d65fef7f into dbe6874b655a4f26b437aae38bfee93d(size=4.8 K), total size for store is 20.3 K. This selection was in queue for 0sec, and took 0sec to execute. 
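The "[Listener at localhost.localdomain/36547]" entries further down are the test thread itself stepping in at the end of the run: it forces one final flush and then a major compaction of all remaining files (the "major compaction (all files)" record near the end of this excerpt). A minimal sketch of driving the same two operations through the client Admin API against the TestCP table, purely as an illustration; the test itself goes through its mini-cluster utilities rather than this exact call sequence:

```java
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

// Sketch: explicitly flush a table's memstores and then major-compact its
// stores, mirroring the flush + "major compaction (all files)" seen at the
// end of the log. Not the test's own code.
public class FlushThenMajorCompact {
    public static void main(String[] args) throws Exception {
        TableName table = TableName.valueOf("TestCP");
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             Admin admin = conn.getAdmin()) {
            admin.flush(table);        // write the current memstore contents out as HFiles
            admin.majorCompact(table); // request a major compaction of every store file
            // majorCompact is asynchronous on the server side; the test waits via its
            // testing utility, which is omitted here.
        }
    }
}
```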
2023-07-21 13:20:29,527 DEBUG [RS:0;jenkins-hbase16:39771-longCompactions-0] regionserver.HRegion(2289): Compaction status journal for fb4efbfaac030a4093735b39d65fef7f: 2023-07-21 13:20:29,527 INFO [RS:0;jenkins-hbase16:39771-longCompactions-0] regionserver.CompactSplit$CompactionRunner(627): Completed compaction region=TestCP,,1689945619385.fb4efbfaac030a4093735b39d65fef7f., storeName=fb4efbfaac030a4093735b39d65fef7f/cf, priority=10, startTime=1689945629451; duration=0sec 2023-07-21 13:20:29,529 DEBUG [RS:0;jenkins-hbase16:39771-longCompactions-0] regionserver.CompactSplit$CompactionRunner(659): Status compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-07-21 13:20:29,559 DEBUG [RS:0;jenkins-hbase16:39771-shortCompactions-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/.tmp/cf/512046ae0ad64a9a80a85adf6274e936 as hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/cf/512046ae0ad64a9a80a85adf6274e936 2023-07-21 13:20:29,568 INFO [RS:0;jenkins-hbase16:39771-shortCompactions-0] regionserver.HStore(1652): Completed compaction of 3 (all) file(s) in fb4efbfaac030a4093735b39d65fef7f/cf of fb4efbfaac030a4093735b39d65fef7f into 512046ae0ad64a9a80a85adf6274e936(size=6.0 K), total size for store is 10.8 K. This selection was in queue for 0sec, and took 0sec to execute. 2023-07-21 13:20:29,569 DEBUG [RS:0;jenkins-hbase16:39771-shortCompactions-0] regionserver.HRegion(2289): Compaction status journal for fb4efbfaac030a4093735b39d65fef7f: 2023-07-21 13:20:29,569 INFO [RS:0;jenkins-hbase16:39771-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(627): Completed compaction region=TestCP,,1689945619385.fb4efbfaac030a4093735b39d65fef7f., storeName=fb4efbfaac030a4093735b39d65fef7f/cf, priority=13, startTime=1689945628976; duration=0sec 2023-07-21 13:20:29,569 DEBUG [RS:0;jenkins-hbase16:39771-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(659): Status compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-07-21 13:20:29,622 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=20.81 KB at sequenceid=9694 (bloomFilter=true), to=hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/.tmp/cf/a8856d1129974a3f87d9b20d8fd6dde2 2023-07-21 13:20:29,631 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/.tmp/cf/a8856d1129974a3f87d9b20d8fd6dde2 as hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/cf/a8856d1129974a3f87d9b20d8fd6dde2 2023-07-21 13:20:29,655 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/cf/a8856d1129974a3f87d9b20d8fd6dde2, entries=2, sequenceid=9694, filesize=4.8 K 2023-07-21 13:20:29,657 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~20.81 KB/21312, heapSize ~64.98 KB/66544, currentSize=18.70 KB/19152 for 
fb4efbfaac030a4093735b39d65fef7f in 141ms, sequenceid=9694, compaction requested=true 2023-07-21 13:20:29,657 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for fb4efbfaac030a4093735b39d65fef7f: 2023-07-21 13:20:29,658 DEBUG [RS:0;jenkins-hbase16:39771-shortCompactions-0] compactions.SortedCompactionPolicy(75): Selecting compaction from 3 store files, 0 compacting, 3 eligible, 16 blocking 2023-07-21 13:20:29,659 DEBUG [MemStoreFlusher.0] regionserver.CompactSplit(385): Small Compaction requested: system; Because: MemStoreFlusher.0; compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-07-21 13:20:29,660 DEBUG [RS:0;jenkins-hbase16:39771-shortCompactions-0] compactions.ExploringCompactionPolicy(116): Exploring compaction algorithm has selected 3 files of size 15957 starting at candidate #0 after considering 1 permutations with 1 in ratio 2023-07-21 13:20:29,660 DEBUG [RS:0;jenkins-hbase16:39771-shortCompactions-0] regionserver.HStore(1912): fb4efbfaac030a4093735b39d65fef7f/cf is initiating minor compaction (all files) 2023-07-21 13:20:29,660 INFO [RS:0;jenkins-hbase16:39771-shortCompactions-0] regionserver.HRegion(2259): Starting compaction of fb4efbfaac030a4093735b39d65fef7f/cf in TestCP,,1689945619385.fb4efbfaac030a4093735b39d65fef7f. 2023-07-21 13:20:29,661 INFO [RS:0;jenkins-hbase16:39771-shortCompactions-0] regionserver.HStore(1468): Starting compaction of [hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/cf/512046ae0ad64a9a80a85adf6274e936, hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/cf/dbe6874b655a4f26b437aae38bfee93d, hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/cf/a8856d1129974a3f87d9b20d8fd6dde2] into tmpdir=hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/.tmp, totalSize=15.6 K 2023-07-21 13:20:29,661 DEBUG [RS:0;jenkins-hbase16:39771-shortCompactions-0] compactions.Compactor(207): Compacting 512046ae0ad64a9a80a85adf6274e936, keycount=2, bloomtype=ROW, size=6.0 K, encoding=NONE, compression=NONE, seqNum=8480, earliestPutTs=1730504314969088 2023-07-21 13:20:29,662 DEBUG [RS:0;jenkins-hbase16:39771-shortCompactions-0] compactions.Compactor(207): Compacting dbe6874b655a4f26b437aae38bfee93d, keycount=2, bloomtype=ROW, size=4.8 K, encoding=NONE, compression=NONE, seqNum=9395, earliestPutTs=1730504323931136 2023-07-21 13:20:29,662 DEBUG [RS:0;jenkins-hbase16:39771-shortCompactions-0] compactions.Compactor(207): Compacting a8856d1129974a3f87d9b20d8fd6dde2, keycount=2, bloomtype=ROW, size=4.8 K, encoding=NONE, compression=NONE, seqNum=9694, earliestPutTs=1730504324476930 2023-07-21 13:20:29,665 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=39771] regionserver.HRegion(9158): Flush requested on fb4efbfaac030a4093735b39d65fef7f 2023-07-21 13:20:29,667 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing fb4efbfaac030a4093735b39d65fef7f 1/1 column families, dataSize=20.67 KB heapSize=64.56 KB 2023-07-21 13:20:29,706 INFO [RS:0;jenkins-hbase16:39771-shortCompactions-0] throttle.PressureAwareThroughputController(145): fb4efbfaac030a4093735b39d65fef7f#cf#compaction#44 average throughput is 0.07 MB/second, slept 0 time(s) and 
total slept time is 0 ms. 0 active operations remaining, total limit is 50.00 MB/second 2023-07-21 13:20:29,767 DEBUG [RS:0;jenkins-hbase16:39771-shortCompactions-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/.tmp/cf/0f60850dd3a94ae9816725b99e20e07c as hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/cf/0f60850dd3a94ae9816725b99e20e07c 2023-07-21 13:20:29,775 INFO [RS:0;jenkins-hbase16:39771-shortCompactions-0] regionserver.HStore(1652): Completed compaction of 3 (all) file(s) in fb4efbfaac030a4093735b39d65fef7f/cf of fb4efbfaac030a4093735b39d65fef7f into 0f60850dd3a94ae9816725b99e20e07c(size=6.2 K), total size for store is 6.2 K. This selection was in queue for 0sec, and took 0sec to execute. 2023-07-21 13:20:29,776 DEBUG [RS:0;jenkins-hbase16:39771-shortCompactions-0] regionserver.HRegion(2289): Compaction status journal for fb4efbfaac030a4093735b39d65fef7f: 2023-07-21 13:20:29,776 INFO [RS:0;jenkins-hbase16:39771-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(627): Completed compaction region=TestCP,,1689945619385.fb4efbfaac030a4093735b39d65fef7f., storeName=fb4efbfaac030a4093735b39d65fef7f/cf, priority=13, startTime=1689945629658; duration=0sec 2023-07-21 13:20:29,776 DEBUG [RS:0;jenkins-hbase16:39771-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(659): Status compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-07-21 13:20:29,795 DEBUG [Listener at localhost.localdomain/36547] regionserver.HRegion(2404): NOT flushing TestCP,,1689945619385.fb4efbfaac030a4093735b39d65fef7f. 
as already flushing 2023-07-21 13:20:30,141 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=20.88 KB at sequenceid=9996 (bloomFilter=true), to=hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/.tmp/cf/30775b1ae3724974aa2bc750feaab813 2023-07-21 13:20:30,149 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/.tmp/cf/30775b1ae3724974aa2bc750feaab813 as hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/cf/30775b1ae3724974aa2bc750feaab813 2023-07-21 13:20:30,155 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/cf/30775b1ae3724974aa2bc750feaab813, entries=2, sequenceid=9996, filesize=4.8 K 2023-07-21 13:20:30,156 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~20.88 KB/21384, heapSize ~65.20 KB/66768, currentSize=7.80 KB/7992 for fb4efbfaac030a4093735b39d65fef7f in 489ms, sequenceid=9996, compaction requested=false 2023-07-21 13:20:30,156 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for fb4efbfaac030a4093735b39d65fef7f: 2023-07-21 13:20:30,796 INFO [Listener at localhost.localdomain/36547] regionserver.HRegion(2745): Flushing fb4efbfaac030a4093735b39d65fef7f 1/1 column families, dataSize=7.80 KB heapSize=24.53 KB 2023-07-21 13:20:30,816 INFO [Listener at localhost.localdomain/36547] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=7.80 KB at sequenceid=10111 (bloomFilter=true), to=hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/.tmp/cf/1aee5d493b2e4e418f02088879aa446d 2023-07-21 13:20:30,825 DEBUG [Listener at localhost.localdomain/36547] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/.tmp/cf/1aee5d493b2e4e418f02088879aa446d as hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/cf/1aee5d493b2e4e418f02088879aa446d 2023-07-21 13:20:30,830 INFO [Listener at localhost.localdomain/36547] regionserver.HStore(1080): Added hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/cf/1aee5d493b2e4e418f02088879aa446d, entries=2, sequenceid=10111, filesize=4.8 K 2023-07-21 13:20:30,831 INFO [Listener at localhost.localdomain/36547] regionserver.HRegion(2948): Finished flush of dataSize ~7.80 KB/7992, heapSize ~24.52 KB/25104, currentSize=0 B/0 for fb4efbfaac030a4093735b39d65fef7f in 35ms, sequenceid=10111, compaction requested=true 2023-07-21 13:20:30,831 DEBUG [Listener at localhost.localdomain/36547] regionserver.HRegion(2446): Flush status journal for fb4efbfaac030a4093735b39d65fef7f: 2023-07-21 13:20:30,832 DEBUG [Listener at localhost.localdomain/36547] compactions.SortedCompactionPolicy(75): Selecting compaction from 3 store files, 0 
compacting, 3 eligible, 16 blocking 2023-07-21 13:20:30,832 DEBUG [Listener at localhost.localdomain/36547] regionserver.HStore(1912): fb4efbfaac030a4093735b39d65fef7f/cf is initiating major compaction (all files) 2023-07-21 13:20:30,832 INFO [Listener at localhost.localdomain/36547] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-21 13:20:30,832 INFO [Listener at localhost.localdomain/36547] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-21 13:20:30,832 INFO [Listener at localhost.localdomain/36547] regionserver.HRegion(2259): Starting compaction of fb4efbfaac030a4093735b39d65fef7f/cf in TestCP,,1689945619385.fb4efbfaac030a4093735b39d65fef7f. 2023-07-21 13:20:30,832 INFO [Listener at localhost.localdomain/36547] regionserver.HStore(1468): Starting compaction of [hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/cf/0f60850dd3a94ae9816725b99e20e07c, hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/cf/30775b1ae3724974aa2bc750feaab813, hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/cf/1aee5d493b2e4e418f02088879aa446d] into tmpdir=hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/.tmp, totalSize=15.7 K 2023-07-21 13:20:30,833 DEBUG [Listener at localhost.localdomain/36547] compactions.Compactor(207): Compacting 0f60850dd3a94ae9816725b99e20e07c, keycount=2, bloomtype=ROW, size=6.2 K, encoding=NONE, compression=NONE, seqNum=9694, earliestPutTs=1730504314969088 2023-07-21 13:20:30,833 DEBUG [Listener at localhost.localdomain/36547] compactions.Compactor(207): Compacting 30775b1ae3724974aa2bc750feaab813, keycount=2, bloomtype=ROW, size=4.8 K, encoding=NONE, compression=NONE, seqNum=9996, earliestPutTs=1730504324625408 2023-07-21 13:20:30,834 DEBUG [Listener at localhost.localdomain/36547] compactions.Compactor(207): Compacting 1aee5d493b2e4e418f02088879aa446d, keycount=2, bloomtype=ROW, size=4.8 K, encoding=NONE, compression=NONE, seqNum=10111, earliestPutTs=1730504324780032 2023-07-21 13:20:30,844 INFO [Listener at localhost.localdomain/36547] throttle.PressureAwareThroughputController(145): fb4efbfaac030a4093735b39d65fef7f#cf#compaction#46 average throughput is unlimited, slept 0 time(s) and total slept time is 0 ms. 
0 active operations remaining, total limit is 50.00 MB/second 2023-07-21 13:20:30,859 DEBUG [Listener at localhost.localdomain/36547] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/.tmp/cf/5e896fff81ad48679a6bc4b34077a465 as hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/cf/5e896fff81ad48679a6bc4b34077a465 2023-07-21 13:20:30,865 INFO [Listener at localhost.localdomain/36547] regionserver.HStore(1652): Completed major compaction of 3 (all) file(s) in fb4efbfaac030a4093735b39d65fef7f/cf of fb4efbfaac030a4093735b39d65fef7f into 5e896fff81ad48679a6bc4b34077a465(size=6.3 K), total size for store is 6.3 K. This selection was in queue for 0sec, and took 0sec to execute. 2023-07-21 13:20:30,866 DEBUG [Listener at localhost.localdomain/36547] regionserver.HRegion(2289): Compaction status journal for fb4efbfaac030a4093735b39d65fef7f: 2023-07-21 13:20:30,888 INFO [Listener at localhost.localdomain/36547] hbase.ResourceChecker(175): after: coprocessor.example.TestWriteHeavyIncrementObserver#test Thread=438 (was 413) Potentially hanging thread: hconnection-0x64d55ac7-shared-pool-2 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-3 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-621537574_17 at /127.0.0.1:55836 [Waiting for operation #6] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) 
org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-826638641_17 at /127.0.0.1:55776 [Waiting for operation #6] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-826638641_17 at /127.0.0.1:53278 [Waiting for operation #6] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-826638641_17 at /127.0.0.1:53174 [Waiting for operation #3] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) 
java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-826638641_17 at /127.0.0.1:33458 [Waiting for operation #4] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-826638641_17 at /127.0.0.1:49686 [Waiting for operation #3] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x64d55ac7-metaLookup-shared--pool-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-621537574_17 at /127.0.0.1:53272 [Waiting for operation #14] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) 
sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-3-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-826638641_17 at /127.0.0.1:59080 [Waiting for operation #8] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-826638641_17 at /127.0.0.1:49484 [Waiting for operation #2] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) 
org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-826638641_17 at /127.0.0.1:53138 [Waiting for operation #4] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-826638641_17 at /127.0.0.1:49500 [Waiting for operation #6] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-826638641_17 at /127.0.0.1:53214 [Waiting for operation #2] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) 
java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-826638641_17 at /127.0.0.1:49696 [Waiting for operation #6] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-826638641_17 at /127.0.0.1:39768 [Waiting for operation #12] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-621537574_17 at /127.0.0.1:55772 [Waiting for operation #10] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) 
java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:0;jenkins-hbase16:39771-shortCompactions-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.PriorityBlockingQueue.take(PriorityBlockingQueue.java:549) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-826638641_17 at /127.0.0.1:49556 [Waiting for operation #9] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-826638641_17 at /127.0.0.1:53242 [Waiting for operation #4] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-826638641_17 at /127.0.0.1:39710 [Waiting for operation #8] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) 
sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-826638641_17 at /127.0.0.1:49684 [Waiting for operation #2] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-826638641_17 at /127.0.0.1:49672 [Waiting for operation #6] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-826638641_17 at /127.0.0.1:33408 [Waiting for operation #6] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) 
org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer for 'HBase' metrics system java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-826638641_17 at /127.0.0.1:49540 [Waiting for operation #2] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-826638641_17 at /127.0.0.1:33460 [Waiting for operation #12] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-826638641_17 at /127.0.0.1:49560 [Waiting for operation #6] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) 
org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x64d55ac7-shared-pool-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-826638641_17 at /127.0.0.1:49642 [Waiting for operation #6] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=870 (was 724) - OpenFileDescriptor LEAK? -, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=574 (was 426) - SystemLoadAverage LEAK? 
-, ProcessCount=174 (was 175), AvailableMemoryMB=6159 (was 6606) 2023-07-21 13:20:30,889 INFO [Listener at localhost.localdomain/36547] hbase.HBaseTestingUtility(1286): Shutting down minicluster 2023-07-21 13:20:30,889 INFO [Listener at localhost.localdomain/36547] client.ConnectionImplementation(1979): Closing master protocol: MasterService 2023-07-21 13:20:30,890 DEBUG [Listener at localhost.localdomain/36547] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x7ce7db44 to 127.0.0.1:61652 2023-07-21 13:20:30,890 DEBUG [Listener at localhost.localdomain/36547] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-21 13:20:30,891 DEBUG [Listener at localhost.localdomain/36547] util.JVMClusterUtil(237): Shutting down HBase Cluster 2023-07-21 13:20:30,891 DEBUG [Listener at localhost.localdomain/36547] util.JVMClusterUtil(257): Found active master hash=1710625563, stopped=false 2023-07-21 13:20:30,891 INFO [Listener at localhost.localdomain/36547] master.ServerManager(901): Cluster shutdown requested of master=jenkins-hbase16.apache.org,40019,1689945613483 2023-07-21 13:20:30,908 DEBUG [Listener at localhost.localdomain/36547-EventThread] zookeeper.ZKWatcher(600): regionserver:39771-0x1018809df7a0001, quorum=127.0.0.1:61652, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-21 13:20:30,909 INFO [Listener at localhost.localdomain/36547] procedure2.ProcedureExecutor(629): Stopping 2023-07-21 13:20:30,909 DEBUG [Listener at localhost.localdomain/36547-EventThread] zookeeper.ZKWatcher(600): master:40019-0x1018809df7a0000, quorum=127.0.0.1:61652, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-21 13:20:30,910 DEBUG [Listener at localhost.localdomain/36547-EventThread] zookeeper.ZKWatcher(600): master:40019-0x1018809df7a0000, quorum=127.0.0.1:61652, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-21 13:20:30,908 DEBUG [Listener at localhost.localdomain/36547-EventThread] zookeeper.ZKWatcher(600): regionserver:37511-0x1018809df7a0002, quorum=127.0.0.1:61652, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-21 13:20:30,908 DEBUG [Listener at localhost.localdomain/36547-EventThread] zookeeper.ZKWatcher(600): regionserver:41329-0x1018809df7a0003, quorum=127.0.0.1:61652, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-21 13:20:30,910 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:39771-0x1018809df7a0001, quorum=127.0.0.1:61652, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-21 13:20:30,910 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:41329-0x1018809df7a0003, quorum=127.0.0.1:61652, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-21 13:20:30,910 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:37511-0x1018809df7a0002, quorum=127.0.0.1:61652, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-21 13:20:30,910 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:40019-0x1018809df7a0000, quorum=127.0.0.1:61652, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-21 13:20:30,910 DEBUG [Listener at localhost.localdomain/36547] zookeeper.ReadOnlyZKClient(361): Close zookeeper 
connection 0x2c527739 to 127.0.0.1:61652 2023-07-21 13:20:30,910 DEBUG [Listener at localhost.localdomain/36547] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-21 13:20:30,911 INFO [Listener at localhost.localdomain/36547] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase16.apache.org,39771,1689945615330' ***** 2023-07-21 13:20:30,911 INFO [Listener at localhost.localdomain/36547] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-21 13:20:30,911 INFO [Listener at localhost.localdomain/36547] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase16.apache.org,37511,1689945615545' ***** 2023-07-21 13:20:30,911 INFO [RS:0;jenkins-hbase16:39771] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-21 13:20:30,911 INFO [Listener at localhost.localdomain/36547] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-21 13:20:30,911 INFO [Listener at localhost.localdomain/36547] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase16.apache.org,41329,1689945615760' ***** 2023-07-21 13:20:30,911 INFO [RS:1;jenkins-hbase16:37511] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-21 13:20:30,911 INFO [Listener at localhost.localdomain/36547] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-21 13:20:30,912 INFO [RS:2;jenkins-hbase16:41329] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-21 13:20:30,927 INFO [RS:2;jenkins-hbase16:41329] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@675ed0da{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-21 13:20:30,927 INFO [RS:1;jenkins-hbase16:37511] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@51fef4ca{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-21 13:20:30,927 INFO [RS:0;jenkins-hbase16:39771] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@32f5c360{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-21 13:20:30,932 INFO [RS:0;jenkins-hbase16:39771] server.AbstractConnector(383): Stopped ServerConnector@202ec085{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-21 13:20:30,932 INFO [RS:1;jenkins-hbase16:37511] server.AbstractConnector(383): Stopped ServerConnector@70d3ed6{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-21 13:20:30,932 INFO [RS:2;jenkins-hbase16:41329] server.AbstractConnector(383): Stopped ServerConnector@c8ff561{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-21 13:20:30,933 INFO [RS:1;jenkins-hbase16:37511] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-21 13:20:30,933 INFO [RS:0;jenkins-hbase16:39771] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-21 13:20:30,933 INFO [RS:2;jenkins-hbase16:41329] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-21 13:20:30,934 INFO [RS:1;jenkins-hbase16:37511] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@9b82400{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-21 
13:20:30,935 INFO [RS:0;jenkins-hbase16:39771] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@51b50322{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-21 13:20:30,936 INFO [RS:1;jenkins-hbase16:37511] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@742fee4f{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-examples/target/test-data/e5d9fc39-6d34-4875-d01d-25cebd90870c/hadoop.log.dir/,STOPPED} 2023-07-21 13:20:30,935 INFO [RS:2;jenkins-hbase16:41329] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@3a52f866{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-21 13:20:30,935 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-21 13:20:30,937 INFO [RS:0;jenkins-hbase16:39771] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@58a6ace9{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-examples/target/test-data/e5d9fc39-6d34-4875-d01d-25cebd90870c/hadoop.log.dir/,STOPPED} 2023-07-21 13:20:30,938 INFO [RS:2;jenkins-hbase16:41329] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@5cae4174{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-examples/target/test-data/e5d9fc39-6d34-4875-d01d-25cebd90870c/hadoop.log.dir/,STOPPED} 2023-07-21 13:20:30,941 INFO [RS:1;jenkins-hbase16:37511] regionserver.HeapMemoryManager(220): Stopping 2023-07-21 13:20:30,941 INFO [RS:0;jenkins-hbase16:39771] regionserver.HeapMemoryManager(220): Stopping 2023-07-21 13:20:30,941 INFO [RS:2;jenkins-hbase16:41329] regionserver.HeapMemoryManager(220): Stopping 2023-07-21 13:20:30,941 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-21 13:20:30,941 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-21 13:20:30,941 INFO [RS:2;jenkins-hbase16:41329] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-21 13:20:30,941 INFO [RS:0;jenkins-hbase16:39771] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-21 13:20:30,941 INFO [RS:2;jenkins-hbase16:41329] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-21 13:20:30,941 INFO [RS:1;jenkins-hbase16:37511] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-21 13:20:30,941 INFO [RS:0;jenkins-hbase16:39771] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-21 13:20:30,942 INFO [RS:1;jenkins-hbase16:37511] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 
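The entries above show the listener requesting the minicluster shutdown and each of the three region servers (RS:0, RS:1, RS:2) stopping its embedded Jetty info server, session house-keeper, heap memory manager, and flush/snapshot procedure managers. In a JUnit test this whole cascade is normally triggered by one call from an @AfterClass hook; the sketch below is illustrative only (the class name and cluster size are assumptions based on the RS:0/RS:1/RS:2 threads visible here), not the actual test code behind this log.

```java
import org.apache.hadoop.hbase.HBaseTestingUtility;
import org.junit.AfterClass;
import org.junit.BeforeClass;

public class MiniClusterTeardownSketch {

  private static final HBaseTestingUtility UTIL = new HBaseTestingUtility();

  @BeforeClass
  public static void setUp() throws Exception {
    // Three region servers, matching the RS:0 / RS:1 / RS:2 threads in this log.
    UTIL.startMiniCluster(3);
  }

  @AfterClass
  public static void tearDown() throws Exception {
    // Stops the region servers, the master, ZooKeeper and the backing mini DFS,
    // which produces a teardown cascade like the one logged here.
    UTIL.shutdownMiniCluster();
  }
}
```

Everything that follows in the log (region closes, memstore flushes, store-file archiving) happens inside that single shutdownMiniCluster() call.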
2023-07-21 13:20:30,942 INFO [RS:2;jenkins-hbase16:41329] regionserver.HRegionServer(3305): Received CLOSE for 6e3bc236743db613771f2e95c424ea61 2023-07-21 13:20:30,942 INFO [RS:1;jenkins-hbase16:37511] regionserver.HRegionServer(1144): stopping server jenkins-hbase16.apache.org,37511,1689945615545 2023-07-21 13:20:30,942 INFO [RS:0;jenkins-hbase16:39771] regionserver.HRegionServer(3305): Received CLOSE for fb4efbfaac030a4093735b39d65fef7f 2023-07-21 13:20:30,942 DEBUG [RS:1;jenkins-hbase16:37511] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x153e3e98 to 127.0.0.1:61652 2023-07-21 13:20:30,942 DEBUG [RS:1;jenkins-hbase16:37511] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-21 13:20:30,942 INFO [RS:1;jenkins-hbase16:37511] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-21 13:20:30,942 INFO [RS:1;jenkins-hbase16:37511] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-21 13:20:30,942 INFO [RS:1;jenkins-hbase16:37511] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-21 13:20:30,942 INFO [RS:1;jenkins-hbase16:37511] regionserver.HRegionServer(3305): Received CLOSE for 1588230740 2023-07-21 13:20:30,946 INFO [RS:2;jenkins-hbase16:41329] regionserver.HRegionServer(1144): stopping server jenkins-hbase16.apache.org,41329,1689945615760 2023-07-21 13:20:30,946 INFO [RS:0;jenkins-hbase16:39771] regionserver.HRegionServer(1144): stopping server jenkins-hbase16.apache.org,39771,1689945615330 2023-07-21 13:20:30,946 INFO [RS:1;jenkins-hbase16:37511] regionserver.HRegionServer(1474): Waiting on 1 regions to close 2023-07-21 13:20:30,946 DEBUG [RS:2;jenkins-hbase16:41329] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x5a8a7e5a to 127.0.0.1:61652 2023-07-21 13:20:30,948 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-21 13:20:30,948 DEBUG [RS:1;jenkins-hbase16:37511] regionserver.HRegionServer(1478): Online Regions={1588230740=hbase:meta,,1.1588230740} 2023-07-21 13:20:30,948 INFO [RS_CLOSE_META-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-21 13:20:30,947 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(1604): Closing fb4efbfaac030a4093735b39d65fef7f, disabling compactions & flushes 2023-07-21 13:20:30,947 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(1604): Closing 6e3bc236743db613771f2e95c424ea61, disabling compactions & flushes 2023-07-21 13:20:30,946 DEBUG [RS:0;jenkins-hbase16:39771] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x3f21d242 to 127.0.0.1:61652 2023-07-21 13:20:30,948 DEBUG [RS:0;jenkins-hbase16:39771] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-21 13:20:30,948 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1689945618365.6e3bc236743db613771f2e95c424ea61. 2023-07-21 13:20:30,948 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(1626): Closing region TestCP,,1689945619385.fb4efbfaac030a4093735b39d65fef7f. 
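The region being closed on RS:0, TestCP,,1689945619385.fb4efbfaac030a4093735b39d65fef7f., is the table the test wrote to, and the long list of cf store files archived further down is consistent with an increment-heavy workload whose frequent flushes left many small HFiles behind. Below is a hedged sketch of driving such a workload from a plain client; the table name and cf family come from the log, while the connection setup, row keys, and the counter qualifier are illustrative.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Increment;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class WriteHeavyIncrementSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Table table = conn.getTable(TableName.valueOf("TestCP"))) {
      byte[] family = Bytes.toBytes("cf");          // family name seen in the archived paths
      byte[] qualifier = Bytes.toBytes("counter");  // qualifier is a made-up example
      for (int i = 0; i < 1_000; i++) {
        // Spread increments over a handful of rows, as a write-heavy counter workload would.
        table.increment(new Increment(Bytes.toBytes("row-" + (i % 10)))
            .addColumn(family, qualifier, 1L));
      }
    }
  }
}
```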
2023-07-21 13:20:30,948 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-21 13:20:30,948 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-21 13:20:30,948 DEBUG [RS:2;jenkins-hbase16:41329] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-21 13:20:30,948 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-21 13:20:30,949 DEBUG [RS:1;jenkins-hbase16:37511] regionserver.HRegionServer(1504): Waiting on 1588230740 2023-07-21 13:20:30,948 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on TestCP,,1689945619385.fb4efbfaac030a4093735b39d65fef7f. 2023-07-21 13:20:30,948 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1689945618365.6e3bc236743db613771f2e95c424ea61. 2023-07-21 13:20:30,948 INFO [RS:0;jenkins-hbase16:39771] regionserver.HRegionServer(1474): Waiting on 1 regions to close 2023-07-21 13:20:30,949 DEBUG [RS:0;jenkins-hbase16:39771] regionserver.HRegionServer(1478): Online Regions={fb4efbfaac030a4093735b39d65fef7f=TestCP,,1689945619385.fb4efbfaac030a4093735b39d65fef7f.} 2023-07-21 13:20:30,949 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1689945618365.6e3bc236743db613771f2e95c424ea61. after waiting 0 ms 2023-07-21 13:20:30,949 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(1714): Acquired close lock on TestCP,,1689945619385.fb4efbfaac030a4093735b39d65fef7f. after waiting 0 ms 2023-07-21 13:20:30,949 INFO [RS_CLOSE_META-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=2.42 KB heapSize=4.93 KB 2023-07-21 13:20:30,949 INFO [RS:2;jenkins-hbase16:41329] regionserver.HRegionServer(1474): Waiting on 1 regions to close 2023-07-21 13:20:30,949 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(1724): Updates disabled for region TestCP,,1689945619385.fb4efbfaac030a4093735b39d65fef7f. 2023-07-21 13:20:30,949 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1689945618365.6e3bc236743db613771f2e95c424ea61. 
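On close, each region first takes the close lock, disables further compactions and flushes, and then writes out whatever is still in the memstore (2.42 KB across the three hbase:meta families here, 78 B for hbase:namespace just below). The same flush can also be requested explicitly through the Admin API before a shutdown; a minimal sketch follows, assuming a reachable cluster configuration (the test itself would go through the mini-cluster's own connection rather than building one like this).

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class FlushBeforeCloseSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Admin admin = conn.getAdmin()) {
      // Ask the region servers to flush TestCP's memstores to HFiles now,
      // so a later region close has little or nothing left to persist.
      admin.flush(TableName.valueOf("TestCP"));
    }
  }
}
```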
2023-07-21 13:20:30,949 DEBUG [RS:0;jenkins-hbase16:39771] regionserver.HRegionServer(1504): Waiting on fb4efbfaac030a4093735b39d65fef7f 2023-07-21 13:20:30,949 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(2745): Flushing 6e3bc236743db613771f2e95c424ea61 1/1 column families, dataSize=78 B heapSize=488 B 2023-07-21 13:20:30,949 DEBUG [RS:2;jenkins-hbase16:41329] regionserver.HRegionServer(1478): Online Regions={6e3bc236743db613771f2e95c424ea61=hbase:namespace,,1689945618365.6e3bc236743db613771f2e95c424ea61.} 2023-07-21 13:20:30,950 DEBUG [RS:2;jenkins-hbase16:41329] regionserver.HRegionServer(1504): Waiting on 6e3bc236743db613771f2e95c424ea61 2023-07-21 13:20:30,976 INFO [regionserver/jenkins-hbase16:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-21 13:20:30,978 INFO [regionserver/jenkins-hbase16:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-21 13:20:30,978 INFO [regionserver/jenkins-hbase16:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-21 13:20:30,985 DEBUG [StoreCloser-TestCP,,1689945619385.fb4efbfaac030a4093735b39d65fef7f.-1] regionserver.HStore(2712): Moving the files [hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/cf/7cba18fd94164bb2a378395b4ce699fb, hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/cf/9232723d51854d3290863c588c31958f, hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/cf/56824420e26a46bc9fbbafb5ab8a5412, hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/cf/2f9236475c084531be6ad588a5cf35dd, hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/cf/ae107388ed984168bdd65afbc6b15d27, hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/cf/31ca0432c3404a3a8d991ecfc854c344, hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/cf/ebefd65f210b43769839bc9a6b75f310, hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/cf/9babb9a93a7e48fdbf018cdfd7737e7a, hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/cf/e7cb1064c9fb46aa870bb6eef80dff11, hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/cf/440161ac16fd4376b15f16a565b5bbf6, hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/cf/bee41ff7fa614b239582b6473b3bb4ec, hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/cf/dfde4798c32947fda314a53e92572118, 
hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/cf/efa86ce2921b40d2a0c159caeab8e7bf, hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/cf/b4a207a79cbe4d5196d6931d26a4a033, hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/cf/d3d5dfb7432f494ba86637731dc2f773, hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/cf/8ffabce69082428cb41c0ec323104c64, hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/cf/a1ea101ff83440b2b2bbdb89b402f29b, hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/cf/fe2e0fbb980142e5ad2c6fd2a3a8ece8, hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/cf/7760da9d233741828978c2e5a70167fa, hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/cf/d943f185764d4a4aac95b337b4e51f03, hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/cf/9cef2f83fb7a44bfa31bc0f65733cc20, hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/cf/cddd36e1473c46d683f8844240be0b3c, hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/cf/c84c1a4eae0349c59de3d4323dfdf7b9, hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/cf/dc51974fff404c2fae5ce073457d1320, hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/cf/eda583307daf41458d4f1454e02faf92, hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/cf/139ca2e94b51424786a1d03ec9510cac, hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/cf/b6251e10f595433691d0f05b40d6e069, hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/cf/acbd5e9fb4d84e4f97525ab93d158302, hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/cf/71a0db2a51184dbc9e6da966d2435874, hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/cf/86a0974c89e54a279e8c5c6b0ea229b7, hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/cf/99db378d19d543489b27f5a87f5d7132, 
hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/cf/ec59e00d608a42f58171564dd66c94e5, hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/cf/b17f60c594b84f0489b24c7d615f6dc3, hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/cf/e88f6f1b7b9040d1af8141b69a9ea155, hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/cf/633d571cc1c64a989b1e08364b2d8383, hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/cf/2bb0b746af1443ffabfc9755f446c6ed, hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/cf/512046ae0ad64a9a80a85adf6274e936, hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/cf/10bd1f3834a74c5dadc616dea8956bc9, hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/cf/21171fc95ec6488cb987b2f4b88ff23f, hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/cf/a3148d6e2c394ec09084bc393fdf3466, hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/cf/dbe6874b655a4f26b437aae38bfee93d, hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/cf/14269a023a0c489f96ea37312c01d58e, hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/cf/0f60850dd3a94ae9816725b99e20e07c, hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/cf/a8856d1129974a3f87d9b20d8fd6dde2, hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/cf/30775b1ae3724974aa2bc750feaab813, hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/cf/1aee5d493b2e4e418f02088879aa446d] to archive 2023-07-21 13:20:30,987 DEBUG [StoreCloser-TestCP,,1689945619385.fb4efbfaac030a4093735b39d65fef7f.-1] backup.HFileArchiver(360): Archiving compacted files. 
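The "] to archive" at the end of the list above marks the hand-off to HFileArchiver: compacted store files are not deleted on region close but moved under the cluster's archive directory, which is what the following "Archived from FileableStoreFile ... to .../archive/..." entries record one file at a time. The sketch below lists what ends up in that archive location for the TestCP cf family; the paths are copied from this log, and the NameNode address is specific to this particular run.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ListArchivedStoreFiles {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // Archive directory for the TestCP region's cf family, as logged below.
    Path archivedCf = new Path(
        "hdfs://localhost.localdomain:43421/user/jenkins/test-data/"
            + "fcb694ca-0839-a9b2-018f-0aeda16f78ff/archive/data/default/TestCP/"
            + "fb4efbfaac030a4093735b39d65fef7f/cf");
    FileSystem fs = archivedCf.getFileSystem(conf);
    for (FileStatus status : fs.listStatus(archivedCf)) {
      // Each entry is a compacted HFile that HFileArchiver moved out of the live store.
      System.out.println(status.getPath().getName() + "\t" + status.getLen() + " bytes");
    }
  }
}
```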
2023-07-21 13:20:30,996 DEBUG [StoreCloser-TestCP,,1689945619385.fb4efbfaac030a4093735b39d65fef7f.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/cf/7cba18fd94164bb2a378395b4ce699fb to hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/archive/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/cf/7cba18fd94164bb2a378395b4ce699fb 2023-07-21 13:20:30,997 DEBUG [StoreCloser-TestCP,,1689945619385.fb4efbfaac030a4093735b39d65fef7f.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/cf/9232723d51854d3290863c588c31958f to hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/archive/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/cf/9232723d51854d3290863c588c31958f 2023-07-21 13:20:30,999 DEBUG [StoreCloser-TestCP,,1689945619385.fb4efbfaac030a4093735b39d65fef7f.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/cf/56824420e26a46bc9fbbafb5ab8a5412 to hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/archive/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/cf/56824420e26a46bc9fbbafb5ab8a5412 2023-07-21 13:20:31,001 DEBUG [StoreCloser-TestCP,,1689945619385.fb4efbfaac030a4093735b39d65fef7f.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/cf/2f9236475c084531be6ad588a5cf35dd to hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/archive/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/cf/2f9236475c084531be6ad588a5cf35dd 2023-07-21 13:20:31,003 DEBUG [StoreCloser-TestCP,,1689945619385.fb4efbfaac030a4093735b39d65fef7f.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/cf/ae107388ed984168bdd65afbc6b15d27 to hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/archive/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/cf/ae107388ed984168bdd65afbc6b15d27 2023-07-21 13:20:31,005 DEBUG [StoreCloser-TestCP,,1689945619385.fb4efbfaac030a4093735b39d65fef7f.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/cf/31ca0432c3404a3a8d991ecfc854c344 to hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/archive/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/cf/31ca0432c3404a3a8d991ecfc854c344 2023-07-21 13:20:31,007 DEBUG [StoreCloser-TestCP,,1689945619385.fb4efbfaac030a4093735b39d65fef7f.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, 
hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/cf/ebefd65f210b43769839bc9a6b75f310 to hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/archive/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/cf/ebefd65f210b43769839bc9a6b75f310 2023-07-21 13:20:31,009 DEBUG [StoreCloser-TestCP,,1689945619385.fb4efbfaac030a4093735b39d65fef7f.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/cf/9babb9a93a7e48fdbf018cdfd7737e7a to hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/archive/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/cf/9babb9a93a7e48fdbf018cdfd7737e7a 2023-07-21 13:20:31,011 DEBUG [StoreCloser-TestCP,,1689945619385.fb4efbfaac030a4093735b39d65fef7f.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/cf/e7cb1064c9fb46aa870bb6eef80dff11 to hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/archive/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/cf/e7cb1064c9fb46aa870bb6eef80dff11 2023-07-21 13:20:31,013 DEBUG [StoreCloser-TestCP,,1689945619385.fb4efbfaac030a4093735b39d65fef7f.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/cf/440161ac16fd4376b15f16a565b5bbf6 to hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/archive/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/cf/440161ac16fd4376b15f16a565b5bbf6 2023-07-21 13:20:31,014 DEBUG [StoreCloser-TestCP,,1689945619385.fb4efbfaac030a4093735b39d65fef7f.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/cf/bee41ff7fa614b239582b6473b3bb4ec to hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/archive/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/cf/bee41ff7fa614b239582b6473b3bb4ec 2023-07-21 13:20:31,016 DEBUG [StoreCloser-TestCP,,1689945619385.fb4efbfaac030a4093735b39d65fef7f.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/cf/dfde4798c32947fda314a53e92572118 to hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/archive/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/cf/dfde4798c32947fda314a53e92572118 2023-07-21 13:20:31,017 DEBUG [StoreCloser-TestCP,,1689945619385.fb4efbfaac030a4093735b39d65fef7f.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/cf/efa86ce2921b40d2a0c159caeab8e7bf to 
hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/archive/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/cf/efa86ce2921b40d2a0c159caeab8e7bf 2023-07-21 13:20:31,019 DEBUG [StoreCloser-TestCP,,1689945619385.fb4efbfaac030a4093735b39d65fef7f.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/cf/b4a207a79cbe4d5196d6931d26a4a033 to hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/archive/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/cf/b4a207a79cbe4d5196d6931d26a4a033 2023-07-21 13:20:31,020 DEBUG [StoreCloser-TestCP,,1689945619385.fb4efbfaac030a4093735b39d65fef7f.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/cf/d3d5dfb7432f494ba86637731dc2f773 to hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/archive/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/cf/d3d5dfb7432f494ba86637731dc2f773 2023-07-21 13:20:31,022 DEBUG [StoreCloser-TestCP,,1689945619385.fb4efbfaac030a4093735b39d65fef7f.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/cf/8ffabce69082428cb41c0ec323104c64 to hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/archive/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/cf/8ffabce69082428cb41c0ec323104c64 2023-07-21 13:20:31,022 INFO [RS_CLOSE_META-regionserver/jenkins-hbase16:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=2.25 KB at sequenceid=14 (bloomFilter=false), to=hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/hbase/meta/1588230740/.tmp/info/c203d9b0744b426996bd20c8fe61db81 2023-07-21 13:20:31,023 DEBUG [StoreCloser-TestCP,,1689945619385.fb4efbfaac030a4093735b39d65fef7f.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/cf/a1ea101ff83440b2b2bbdb89b402f29b to hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/archive/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/cf/a1ea101ff83440b2b2bbdb89b402f29b 2023-07-21 13:20:31,030 DEBUG [StoreCloser-TestCP,,1689945619385.fb4efbfaac030a4093735b39d65fef7f.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/cf/fe2e0fbb980142e5ad2c6fd2a3a8ece8 to hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/archive/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/cf/fe2e0fbb980142e5ad2c6fd2a3a8ece8 2023-07-21 13:20:31,031 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase16:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=78 B at sequenceid=6 (bloomFilter=true), 
to=hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/hbase/namespace/6e3bc236743db613771f2e95c424ea61/.tmp/info/4f1dea87119a42c385a853af41cfa185 2023-07-21 13:20:31,033 DEBUG [StoreCloser-TestCP,,1689945619385.fb4efbfaac030a4093735b39d65fef7f.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/cf/7760da9d233741828978c2e5a70167fa to hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/archive/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/cf/7760da9d233741828978c2e5a70167fa 2023-07-21 13:20:31,037 DEBUG [StoreCloser-TestCP,,1689945619385.fb4efbfaac030a4093735b39d65fef7f.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/cf/d943f185764d4a4aac95b337b4e51f03 to hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/archive/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/cf/d943f185764d4a4aac95b337b4e51f03 2023-07-21 13:20:31,039 DEBUG [StoreCloser-TestCP,,1689945619385.fb4efbfaac030a4093735b39d65fef7f.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/cf/9cef2f83fb7a44bfa31bc0f65733cc20 to hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/archive/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/cf/9cef2f83fb7a44bfa31bc0f65733cc20 2023-07-21 13:20:31,041 DEBUG [StoreCloser-TestCP,,1689945619385.fb4efbfaac030a4093735b39d65fef7f.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/cf/cddd36e1473c46d683f8844240be0b3c to hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/archive/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/cf/cddd36e1473c46d683f8844240be0b3c 2023-07-21 13:20:31,042 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase16:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/hbase/namespace/6e3bc236743db613771f2e95c424ea61/.tmp/info/4f1dea87119a42c385a853af41cfa185 as hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/hbase/namespace/6e3bc236743db613771f2e95c424ea61/info/4f1dea87119a42c385a853af41cfa185 2023-07-21 13:20:31,043 DEBUG [StoreCloser-TestCP,,1689945619385.fb4efbfaac030a4093735b39d65fef7f.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/cf/c84c1a4eae0349c59de3d4323dfdf7b9 to hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/archive/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/cf/c84c1a4eae0349c59de3d4323dfdf7b9 2023-07-21 13:20:31,045 DEBUG [StoreCloser-TestCP,,1689945619385.fb4efbfaac030a4093735b39d65fef7f.-1] backup.HFileArchiver(582): Archived from 
FileableStoreFile, hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/cf/dc51974fff404c2fae5ce073457d1320 to hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/archive/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/cf/dc51974fff404c2fae5ce073457d1320 2023-07-21 13:20:31,047 DEBUG [StoreCloser-TestCP,,1689945619385.fb4efbfaac030a4093735b39d65fef7f.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/cf/eda583307daf41458d4f1454e02faf92 to hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/archive/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/cf/eda583307daf41458d4f1454e02faf92 2023-07-21 13:20:31,048 DEBUG [StoreCloser-TestCP,,1689945619385.fb4efbfaac030a4093735b39d65fef7f.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/cf/139ca2e94b51424786a1d03ec9510cac to hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/archive/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/cf/139ca2e94b51424786a1d03ec9510cac 2023-07-21 13:20:31,050 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase16:0-0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/hbase/namespace/6e3bc236743db613771f2e95c424ea61/info/4f1dea87119a42c385a853af41cfa185, entries=2, sequenceid=6, filesize=4.8 K 2023-07-21 13:20:31,050 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~78 B/78, heapSize ~472 B/472, currentSize=0 B/0 for 6e3bc236743db613771f2e95c424ea61 in 101ms, sequenceid=6, compaction requested=false 2023-07-21 13:20:31,052 DEBUG [StoreCloser-TestCP,,1689945619385.fb4efbfaac030a4093735b39d65fef7f.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/cf/b6251e10f595433691d0f05b40d6e069 to hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/archive/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/cf/b6251e10f595433691d0f05b40d6e069 2023-07-21 13:20:31,053 DEBUG [StoreCloser-TestCP,,1689945619385.fb4efbfaac030a4093735b39d65fef7f.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/cf/acbd5e9fb4d84e4f97525ab93d158302 to hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/archive/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/cf/acbd5e9fb4d84e4f97525ab93d158302 2023-07-21 13:20:31,055 DEBUG [StoreCloser-TestCP,,1689945619385.fb4efbfaac030a4093735b39d65fef7f.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/cf/71a0db2a51184dbc9e6da966d2435874 to 
hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/archive/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/cf/71a0db2a51184dbc9e6da966d2435874 2023-07-21 13:20:31,056 DEBUG [StoreCloser-TestCP,,1689945619385.fb4efbfaac030a4093735b39d65fef7f.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/cf/86a0974c89e54a279e8c5c6b0ea229b7 to hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/archive/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/cf/86a0974c89e54a279e8c5c6b0ea229b7 2023-07-21 13:20:31,057 DEBUG [StoreCloser-TestCP,,1689945619385.fb4efbfaac030a4093735b39d65fef7f.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/cf/99db378d19d543489b27f5a87f5d7132 to hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/archive/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/cf/99db378d19d543489b27f5a87f5d7132 2023-07-21 13:20:31,059 DEBUG [StoreCloser-TestCP,,1689945619385.fb4efbfaac030a4093735b39d65fef7f.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/cf/ec59e00d608a42f58171564dd66c94e5 to hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/archive/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/cf/ec59e00d608a42f58171564dd66c94e5 2023-07-21 13:20:31,060 DEBUG [StoreCloser-TestCP,,1689945619385.fb4efbfaac030a4093735b39d65fef7f.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/cf/b17f60c594b84f0489b24c7d615f6dc3 to hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/archive/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/cf/b17f60c594b84f0489b24c7d615f6dc3 2023-07-21 13:20:31,062 DEBUG [StoreCloser-TestCP,,1689945619385.fb4efbfaac030a4093735b39d65fef7f.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/cf/e88f6f1b7b9040d1af8141b69a9ea155 to hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/archive/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/cf/e88f6f1b7b9040d1af8141b69a9ea155 2023-07-21 13:20:31,065 DEBUG [StoreCloser-TestCP,,1689945619385.fb4efbfaac030a4093735b39d65fef7f.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/cf/633d571cc1c64a989b1e08364b2d8383 to hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/archive/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/cf/633d571cc1c64a989b1e08364b2d8383 2023-07-21 13:20:31,068 INFO [RS_CLOSE_META-regionserver/jenkins-hbase16:0-0] regionserver.DefaultStoreFlusher(82): 
Flushed memstore data size=170 B at sequenceid=14 (bloomFilter=false), to=hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/hbase/meta/1588230740/.tmp/table/15b32c36c2a540249b5c87a2e65f0eb2 2023-07-21 13:20:31,068 DEBUG [StoreCloser-TestCP,,1689945619385.fb4efbfaac030a4093735b39d65fef7f.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/cf/2bb0b746af1443ffabfc9755f446c6ed to hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/archive/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/cf/2bb0b746af1443ffabfc9755f446c6ed 2023-07-21 13:20:31,069 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase16:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/hbase/namespace/6e3bc236743db613771f2e95c424ea61/recovered.edits/9.seqid, newMaxSeqId=9, maxSeqId=1 2023-07-21 13:20:31,077 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(1838): Closed hbase:namespace,,1689945618365.6e3bc236743db613771f2e95c424ea61. 2023-07-21 13:20:31,077 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(1558): Region close journal for 6e3bc236743db613771f2e95c424ea61: 2023-07-21 13:20:31,077 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase16:0-0] handler.CloseRegionHandler(117): Closed hbase:namespace,,1689945618365.6e3bc236743db613771f2e95c424ea61. 2023-07-21 13:20:31,078 DEBUG [StoreCloser-TestCP,,1689945619385.fb4efbfaac030a4093735b39d65fef7f.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/cf/512046ae0ad64a9a80a85adf6274e936 to hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/archive/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/cf/512046ae0ad64a9a80a85adf6274e936 2023-07-21 13:20:31,079 DEBUG [StoreCloser-TestCP,,1689945619385.fb4efbfaac030a4093735b39d65fef7f.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/cf/10bd1f3834a74c5dadc616dea8956bc9 to hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/archive/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/cf/10bd1f3834a74c5dadc616dea8956bc9 2023-07-21 13:20:31,081 DEBUG [StoreCloser-TestCP,,1689945619385.fb4efbfaac030a4093735b39d65fef7f.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/cf/21171fc95ec6488cb987b2f4b88ff23f to hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/archive/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/cf/21171fc95ec6488cb987b2f4b88ff23f 2023-07-21 13:20:31,082 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase16:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/hbase/meta/1588230740/.tmp/info/c203d9b0744b426996bd20c8fe61db81 as 
hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/hbase/meta/1588230740/info/c203d9b0744b426996bd20c8fe61db81 2023-07-21 13:20:31,083 DEBUG [StoreCloser-TestCP,,1689945619385.fb4efbfaac030a4093735b39d65fef7f.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/cf/a3148d6e2c394ec09084bc393fdf3466 to hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/archive/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/cf/a3148d6e2c394ec09084bc393fdf3466 2023-07-21 13:20:31,084 DEBUG [StoreCloser-TestCP,,1689945619385.fb4efbfaac030a4093735b39d65fef7f.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/cf/dbe6874b655a4f26b437aae38bfee93d to hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/archive/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/cf/dbe6874b655a4f26b437aae38bfee93d 2023-07-21 13:20:31,086 DEBUG [StoreCloser-TestCP,,1689945619385.fb4efbfaac030a4093735b39d65fef7f.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/cf/14269a023a0c489f96ea37312c01d58e to hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/archive/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/cf/14269a023a0c489f96ea37312c01d58e 2023-07-21 13:20:31,087 DEBUG [StoreCloser-TestCP,,1689945619385.fb4efbfaac030a4093735b39d65fef7f.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/cf/0f60850dd3a94ae9816725b99e20e07c to hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/archive/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/cf/0f60850dd3a94ae9816725b99e20e07c 2023-07-21 13:20:31,088 INFO [RS_CLOSE_META-regionserver/jenkins-hbase16:0-0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/hbase/meta/1588230740/info/c203d9b0744b426996bd20c8fe61db81, entries=20, sequenceid=14, filesize=6.9 K 2023-07-21 13:20:31,088 DEBUG [StoreCloser-TestCP,,1689945619385.fb4efbfaac030a4093735b39d65fef7f.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/cf/a8856d1129974a3f87d9b20d8fd6dde2 to hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/archive/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/cf/a8856d1129974a3f87d9b20d8fd6dde2 2023-07-21 13:20:31,088 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase16:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/hbase/meta/1588230740/.tmp/table/15b32c36c2a540249b5c87a2e65f0eb2 as 
hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/hbase/meta/1588230740/table/15b32c36c2a540249b5c87a2e65f0eb2 2023-07-21 13:20:31,089 DEBUG [StoreCloser-TestCP,,1689945619385.fb4efbfaac030a4093735b39d65fef7f.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/cf/30775b1ae3724974aa2bc750feaab813 to hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/archive/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/cf/30775b1ae3724974aa2bc750feaab813 2023-07-21 13:20:31,091 DEBUG [StoreCloser-TestCP,,1689945619385.fb4efbfaac030a4093735b39d65fef7f.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/cf/1aee5d493b2e4e418f02088879aa446d to hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/archive/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/cf/1aee5d493b2e4e418f02088879aa446d 2023-07-21 13:20:31,094 INFO [RS_CLOSE_META-regionserver/jenkins-hbase16:0-0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/hbase/meta/1588230740/table/15b32c36c2a540249b5c87a2e65f0eb2, entries=4, sequenceid=14, filesize=4.7 K 2023-07-21 13:20:31,095 INFO [RS_CLOSE_META-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~2.42 KB/2473, heapSize ~4.65 KB/4760, currentSize=0 B/0 for 1588230740 in 146ms, sequenceid=14, compaction requested=false 2023-07-21 13:20:31,106 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase16:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/hbase/meta/1588230740/recovered.edits/17.seqid, newMaxSeqId=17, maxSeqId=1 2023-07-21 13:20:31,107 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase16:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-21 13:20:31,108 INFO [RS_CLOSE_META-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-21 13:20:31,108 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-21 13:20:31,108 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase16:0-0] handler.CloseRegionHandler(117): Closed hbase:meta,,1.1588230740 2023-07-21 13:20:31,123 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase16:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/data/default/TestCP/fb4efbfaac030a4093735b39d65fef7f/recovered.edits/10115.seqid, newMaxSeqId=10115, maxSeqId=1 2023-07-21 13:20:31,123 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase16:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.example.WriteHeavyIncrementObserver 2023-07-21 13:20:31,124 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(1838): Closed TestCP,,1689945619385.fb4efbfaac030a4093735b39d65fef7f. 
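The entries above trace the TestCP region closing down: the StoreCloser moves each store file from the region's cf directory into the archive tree, the hbase:meta and hbase:namespace memstores are flushed and committed, and the WriteHeavyIncrementObserver coprocessor is stopped before the region itself is closed. For orientation only, a minimal sketch of the kind of increment-heavy client workload that drives this test; the table name and column family are taken from the log paths, while the qualifier, row key, and loop count are assumptions, not read from the test source:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Increment;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class IncrementWorkloadSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Table table = conn.getTable(TableName.valueOf("TestCP"))) {
      byte[] cf = Bytes.toBytes("cf");             // column family seen in the archived paths above
      byte[] qualifier = Bytes.toBytes("counter"); // assumed qualifier, not from the log
      byte[] row = Bytes.toBytes("row-0");         // assumed row key, not from the log
      for (int i = 0; i < 10_000; i++) {
        // Under a write-heavy increment observer each increment is persisted as a small
        // delta, so sustained traffic produces many HFiles like those archived above.
        table.increment(new Increment(row).addColumn(cf, qualifier, 1L));
      }
    }
  }
}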
2023-07-21 13:20:31,124 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase16:0-0] regionserver.HRegion(1558): Region close journal for fb4efbfaac030a4093735b39d65fef7f: 2023-07-21 13:20:31,125 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase16:0-0] handler.CloseRegionHandler(117): Closed TestCP,,1689945619385.fb4efbfaac030a4093735b39d65fef7f. 2023-07-21 13:20:31,149 INFO [RS:1;jenkins-hbase16:37511] regionserver.HRegionServer(1170): stopping server jenkins-hbase16.apache.org,37511,1689945615545; all regions closed. 2023-07-21 13:20:31,149 INFO [RS:0;jenkins-hbase16:39771] regionserver.HRegionServer(1170): stopping server jenkins-hbase16.apache.org,39771,1689945615330; all regions closed. 2023-07-21 13:20:31,150 INFO [RS:2;jenkins-hbase16:41329] regionserver.HRegionServer(1170): stopping server jenkins-hbase16.apache.org,41329,1689945615760; all regions closed. 2023-07-21 13:20:31,160 DEBUG [RS:0;jenkins-hbase16:39771] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/oldWALs 2023-07-21 13:20:31,160 DEBUG [RS:2;jenkins-hbase16:41329] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/oldWALs 2023-07-21 13:20:31,160 INFO [RS:0;jenkins-hbase16:39771] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase16.apache.org%2C39771%2C1689945615330:(num 1689945617925) 2023-07-21 13:20:31,160 DEBUG [RS:1;jenkins-hbase16:37511] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/oldWALs 2023-07-21 13:20:31,160 INFO [RS:2;jenkins-hbase16:41329] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase16.apache.org%2C41329%2C1689945615760:(num 1689945617929) 2023-07-21 13:20:31,160 INFO [RS:1;jenkins-hbase16:37511] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase16.apache.org%2C37511%2C1689945615545.meta:.meta(num 1689945618041) 2023-07-21 13:20:31,160 DEBUG [RS:0;jenkins-hbase16:39771] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-21 13:20:31,161 DEBUG [RS:2;jenkins-hbase16:41329] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-21 13:20:31,161 INFO [RS:0;jenkins-hbase16:39771] regionserver.LeaseManager(133): Closed leases 2023-07-21 13:20:31,161 INFO [RS:2;jenkins-hbase16:41329] regionserver.LeaseManager(133): Closed leases 2023-07-21 13:20:31,161 INFO [RS:2;jenkins-hbase16:41329] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase16:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-07-21 13:20:31,161 INFO [RS:0;jenkins-hbase16:39771] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase16:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-07-21 13:20:31,162 INFO [RS:0;jenkins-hbase16:39771] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-21 13:20:31,162 INFO [regionserver/jenkins-hbase16:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-21 13:20:31,162 INFO [RS:0;jenkins-hbase16:39771] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 
2023-07-21 13:20:31,162 INFO [RS:0;jenkins-hbase16:39771] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-21 13:20:31,162 INFO [regionserver/jenkins-hbase16:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-21 13:20:31,162 INFO [RS:2;jenkins-hbase16:41329] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-21 13:20:31,163 INFO [RS:0;jenkins-hbase16:39771] ipc.NettyRpcServer(158): Stopping server on /188.40.62.62:39771 2023-07-21 13:20:31,163 INFO [RS:2;jenkins-hbase16:41329] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-21 13:20:31,164 INFO [RS:2;jenkins-hbase16:41329] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-21 13:20:31,164 INFO [RS:2;jenkins-hbase16:41329] ipc.NettyRpcServer(158): Stopping server on /188.40.62.62:41329 2023-07-21 13:20:31,170 DEBUG [RS:1;jenkins-hbase16:37511] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/oldWALs 2023-07-21 13:20:31,170 INFO [RS:1;jenkins-hbase16:37511] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase16.apache.org%2C37511%2C1689945615545:(num 1689945617919) 2023-07-21 13:20:31,170 DEBUG [RS:1;jenkins-hbase16:37511] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-21 13:20:31,170 INFO [RS:1;jenkins-hbase16:37511] regionserver.LeaseManager(133): Closed leases 2023-07-21 13:20:31,170 INFO [RS:1;jenkins-hbase16:37511] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase16:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-07-21 13:20:31,170 INFO [regionserver/jenkins-hbase16:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 
2023-07-21 13:20:31,171 INFO [RS:1;jenkins-hbase16:37511] ipc.NettyRpcServer(158): Stopping server on /188.40.62.62:37511 2023-07-21 13:20:31,175 DEBUG [Listener at localhost.localdomain/36547-EventThread] zookeeper.ZKWatcher(600): regionserver:39771-0x1018809df7a0001, quorum=127.0.0.1:61652, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase16.apache.org,41329,1689945615760 2023-07-21 13:20:31,175 DEBUG [Listener at localhost.localdomain/36547-EventThread] zookeeper.ZKWatcher(600): regionserver:37511-0x1018809df7a0002, quorum=127.0.0.1:61652, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase16.apache.org,41329,1689945615760 2023-07-21 13:20:31,175 DEBUG [Listener at localhost.localdomain/36547-EventThread] zookeeper.ZKWatcher(600): master:40019-0x1018809df7a0000, quorum=127.0.0.1:61652, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-21 13:20:31,175 DEBUG [Listener at localhost.localdomain/36547-EventThread] zookeeper.ZKWatcher(600): regionserver:41329-0x1018809df7a0003, quorum=127.0.0.1:61652, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase16.apache.org,41329,1689945615760 2023-07-21 13:20:31,175 DEBUG [Listener at localhost.localdomain/36547-EventThread] zookeeper.ZKWatcher(600): regionserver:37511-0x1018809df7a0002, quorum=127.0.0.1:61652, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-21 13:20:31,175 DEBUG [Listener at localhost.localdomain/36547-EventThread] zookeeper.ZKWatcher(600): regionserver:39771-0x1018809df7a0001, quorum=127.0.0.1:61652, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-21 13:20:31,175 DEBUG [Listener at localhost.localdomain/36547-EventThread] zookeeper.ZKWatcher(600): regionserver:41329-0x1018809df7a0003, quorum=127.0.0.1:61652, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-21 13:20:31,184 DEBUG [Listener at localhost.localdomain/36547-EventThread] zookeeper.ZKWatcher(600): regionserver:41329-0x1018809df7a0003, quorum=127.0.0.1:61652, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase16.apache.org,39771,1689945615330 2023-07-21 13:20:31,184 DEBUG [Listener at localhost.localdomain/36547-EventThread] zookeeper.ZKWatcher(600): regionserver:37511-0x1018809df7a0002, quorum=127.0.0.1:61652, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase16.apache.org,39771,1689945615330 2023-07-21 13:20:31,184 DEBUG [Listener at localhost.localdomain/36547-EventThread] zookeeper.ZKWatcher(600): regionserver:37511-0x1018809df7a0002, quorum=127.0.0.1:61652, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase16.apache.org,37511,1689945615545 2023-07-21 13:20:31,184 DEBUG [Listener at localhost.localdomain/36547-EventThread] zookeeper.ZKWatcher(600): regionserver:39771-0x1018809df7a0001, quorum=127.0.0.1:61652, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase16.apache.org,39771,1689945615330 2023-07-21 13:20:31,185 DEBUG [Listener at localhost.localdomain/36547-EventThread] zookeeper.ZKWatcher(600): 
regionserver:39771-0x1018809df7a0001, quorum=127.0.0.1:61652, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase16.apache.org,37511,1689945615545 2023-07-21 13:20:31,185 DEBUG [Listener at localhost.localdomain/36547-EventThread] zookeeper.ZKWatcher(600): regionserver:41329-0x1018809df7a0003, quorum=127.0.0.1:61652, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase16.apache.org,37511,1689945615545 2023-07-21 13:20:31,192 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase16.apache.org,39771,1689945615330] 2023-07-21 13:20:31,192 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase16.apache.org,39771,1689945615330; numProcessing=1 2023-07-21 13:20:31,208 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase16.apache.org,39771,1689945615330 already deleted, retry=false 2023-07-21 13:20:31,208 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase16.apache.org,39771,1689945615330 expired; onlineServers=2 2023-07-21 13:20:31,208 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase16.apache.org,37511,1689945615545] 2023-07-21 13:20:31,208 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase16.apache.org,37511,1689945615545; numProcessing=2 2023-07-21 13:20:31,217 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase16.apache.org,37511,1689945615545 already deleted, retry=false 2023-07-21 13:20:31,217 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase16.apache.org,37511,1689945615545 expired; onlineServers=1 2023-07-21 13:20:31,217 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase16.apache.org,41329,1689945615760] 2023-07-21 13:20:31,217 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase16.apache.org,41329,1689945615760; numProcessing=3 2023-07-21 13:20:31,225 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase16.apache.org,41329,1689945615760 already deleted, retry=false 2023-07-21 13:20:31,225 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase16.apache.org,41329,1689945615760 expired; onlineServers=0 2023-07-21 13:20:31,225 INFO [RegionServerTracker-0] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase16.apache.org,40019,1689945613483' ***** 2023-07-21 13:20:31,225 INFO [RegionServerTracker-0] regionserver.HRegionServer(2311): STOPPED: Cluster shutdown set; onlineServer=0 2023-07-21 13:20:31,226 DEBUG [M:0;jenkins-hbase16:40019] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@24cb1920, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase16.apache.org/188.40.62.62:0 2023-07-21 13:20:31,226 INFO [M:0;jenkins-hbase16:40019] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-21 13:20:31,231 INFO [M:0;jenkins-hbase16:40019] handler.ContextHandler(1159): Stopped 
o.a.h.t.o.e.j.w.WebAppContext@5e70675{master,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/master} 2023-07-21 13:20:31,232 INFO [M:0;jenkins-hbase16:40019] server.AbstractConnector(383): Stopped ServerConnector@70928ca1{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-21 13:20:31,232 INFO [M:0;jenkins-hbase16:40019] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-21 13:20:31,233 INFO [M:0;jenkins-hbase16:40019] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@d35e35c{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-21 13:20:31,234 DEBUG [Listener at localhost.localdomain/36547-EventThread] zookeeper.ZKWatcher(600): master:40019-0x1018809df7a0000, quorum=127.0.0.1:61652, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/master 2023-07-21 13:20:31,234 INFO [M:0;jenkins-hbase16:40019] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@55824320{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-examples/target/test-data/e5d9fc39-6d34-4875-d01d-25cebd90870c/hadoop.log.dir/,STOPPED} 2023-07-21 13:20:31,234 DEBUG [Listener at localhost.localdomain/36547-EventThread] zookeeper.ZKWatcher(600): master:40019-0x1018809df7a0000, quorum=127.0.0.1:61652, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-21 13:20:31,235 INFO [M:0;jenkins-hbase16:40019] regionserver.HRegionServer(1144): stopping server jenkins-hbase16.apache.org,40019,1689945613483 2023-07-21 13:20:31,235 INFO [M:0;jenkins-hbase16:40019] regionserver.HRegionServer(1170): stopping server jenkins-hbase16.apache.org,40019,1689945613483; all regions closed. 2023-07-21 13:20:31,235 DEBUG [M:0;jenkins-hbase16:40019] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-21 13:20:31,235 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:40019-0x1018809df7a0000, quorum=127.0.0.1:61652, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-21 13:20:31,235 INFO [M:0;jenkins-hbase16:40019] master.HMaster(1491): Stopping master jetty server 2023-07-21 13:20:31,237 INFO [M:0;jenkins-hbase16:40019] server.AbstractConnector(383): Stopped ServerConnector@2cb0d232{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-21 13:20:31,237 DEBUG [M:0;jenkins-hbase16:40019] cleaner.LogCleaner(198): Cancelling LogCleaner 2023-07-21 13:20:31,237 WARN [OldWALsCleaner-0] cleaner.LogCleaner(186): Interrupted while cleaning old WALs, will try to clean it next round. Exiting. 
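The ZooKeeper NodeDeleted and NodeChildrenChanged events in the entries above fire as each region server's ephemeral znode under /hbase/rs disappears during shutdown, which is what RegionServerTracker then processes as server expirations. A stand-alone sketch of observing those same children with the plain ZooKeeper client; the quorum address is taken from the log, while the session timeout and the wait are arbitrary assumptions:

import java.util.List;
import org.apache.zookeeper.WatchedEvent;
import org.apache.zookeeper.Watcher;
import org.apache.zookeeper.ZooKeeper;

public class RsNodeWatchSketch {
  public static void main(String[] args) throws Exception {
    Watcher watcher = (WatchedEvent event) ->
        System.out.println("Event: " + event.getType() + " on " + event.getPath());
    ZooKeeper zk = new ZooKeeper("127.0.0.1:61652", 30_000, watcher);
    // Registering the watch means NodeChildrenChanged fires when a region server's
    // ephemeral node is created or deleted, as seen in the shutdown log above.
    List<String> servers = zk.getChildren("/hbase/rs", true);
    servers.forEach(s -> System.out.println("live region server: " + s));
    Thread.sleep(60_000); // keep the session open long enough to see events arrive
    zk.close();
  }
}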
2023-07-21 13:20:31,237 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster-HFileCleaner.large.0-1689945617398] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase16:0:becomeActiveMaster-HFileCleaner.large.0-1689945617398,5,FailOnTimeoutGroup] 2023-07-21 13:20:31,237 DEBUG [master/jenkins-hbase16:0:becomeActiveMaster-HFileCleaner.small.0-1689945617407] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase16:0:becomeActiveMaster-HFileCleaner.small.0-1689945617407,5,FailOnTimeoutGroup] 2023-07-21 13:20:31,237 DEBUG [M:0;jenkins-hbase16:40019] cleaner.HFileCleaner(317): Stopping file delete threads 2023-07-21 13:20:31,238 INFO [M:0;jenkins-hbase16:40019] master.MasterMobCompactionThread(168): Waiting for Mob Compaction Thread to finish... 2023-07-21 13:20:31,238 INFO [M:0;jenkins-hbase16:40019] master.MasterMobCompactionThread(168): Waiting for Region Server Mob Compaction Thread to finish... 2023-07-21 13:20:31,238 INFO [M:0;jenkins-hbase16:40019] hbase.ChoreService(369): Chore service for: master/jenkins-hbase16:0 had [] on shutdown 2023-07-21 13:20:31,238 DEBUG [M:0;jenkins-hbase16:40019] master.HMaster(1512): Stopping service threads 2023-07-21 13:20:31,238 INFO [M:0;jenkins-hbase16:40019] procedure2.RemoteProcedureDispatcher(119): Stopping procedure remote dispatcher 2023-07-21 13:20:31,238 ERROR [M:0;jenkins-hbase16:40019] procedure2.ProcedureExecutor(653): ThreadGroup java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] contains running threads; null: See STDOUT java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] Thread[HFileArchiver-1,5,PEWorkerGroup] Thread[HFileArchiver-2,5,PEWorkerGroup] 2023-07-21 13:20:31,239 INFO [M:0;jenkins-hbase16:40019] region.RegionProcedureStore(113): Stopping the Region Procedure Store, isAbort=false 2023-07-21 13:20:31,239 DEBUG [normalizer-worker-0] normalizer.RegionNormalizerWorker(174): interrupt detected. terminating. 2023-07-21 13:20:31,240 DEBUG [M:0;jenkins-hbase16:40019] zookeeper.ZKUtil(398): master:40019-0x1018809df7a0000, quorum=127.0.0.1:61652, baseZNode=/hbase Unable to get data of znode /hbase/master because node does not exist (not an error) 2023-07-21 13:20:31,240 WARN [M:0;jenkins-hbase16:40019] master.ActiveMasterManager(326): Failed get of master address: java.io.IOException: Can't get master address from ZooKeeper; znode data == null 2023-07-21 13:20:31,240 INFO [M:0;jenkins-hbase16:40019] assignment.AssignmentManager(315): Stopping assignment manager 2023-07-21 13:20:31,240 INFO [M:0;jenkins-hbase16:40019] region.MasterRegion(167): Closing local region {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, isAbort=false 2023-07-21 13:20:31,241 DEBUG [M:0;jenkins-hbase16:40019] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-07-21 13:20:31,241 INFO [M:0;jenkins-hbase16:40019] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-21 13:20:31,241 DEBUG [M:0;jenkins-hbase16:40019] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-21 13:20:31,241 DEBUG [M:0;jenkins-hbase16:40019] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-07-21 13:20:31,241 DEBUG [M:0;jenkins-hbase16:40019] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
2023-07-21 13:20:31,241 INFO [M:0;jenkins-hbase16:40019] regionserver.HRegion(2745): Flushing 1595e783b53d99cd5eef43b6debb2682 1/1 column families, dataSize=37.97 KB heapSize=45.63 KB 2023-07-21 13:20:31,257 INFO [M:0;jenkins-hbase16:40019] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=37.97 KB at sequenceid=91 (bloomFilter=true), to=hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/d0f15a8b06934471a854bf08c622ce2d 2023-07-21 13:20:31,262 DEBUG [M:0;jenkins-hbase16:40019] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/d0f15a8b06934471a854bf08c622ce2d as hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/d0f15a8b06934471a854bf08c622ce2d 2023-07-21 13:20:31,268 INFO [M:0;jenkins-hbase16:40019] regionserver.HStore(1080): Added hdfs://localhost.localdomain:43421/user/jenkins/test-data/fcb694ca-0839-a9b2-018f-0aeda16f78ff/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/d0f15a8b06934471a854bf08c622ce2d, entries=11, sequenceid=91, filesize=7.1 K 2023-07-21 13:20:31,269 INFO [M:0;jenkins-hbase16:40019] regionserver.HRegion(2948): Finished flush of dataSize ~37.97 KB/38882, heapSize ~45.61 KB/46704, currentSize=0 B/0 for 1595e783b53d99cd5eef43b6debb2682 in 27ms, sequenceid=91, compaction requested=false 2023-07-21 13:20:31,270 INFO [M:0;jenkins-hbase16:40019] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-21 13:20:31,270 DEBUG [M:0;jenkins-hbase16:40019] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-21 13:20:31,273 INFO [master:store-WAL-Roller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-21 13:20:31,273 INFO [M:0;jenkins-hbase16:40019] flush.MasterFlushTableProcedureManager(83): stop: server shutting down. 2023-07-21 13:20:31,274 INFO [M:0;jenkins-hbase16:40019] ipc.NettyRpcServer(158): Stopping server on /188.40.62.62:40019 2023-07-21 13:20:31,283 DEBUG [M:0;jenkins-hbase16:40019] zookeeper.RecoverableZooKeeper(172): Node /hbase/rs/jenkins-hbase16.apache.org,40019,1689945613483 already deleted, retry=false 2023-07-21 13:20:31,503 DEBUG [Listener at localhost.localdomain/36547-EventThread] zookeeper.ZKWatcher(600): master:40019-0x1018809df7a0000, quorum=127.0.0.1:61652, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-21 13:20:31,503 INFO [M:0;jenkins-hbase16:40019] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase16.apache.org,40019,1689945613483; zookeeper connection closed. 
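The "Committing .tmp/... as ..." lines above are the second half of a flush: the HFile is first written under the store's .tmp directory and only then renamed into its column family directory, so readers never see a partially written file. A minimal HDFS-level sketch of that commit step, using assumed example paths shaped like the master store paths in the log rather than the real store layout:

import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class FlushCommitSketch {
  // Moves a flushed file from the store's .tmp area into its final family directory;
  // the rename is the cheap, atomic "commit" reported in the log entries above.
  static void commitFlushedFile(FileSystem fs, Path tmpFile, Path finalFile) throws IOException {
    fs.mkdirs(finalFile.getParent());
    if (!fs.rename(tmpFile, finalFile)) {
      throw new IOException("Failed to commit " + tmpFile + " as " + finalFile);
    }
  }

  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    FileSystem fs = FileSystem.get(conf);
    // Assumed paths for illustration only.
    Path tmp = new Path("/hbase/MasterData/data/master/store/region/.tmp/proc/flushed-hfile");
    Path dst = new Path("/hbase/MasterData/data/master/store/region/proc/flushed-hfile");
    commitFlushedFile(fs, tmp, dst);
  }
}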
2023-07-21 13:20:31,503 DEBUG [Listener at localhost.localdomain/36547-EventThread] zookeeper.ZKWatcher(600): master:40019-0x1018809df7a0000, quorum=127.0.0.1:61652, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-21 13:20:31,603 DEBUG [Listener at localhost.localdomain/36547-EventThread] zookeeper.ZKWatcher(600): regionserver:37511-0x1018809df7a0002, quorum=127.0.0.1:61652, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-21 13:20:31,603 INFO [RS:1;jenkins-hbase16:37511] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase16.apache.org,37511,1689945615545; zookeeper connection closed. 2023-07-21 13:20:31,603 DEBUG [Listener at localhost.localdomain/36547-EventThread] zookeeper.ZKWatcher(600): regionserver:37511-0x1018809df7a0002, quorum=127.0.0.1:61652, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-21 13:20:31,604 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@1b0ababf] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@1b0ababf 2023-07-21 13:20:31,704 DEBUG [Listener at localhost.localdomain/36547-EventThread] zookeeper.ZKWatcher(600): regionserver:39771-0x1018809df7a0001, quorum=127.0.0.1:61652, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-21 13:20:31,704 DEBUG [Listener at localhost.localdomain/36547-EventThread] zookeeper.ZKWatcher(600): regionserver:39771-0x1018809df7a0001, quorum=127.0.0.1:61652, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-21 13:20:31,704 INFO [RS:0;jenkins-hbase16:39771] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase16.apache.org,39771,1689945615330; zookeeper connection closed. 2023-07-21 13:20:31,704 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@198ad9bb] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@198ad9bb 2023-07-21 13:20:31,804 DEBUG [Listener at localhost.localdomain/36547-EventThread] zookeeper.ZKWatcher(600): regionserver:41329-0x1018809df7a0003, quorum=127.0.0.1:61652, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-21 13:20:31,804 DEBUG [Listener at localhost.localdomain/36547-EventThread] zookeeper.ZKWatcher(600): regionserver:41329-0x1018809df7a0003, quorum=127.0.0.1:61652, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-21 13:20:31,804 INFO [RS:2;jenkins-hbase16:41329] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase16.apache.org,41329,1689945615760; zookeeper connection closed. 
2023-07-21 13:20:31,804 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@55ad5535] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@55ad5535 2023-07-21 13:20:31,805 INFO [Listener at localhost.localdomain/36547] util.JVMClusterUtil(335): Shutdown of 1 master(s) and 3 regionserver(s) complete 2023-07-21 13:20:31,805 WARN [Listener at localhost.localdomain/36547] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-21 13:20:31,857 INFO [Listener at localhost.localdomain/36547] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-21 13:20:31,960 WARN [BP-2143160666-188.40.62.62-1689945608330 heartbeating to localhost.localdomain/127.0.0.1:43421] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-21 13:20:31,960 WARN [BP-2143160666-188.40.62.62-1689945608330 heartbeating to localhost.localdomain/127.0.0.1:43421] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-2143160666-188.40.62.62-1689945608330 (Datanode Uuid 0a09a4fb-0bb2-423c-9922-8302db1fb4b9) service to localhost.localdomain/127.0.0.1:43421 2023-07-21 13:20:31,962 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-examples/target/test-data/e5d9fc39-6d34-4875-d01d-25cebd90870c/cluster_95b46e95-e414-0ee8-e75c-8409d7ae530b/dfs/data/data5/current/BP-2143160666-188.40.62.62-1689945608330] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-21 13:20:31,962 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-examples/target/test-data/e5d9fc39-6d34-4875-d01d-25cebd90870c/cluster_95b46e95-e414-0ee8-e75c-8409d7ae530b/dfs/data/data6/current/BP-2143160666-188.40.62.62-1689945608330] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-21 13:20:31,964 WARN [Listener at localhost.localdomain/36547] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-21 13:20:31,969 INFO [Listener at localhost.localdomain/36547] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-21 13:20:32,074 WARN [BP-2143160666-188.40.62.62-1689945608330 heartbeating to localhost.localdomain/127.0.0.1:43421] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-21 13:20:32,074 WARN [BP-2143160666-188.40.62.62-1689945608330 heartbeating to localhost.localdomain/127.0.0.1:43421] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-2143160666-188.40.62.62-1689945608330 (Datanode Uuid c007261b-aac0-48ab-a6f4-89117298d36b) service to localhost.localdomain/127.0.0.1:43421 2023-07-21 13:20:32,075 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-examples/target/test-data/e5d9fc39-6d34-4875-d01d-25cebd90870c/cluster_95b46e95-e414-0ee8-e75c-8409d7ae530b/dfs/data/data3/current/BP-2143160666-188.40.62.62-1689945608330] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-21 13:20:32,075 WARN 
[refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-examples/target/test-data/e5d9fc39-6d34-4875-d01d-25cebd90870c/cluster_95b46e95-e414-0ee8-e75c-8409d7ae530b/dfs/data/data4/current/BP-2143160666-188.40.62.62-1689945608330] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-21 13:20:32,077 WARN [Listener at localhost.localdomain/36547] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-21 13:20:32,084 INFO [Listener at localhost.localdomain/36547] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-21 13:20:32,188 WARN [BP-2143160666-188.40.62.62-1689945608330 heartbeating to localhost.localdomain/127.0.0.1:43421] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-21 13:20:32,188 WARN [BP-2143160666-188.40.62.62-1689945608330 heartbeating to localhost.localdomain/127.0.0.1:43421] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-2143160666-188.40.62.62-1689945608330 (Datanode Uuid b5f0925d-ca2e-43cc-a0f1-405edcfdcfe2) service to localhost.localdomain/127.0.0.1:43421 2023-07-21 13:20:32,189 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-examples/target/test-data/e5d9fc39-6d34-4875-d01d-25cebd90870c/cluster_95b46e95-e414-0ee8-e75c-8409d7ae530b/dfs/data/data1/current/BP-2143160666-188.40.62.62-1689945608330] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-21 13:20:32,189 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-examples/target/test-data/e5d9fc39-6d34-4875-d01d-25cebd90870c/cluster_95b46e95-e414-0ee8-e75c-8409d7ae530b/dfs/data/data2/current/BP-2143160666-188.40.62.62-1689945608330] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-21 13:20:32,221 INFO [Listener at localhost.localdomain/36547] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost.localdomain:0 2023-07-21 13:20:32,346 INFO [Listener at localhost.localdomain/36547] zookeeper.MiniZooKeeperCluster(344): Shutdown MiniZK cluster with all ZK servers 2023-07-21 13:20:32,430 INFO [Listener at localhost.localdomain/36547] hbase.HBaseTestingUtility(1293): Minicluster is down
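The final entries record the teardown: JVMClusterUtil reports the master and three region servers stopped, the data nodes and the mini ZooKeeper cluster shut down, and HBaseTestingUtility notes that the minicluster is down. A sketch of the lifecycle that brackets a run like this one; the JUnit wiring and the test body are omitted and assumed:

import org.apache.hadoop.hbase.HBaseTestingUtility;

public class MiniClusterLifecycleSketch {
  public static void main(String[] args) throws Exception {
    HBaseTestingUtility util = new HBaseTestingUtility();
    util.startMiniCluster(3); // one master plus three region servers backed by a mini DFS and mini ZK
    try {
      // ... test body would go here: create the table, run the workload, assert results ...
    } finally {
      // Stops HBase, the DFS data nodes and the mini ZooKeeper cluster, ending with
      // the "Minicluster is down" message seen above.
      util.shutdownMiniCluster();
    }
  }
}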