2023-07-28 11:01:57,831 DEBUG [main] hbase.HBaseTestingUtility(342): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-examples/target/test-data/3a0174d0-b9c7-07c8-9f76-818fb1a3ca1a
2023-07-28 11:01:57,845 INFO  [main] hbase.HBaseClassTestRule(94): Test class org.apache.hadoop.hbase.coprocessor.example.TestWriteHeavyIncrementObserver timeout: 13 mins
2023-07-28 11:01:57,868 INFO  [Time-limited test] hbase.HBaseTestingUtility(1068): Starting up minicluster with option: StartMiniClusterOption{numMasters=1, masterClass=null, numRegionServers=3, rsPorts=, rsClass=null, numDataNodes=3, dataNodeHosts=null, numZkServers=1, createRootDir=false, createWALDir=false}
2023-07-28 11:01:57,869 INFO  [Time-limited test] hbase.HBaseZKTestingUtility(82): Created new mini-cluster data directory: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-examples/target/test-data/3a0174d0-b9c7-07c8-9f76-818fb1a3ca1a/cluster_435c5486-8dbd-6b85-9873-6bfa86d696ae, deleteOnExit=true
2023-07-28 11:01:57,869 INFO  [Time-limited test] hbase.HBaseTestingUtility(1082): STARTING DFS
2023-07-28 11:01:57,870 INFO  [Time-limited test] hbase.HBaseTestingUtility(772): Setting test.cache.data to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-examples/target/test-data/3a0174d0-b9c7-07c8-9f76-818fb1a3ca1a/test.cache.data in system properties and HBase conf
2023-07-28 11:01:57,871 INFO  [Time-limited test] hbase.HBaseTestingUtility(772): Setting hadoop.tmp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-examples/target/test-data/3a0174d0-b9c7-07c8-9f76-818fb1a3ca1a/hadoop.tmp.dir in system properties and HBase conf
2023-07-28 11:01:57,871 INFO  [Time-limited test] hbase.HBaseTestingUtility(772): Setting hadoop.log.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-examples/target/test-data/3a0174d0-b9c7-07c8-9f76-818fb1a3ca1a/hadoop.log.dir in system properties and HBase conf
2023-07-28 11:01:57,872 INFO  [Time-limited test] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.local.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-examples/target/test-data/3a0174d0-b9c7-07c8-9f76-818fb1a3ca1a/mapreduce.cluster.local.dir in system properties and HBase conf
2023-07-28 11:01:57,872 INFO  [Time-limited test] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.temp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-examples/target/test-data/3a0174d0-b9c7-07c8-9f76-818fb1a3ca1a/mapreduce.cluster.temp.dir in system properties and HBase conf
2023-07-28 11:01:57,873 INFO  [Time-limited test] hbase.HBaseTestingUtility(759): read short circuit is OFF
2023-07-28 11:01:58,033 WARN  [Time-limited test] util.NativeCodeLoader(62): Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
2023-07-28 11:01:58,502 DEBUG [Time-limited test] fs.HFileSystem(308): The file system is not a DistributedFileSystem. Skipping on block location reordering
2023-07-28 11:01:58,509 INFO  [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.node-labels.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-examples/target/test-data/3a0174d0-b9c7-07c8-9f76-818fb1a3ca1a/yarn.node-labels.fs-store.root-dir in system properties and HBase conf
2023-07-28 11:01:58,509 INFO  [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.node-attribute.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-examples/target/test-data/3a0174d0-b9c7-07c8-9f76-818fb1a3ca1a/yarn.node-attribute.fs-store.root-dir in system properties and HBase conf
2023-07-28 11:01:58,510 INFO  [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.log-dirs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-examples/target/test-data/3a0174d0-b9c7-07c8-9f76-818fb1a3ca1a/yarn.nodemanager.log-dirs in system properties and HBase conf
2023-07-28 11:01:58,510 INFO  [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-examples/target/test-data/3a0174d0-b9c7-07c8-9f76-818fb1a3ca1a/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf
2023-07-28 11:01:58,510 INFO  [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.active-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-examples/target/test-data/3a0174d0-b9c7-07c8-9f76-818fb1a3ca1a/yarn.timeline-service.entity-group-fs-store.active-dir in system properties and HBase conf
2023-07-28 11:01:58,511 INFO  [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.done-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-examples/target/test-data/3a0174d0-b9c7-07c8-9f76-818fb1a3ca1a/yarn.timeline-service.entity-group-fs-store.done-dir in system properties and HBase conf
2023-07-28 11:01:58,511 INFO  [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-examples/target/test-data/3a0174d0-b9c7-07c8-9f76-818fb1a3ca1a/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf
2023-07-28 11:01:58,511 INFO  [Time-limited test] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-examples/target/test-data/3a0174d0-b9c7-07c8-9f76-818fb1a3ca1a/dfs.journalnode.edits.dir in system properties and HBase conf
2023-07-28 11:01:58,511 INFO  [Time-limited test] hbase.HBaseTestingUtility(772): Setting dfs.datanode.shared.file.descriptor.paths to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-examples/target/test-data/3a0174d0-b9c7-07c8-9f76-818fb1a3ca1a/dfs.datanode.shared.file.descriptor.paths in system properties and HBase conf
2023-07-28 11:01:58,512 INFO  [Time-limited test] hbase.HBaseTestingUtility(772): Setting nfs.dump.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-examples/target/test-data/3a0174d0-b9c7-07c8-9f76-818fb1a3ca1a/nfs.dump.dir in system properties and HBase conf
2023-07-28 11:01:58,512 INFO  [Time-limited test] hbase.HBaseTestingUtility(772): Setting java.io.tmpdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-examples/target/test-data/3a0174d0-b9c7-07c8-9f76-818fb1a3ca1a/java.io.tmpdir in system properties and HBase conf
2023-07-28 11:01:58,512 INFO  [Time-limited test] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-examples/target/test-data/3a0174d0-b9c7-07c8-9f76-818fb1a3ca1a/dfs.journalnode.edits.dir in system properties and HBase conf
2023-07-28 11:01:58,513 INFO  [Time-limited test] hbase.HBaseTestingUtility(772): Setting dfs.provided.aliasmap.inmemory.leveldb.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-examples/target/test-data/3a0174d0-b9c7-07c8-9f76-818fb1a3ca1a/dfs.provided.aliasmap.inmemory.leveldb.dir in system properties and HBase conf
2023-07-28 11:01:58,513 INFO  [Time-limited test] hbase.HBaseTestingUtility(772): Setting fs.s3a.committer.staging.tmp.path to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-examples/target/test-data/3a0174d0-b9c7-07c8-9f76-818fb1a3ca1a/fs.s3a.committer.staging.tmp.path in system properties and HBase conf
Formatting using clusterid: testClusterID
2023-07-28 11:01:59,037 WARN  [Time-limited test] conf.Configuration(1701): No unit for dfs.heartbeat.interval(3) assuming SECONDS
2023-07-28 11:01:59,040 WARN  [Time-limited test] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS
2023-07-28 11:01:59,318 WARN  [Time-limited test] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-namenode.properties,hadoop-metrics2.properties
2023-07-28 11:01:59,555 INFO  [Time-limited test] log.Slf4jLog(67): Logging to org.slf4j.impl.Reload4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
2023-07-28 11:01:59,574 WARN  [Time-limited test] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j
2023-07-28 11:01:59,615 INFO  [Time-limited test] log.Slf4jLog(67): jetty-6.1.26
2023-07-28 11:01:59,650 INFO  [Time-limited test] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/hdfs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-examples/target/test-data/3a0174d0-b9c7-07c8-9f76-818fb1a3ca1a/java.io.tmpdir/Jetty_localhost_localdomain_42707_hdfs____.bowijq/webapp
2023-07-28 11:01:59,799 INFO  [Time-limited test] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost.localdomain:42707
2023-07-28 11:01:59,839 WARN  [Time-limited test] conf.Configuration(1701): No unit for dfs.heartbeat.interval(3) assuming SECONDS
2023-07-28 11:01:59,839 WARN  [Time-limited test] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS
2023-07-28 11:02:00,234 WARN  [Listener at localhost.localdomain/39247] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j
2023-07-28 11:02:00,333 WARN  [Listener at localhost.localdomain/39247] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS
2023-07-28 11:02:00,359 WARN  [Listener at localhost.localdomain/39247] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j
2023-07-28 11:02:00,365 INFO  [Listener at localhost.localdomain/39247] log.Slf4jLog(67): jetty-6.1.26
2023-07-28 11:02:00,373 INFO  [Listener at localhost.localdomain/39247] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-examples/target/test-data/3a0174d0-b9c7-07c8-9f76-818fb1a3ca1a/java.io.tmpdir/Jetty_localhost_45775_datanode____.8ddkv9/webapp
2023-07-28 11:02:00,460 INFO  [Listener at localhost.localdomain/39247] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:45775
2023-07-28 11:02:00,891 WARN  [Listener at localhost.localdomain/42179] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j
2023-07-28 11:02:00,910 WARN  [Listener at localhost.localdomain/42179] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS
2023-07-28 11:02:00,915 WARN  [Listener at localhost.localdomain/42179] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j
2023-07-28 11:02:00,917 INFO  [Listener at localhost.localdomain/42179] log.Slf4jLog(67): jetty-6.1.26
2023-07-28 11:02:00,934 INFO  [Listener at localhost.localdomain/42179] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-examples/target/test-data/3a0174d0-b9c7-07c8-9f76-818fb1a3ca1a/java.io.tmpdir/Jetty_localhost_45579_datanode____.6m48kj/webapp
2023-07-28 11:02:01,046 INFO  [Listener at localhost.localdomain/42179] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:45579
2023-07-28 11:02:01,059 WARN  [Listener at localhost.localdomain/41711] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j
2023-07-28 11:02:01,079 WARN  [Listener at localhost.localdomain/41711] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS
2023-07-28 11:02:01,084 WARN  [Listener at localhost.localdomain/41711] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j
2023-07-28 11:02:01,086 INFO  [Listener at localhost.localdomain/41711] log.Slf4jLog(67): jetty-6.1.26
2023-07-28 11:02:01,103 INFO  [Listener at localhost.localdomain/41711] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-examples/target/test-data/3a0174d0-b9c7-07c8-9f76-818fb1a3ca1a/java.io.tmpdir/Jetty_localhost_33471_datanode____1rzrmd/webapp
2023-07-28 11:02:01,237 INFO  [Listener at localhost.localdomain/41711] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:33471
2023-07-28 11:02:01,264 WARN  [Listener at localhost.localdomain/34871] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j
2023-07-28 11:02:01,521 INFO  [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x1934379cc591a342: Processing first storage report for DS-f2bcc1e9-583d-427b-9ca1-0a917239420b from datanode 5d11317d-e261-471a-bfc8-2bca902ce57e
2023-07-28 11:02:01,523 INFO  [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x1934379cc591a342: from storage DS-f2bcc1e9-583d-427b-9ca1-0a917239420b node DatanodeRegistration(127.0.0.1:37739, datanodeUuid=5d11317d-e261-471a-bfc8-2bca902ce57e, infoPort=44431, infoSecurePort=0, ipcPort=41711, storageInfo=lv=-57;cid=testClusterID;nsid=1554721401;c=1690542119108), blocks: 0, hasStaleStorage: true, processing time: 1 msecs, invalidatedBlocks: 0
2023-07-28 11:02:01,523 INFO  [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x33bc60f437c76826: Processing first storage report for DS-4da4325d-0cc7-45a0-afc9-d2f7687c0775 from datanode 2a945093-d982-4545-9f5b-1973ef5d6e31
2023-07-28 11:02:01,523 INFO  [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x33bc60f437c76826: from storage DS-4da4325d-0cc7-45a0-afc9-d2f7687c0775 node DatanodeRegistration(127.0.0.1:33989, datanodeUuid=2a945093-d982-4545-9f5b-1973ef5d6e31, infoPort=42253, infoSecurePort=0, ipcPort=42179, storageInfo=lv=-57;cid=testClusterID;nsid=1554721401;c=1690542119108), blocks: 0, hasStaleStorage: true, processing time: 1 msecs, invalidatedBlocks: 0
2023-07-28 11:02:01,523 INFO  [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x6d839216cb316404: Processing first storage report for DS-e7ee288f-f5db-43f2-8268-2e2173e5aaa1 from datanode 7da0c9f3-dcb5-4341-afdc-5aa62f3a2895
2023-07-28 11:02:01,523 INFO  [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x6d839216cb316404: from storage DS-e7ee288f-f5db-43f2-8268-2e2173e5aaa1 node DatanodeRegistration(127.0.0.1:33153, datanodeUuid=7da0c9f3-dcb5-4341-afdc-5aa62f3a2895, infoPort=41305, infoSecurePort=0, ipcPort=34871, storageInfo=lv=-57;cid=testClusterID;nsid=1554721401;c=1690542119108), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0
2023-07-28 11:02:01,524 INFO  [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x1934379cc591a342: Processing first storage report for DS-ea710bb3-99b9-48a5-95be-a571d60347f8 from datanode 5d11317d-e261-471a-bfc8-2bca902ce57e
2023-07-28 11:02:01,524 INFO  [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x1934379cc591a342: from storage DS-ea710bb3-99b9-48a5-95be-a571d60347f8 node DatanodeRegistration(127.0.0.1:37739, datanodeUuid=5d11317d-e261-471a-bfc8-2bca902ce57e, infoPort=44431, infoSecurePort=0, ipcPort=41711, storageInfo=lv=-57;cid=testClusterID;nsid=1554721401;c=1690542119108), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0
2023-07-28 11:02:01,524 INFO  [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x33bc60f437c76826: Processing first storage report for DS-fc9fb5c9-82b1-47cf-84c6-c5091f34ff80 from datanode 2a945093-d982-4545-9f5b-1973ef5d6e31
2023-07-28 11:02:01,524 INFO  [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x33bc60f437c76826: from storage DS-fc9fb5c9-82b1-47cf-84c6-c5091f34ff80 node DatanodeRegistration(127.0.0.1:33989, datanodeUuid=2a945093-d982-4545-9f5b-1973ef5d6e31, infoPort=42253, infoSecurePort=0, ipcPort=42179, storageInfo=lv=-57;cid=testClusterID;nsid=1554721401;c=1690542119108), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0
2023-07-28 11:02:01,524 INFO  [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x6d839216cb316404: Processing first storage report for DS-896bdf01-54ff-4aa4-a7d0-83c1f4db6d3d from datanode 7da0c9f3-dcb5-4341-afdc-5aa62f3a2895
2023-07-28 11:02:01,524 INFO  [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x6d839216cb316404: from storage DS-896bdf01-54ff-4aa4-a7d0-83c1f4db6d3d node DatanodeRegistration(127.0.0.1:33153, datanodeUuid=7da0c9f3-dcb5-4341-afdc-5aa62f3a2895, infoPort=41305, infoSecurePort=0, ipcPort=34871, storageInfo=lv=-57;cid=testClusterID;nsid=1554721401;c=1690542119108), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0
2023-07-28 11:02:01,691 DEBUG [Listener at localhost.localdomain/34871] hbase.HBaseTestingUtility(649): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-examples/target/test-data/3a0174d0-b9c7-07c8-9f76-818fb1a3ca1a
2023-07-28 11:02:01,801 INFO  [Listener at localhost.localdomain/34871] zookeeper.MiniZooKeeperCluster(258): Started connectionTimeout=30000, dir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-examples/target/test-data/3a0174d0-b9c7-07c8-9f76-818fb1a3ca1a/cluster_435c5486-8dbd-6b85-9873-6bfa86d696ae/zookeeper_0, clientPort=57744, secureClientPort=-1, dataDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-examples/target/test-data/3a0174d0-b9c7-07c8-9f76-818fb1a3ca1a/cluster_435c5486-8dbd-6b85-9873-6bfa86d696ae/zookeeper_0/version-2, dataDirSize=424 dataLogDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-examples/target/test-data/3a0174d0-b9c7-07c8-9f76-818fb1a3ca1a/cluster_435c5486-8dbd-6b85-9873-6bfa86d696ae/zookeeper_0/version-2, dataLogSize=424 tickTime=2000, maxClientCnxns=300, minSessionTimeout=4000, maxSessionTimeout=40000, serverId=0
2023-07-28 11:02:01,822 INFO  [Listener at localhost.localdomain/34871] zookeeper.MiniZooKeeperCluster(283): Started MiniZooKeeperCluster and ran 'stat' on client port=57744
2023-07-28 11:02:01,833 INFO  [Listener at localhost.localdomain/34871] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2023-07-28 11:02:01,836 INFO  [Listener at localhost.localdomain/34871] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2023-07-28 11:02:02,465 INFO  [Listener at localhost.localdomain/34871] util.FSUtils(471): Created version file at hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b with version=8
2023-07-28 11:02:02,466 INFO  [Listener at localhost.localdomain/34871] hbase.HBaseTestingUtility(1406): Setting hbase.fs.tmp.dir to hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/hbase-staging
2023-07-28 11:02:02,479 DEBUG [Listener at localhost.localdomain/34871] hbase.LocalHBaseCluster(134): Setting Master Port to random.
2023-07-28 11:02:02,479 DEBUG [Listener at localhost.localdomain/34871] hbase.LocalHBaseCluster(141): Setting RegionServer Port to random.
2023-07-28 11:02:02,479 DEBUG [Listener at localhost.localdomain/34871] hbase.LocalHBaseCluster(151): Setting RS InfoServer Port to random.
2023-07-28 11:02:02,479 DEBUG [Listener at localhost.localdomain/34871] hbase.LocalHBaseCluster(159): Setting Master InfoServer Port to random.
2023-07-28 11:02:02,747 INFO  [Listener at localhost.localdomain/34871] metrics.MetricRegistriesLoader(60): Loaded MetricRegistries class org.apache.hadoop.hbase.metrics.impl.MetricRegistriesImpl
2023-07-28 11:02:03,153 INFO  [Listener at localhost.localdomain/34871] client.ConnectionUtils(127): master/jenkins-hbase17:0 server-side Connection retries=45
2023-07-28 11:02:03,181 INFO  [Listener at localhost.localdomain/34871] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3
2023-07-28 11:02:03,182 INFO  [Listener at localhost.localdomain/34871] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3
2023-07-28 11:02:03,182 INFO  [Listener at localhost.localdomain/34871] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0
2023-07-28 11:02:03,182 INFO  [Listener at localhost.localdomain/34871] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3
2023-07-28 11:02:03,182 INFO  [Listener at localhost.localdomain/34871] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1
2023-07-28 11:02:03,314 INFO  [Listener at localhost.localdomain/34871] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.MasterService, hbase.pb.RegionServerStatusService, hbase.pb.LockService, hbase.pb.HbckService, hbase.pb.ClientMetaService, hbase.pb.ClientService, hbase.pb.AdminService
2023-07-28 11:02:03,377 DEBUG [Listener at localhost.localdomain/34871] util.ClassSize(228): Using Unsafe to estimate memory layout
2023-07-28 11:02:03,459 INFO  [Listener at localhost.localdomain/34871] ipc.NettyRpcServer(120): Bind to /136.243.18.41:42003
2023-07-28 11:02:03,467 INFO  [Listener at localhost.localdomain/34871] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2023-07-28 11:02:03,469 INFO  [Listener at localhost.localdomain/34871] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2023-07-28 11:02:03,490 INFO  [Listener at localhost.localdomain/34871] zookeeper.RecoverableZooKeeper(93): Process identifier=master:42003 connecting to ZooKeeper ensemble=127.0.0.1:57744
2023-07-28 11:02:03,533 DEBUG [Listener at localhost.localdomain/34871-EventThread] zookeeper.ZKWatcher(600): master:420030x0, quorum=127.0.0.1:57744, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null
2023-07-28 11:02:03,536 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): master:42003-0x101ab971e4e0000 connected
2023-07-28 11:02:03,566 DEBUG [Listener at localhost.localdomain/34871] zookeeper.ZKUtil(164): master:42003-0x101ab971e4e0000, quorum=127.0.0.1:57744, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master
2023-07-28 11:02:03,567 DEBUG [Listener at localhost.localdomain/34871] zookeeper.ZKUtil(164): master:42003-0x101ab971e4e0000, quorum=127.0.0.1:57744, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running
2023-07-28 11:02:03,570 DEBUG [Listener at localhost.localdomain/34871] zookeeper.ZKUtil(164): master:42003-0x101ab971e4e0000, quorum=127.0.0.1:57744, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl
2023-07-28 11:02:03,580 DEBUG [Listener at localhost.localdomain/34871] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=42003
2023-07-28 11:02:03,584 DEBUG [Listener at localhost.localdomain/34871] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=42003
2023-07-28 11:02:03,585 DEBUG [Listener at localhost.localdomain/34871] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=42003
2023-07-28 11:02:03,588 DEBUG [Listener at localhost.localdomain/34871] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=42003
2023-07-28 11:02:03,589 DEBUG [Listener at localhost.localdomain/34871] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=42003
2023-07-28 11:02:03,636 INFO  [Listener at localhost.localdomain/34871] log.Log(170): Logging initialized @6531ms to org.apache.hbase.thirdparty.org.eclipse.jetty.util.log.Slf4jLog
2023-07-28 11:02:03,802 INFO  [Listener at localhost.localdomain/34871] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter)
2023-07-28 11:02:03,803 INFO  [Listener at localhost.localdomain/34871] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter)
2023-07-28 11:02:03,803 INFO  [Listener at localhost.localdomain/34871] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter)
2023-07-28 11:02:03,805 INFO  [Listener at localhost.localdomain/34871] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context master
2023-07-28 11:02:03,805 INFO  [Listener at localhost.localdomain/34871] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs
2023-07-28 11:02:03,805 INFO  [Listener at localhost.localdomain/34871] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static
2023-07-28 11:02:03,808 INFO  [Listener at localhost.localdomain/34871] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint.
2023-07-28 11:02:03,897 INFO  [Listener at localhost.localdomain/34871] http.HttpServer(1146): Jetty bound to port 40945
2023-07-28 11:02:03,901 INFO  [Listener at localhost.localdomain/34871] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09
2023-07-28 11:02:03,938 INFO  [Listener at localhost.localdomain/34871] http.SecurityHeadersFilter(48): Added security headers filter
2023-07-28 11:02:03,942 INFO  [Listener at localhost.localdomain/34871] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@479e369e{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-examples/target/test-data/3a0174d0-b9c7-07c8-9f76-818fb1a3ca1a/hadoop.log.dir/,AVAILABLE}
2023-07-28 11:02:03,942 INFO  [Listener at localhost.localdomain/34871] http.SecurityHeadersFilter(48): Added security headers filter
2023-07-28 11:02:03,942 INFO  [Listener at localhost.localdomain/34871] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@373e4047{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,AVAILABLE}
2023-07-28 11:02:04,016 INFO  [Listener at localhost.localdomain/34871] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet
2023-07-28 11:02:04,034 INFO  [Listener at localhost.localdomain/34871] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0
2023-07-28 11:02:04,034 INFO  [Listener at localhost.localdomain/34871] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults
2023-07-28 11:02:04,037 INFO  [Listener at localhost.localdomain/34871] session.HouseKeeper(132): node0 Scavenging every 660000ms
2023-07-28 11:02:04,049 INFO  [Listener at localhost.localdomain/34871] http.SecurityHeadersFilter(48): Added security headers filter
2023-07-28 11:02:04,080 INFO  [Listener at localhost.localdomain/34871] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@fd66f76{master,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/master/,AVAILABLE}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/master}
2023-07-28 11:02:04,095 INFO  [Listener at localhost.localdomain/34871] server.AbstractConnector(333): Started ServerConnector@6079f759{HTTP/1.1, (http/1.1)}{0.0.0.0:40945}
2023-07-28 11:02:04,095 INFO  [Listener at localhost.localdomain/34871] server.Server(415): Started @6990ms
2023-07-28 11:02:04,099 INFO  [Listener at localhost.localdomain/34871] master.HMaster(444): hbase.rootdir=hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b, hbase.cluster.distributed=false
2023-07-28 11:02:04,170 INFO  [Listener at localhost.localdomain/34871] client.ConnectionUtils(127): regionserver/jenkins-hbase17:0 server-side Connection retries=45
2023-07-28 11:02:04,171 INFO  [Listener at localhost.localdomain/34871] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3
2023-07-28 11:02:04,171 INFO  [Listener at localhost.localdomain/34871] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3
2023-07-28 11:02:04,171 INFO  [Listener at localhost.localdomain/34871] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0
2023-07-28 11:02:04,172 INFO  [Listener at localhost.localdomain/34871] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3
2023-07-28 11:02:04,172 INFO  [Listener at localhost.localdomain/34871] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1
2023-07-28 11:02:04,177 INFO  [Listener at localhost.localdomain/34871] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService
2023-07-28 11:02:04,180 INFO  [Listener at localhost.localdomain/34871] ipc.NettyRpcServer(120): Bind to /136.243.18.41:38067
2023-07-28 11:02:04,182 INFO  [Listener at localhost.localdomain/34871] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB
2023-07-28 11:02:04,189 DEBUG [Listener at localhost.localdomain/34871] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5
2023-07-28 11:02:04,190 INFO  [Listener at localhost.localdomain/34871] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2023-07-28 11:02:04,192 INFO  [Listener at localhost.localdomain/34871] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2023-07-28 11:02:04,194 INFO  [Listener at localhost.localdomain/34871] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:38067 connecting to ZooKeeper ensemble=127.0.0.1:57744
2023-07-28 11:02:04,197 DEBUG [Listener at localhost.localdomain/34871-EventThread] zookeeper.ZKWatcher(600): regionserver:380670x0, quorum=127.0.0.1:57744, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null
2023-07-28 11:02:04,198 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:38067-0x101ab971e4e0001 connected
2023-07-28 11:02:04,198 DEBUG [Listener at localhost.localdomain/34871] zookeeper.ZKUtil(164): regionserver:38067-0x101ab971e4e0001, quorum=127.0.0.1:57744, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master
2023-07-28 11:02:04,200 DEBUG [Listener at localhost.localdomain/34871] zookeeper.ZKUtil(164): regionserver:38067-0x101ab971e4e0001, quorum=127.0.0.1:57744, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running
2023-07-28 11:02:04,201 DEBUG [Listener at localhost.localdomain/34871] zookeeper.ZKUtil(164): regionserver:38067-0x101ab971e4e0001, quorum=127.0.0.1:57744, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl
2023-07-28 11:02:04,201 DEBUG [Listener at localhost.localdomain/34871] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=38067
2023-07-28 11:02:04,201 DEBUG [Listener at localhost.localdomain/34871] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=38067
2023-07-28 11:02:04,202 DEBUG [Listener at localhost.localdomain/34871] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=38067
2023-07-28 11:02:04,202 DEBUG [Listener at localhost.localdomain/34871] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=38067
2023-07-28 11:02:04,202 DEBUG [Listener at localhost.localdomain/34871] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=38067
2023-07-28 11:02:04,204 INFO  [Listener at localhost.localdomain/34871] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter)
2023-07-28 11:02:04,205 INFO  [Listener at localhost.localdomain/34871] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter)
2023-07-28 11:02:04,205 INFO  [Listener at localhost.localdomain/34871] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter)
2023-07-28 11:02:04,206 INFO  [Listener at localhost.localdomain/34871] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver
2023-07-28 11:02:04,206 INFO  [Listener at localhost.localdomain/34871] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static
2023-07-28 11:02:04,206 INFO  [Listener at localhost.localdomain/34871] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs
2023-07-28 11:02:04,207 INFO  [Listener at localhost.localdomain/34871] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint.
2023-07-28 11:02:04,208 INFO  [Listener at localhost.localdomain/34871] http.HttpServer(1146): Jetty bound to port 36887
2023-07-28 11:02:04,208 INFO  [Listener at localhost.localdomain/34871] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09
2023-07-28 11:02:04,211 INFO  [Listener at localhost.localdomain/34871] http.SecurityHeadersFilter(48): Added security headers filter
2023-07-28 11:02:04,211 INFO  [Listener at localhost.localdomain/34871] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@62f48f23{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-examples/target/test-data/3a0174d0-b9c7-07c8-9f76-818fb1a3ca1a/hadoop.log.dir/,AVAILABLE}
2023-07-28 11:02:04,212 INFO  [Listener at localhost.localdomain/34871] http.SecurityHeadersFilter(48): Added security headers filter
2023-07-28 11:02:04,212 INFO  [Listener at localhost.localdomain/34871] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@76bb0a18{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,AVAILABLE}
2023-07-28 11:02:04,224 INFO  [Listener at localhost.localdomain/34871] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet
2023-07-28 11:02:04,226 INFO  [Listener at localhost.localdomain/34871] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0
2023-07-28 11:02:04,226 INFO  [Listener at localhost.localdomain/34871] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults
2023-07-28 11:02:04,226 INFO  [Listener at localhost.localdomain/34871] session.HouseKeeper(132): node0 Scavenging every 660000ms
2023-07-28 11:02:04,227 INFO  [Listener at localhost.localdomain/34871] http.SecurityHeadersFilter(48): Added security headers filter
2023-07-28 11:02:04,231 INFO  [Listener at localhost.localdomain/34871] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@7f9afa35{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver/,AVAILABLE}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver}
2023-07-28 11:02:04,232 INFO  [Listener at localhost.localdomain/34871] server.AbstractConnector(333): Started ServerConnector@39815010{HTTP/1.1, (http/1.1)}{0.0.0.0:36887}
2023-07-28 11:02:04,232 INFO  [Listener at localhost.localdomain/34871] server.Server(415): Started @7127ms
2023-07-28 11:02:04,243 INFO  [Listener at localhost.localdomain/34871] client.ConnectionUtils(127): regionserver/jenkins-hbase17:0 server-side Connection retries=45
2023-07-28 11:02:04,243 INFO  [Listener at localhost.localdomain/34871] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3
2023-07-28 11:02:04,243 INFO  [Listener at localhost.localdomain/34871] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3
2023-07-28 11:02:04,244 INFO  [Listener at localhost.localdomain/34871] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0
2023-07-28 11:02:04,244 INFO  [Listener at localhost.localdomain/34871] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3
2023-07-28 11:02:04,244 INFO  [Listener at localhost.localdomain/34871] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1
2023-07-28 11:02:04,244 INFO  [Listener at localhost.localdomain/34871] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService
2023-07-28 11:02:04,245 INFO  [Listener at localhost.localdomain/34871] ipc.NettyRpcServer(120): Bind to /136.243.18.41:34167
2023-07-28 11:02:04,246 INFO  [Listener at localhost.localdomain/34871] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB
2023-07-28 11:02:04,247 DEBUG [Listener at localhost.localdomain/34871] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5
2023-07-28 11:02:04,248 INFO  [Listener at localhost.localdomain/34871] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2023-07-28 11:02:04,249 INFO  [Listener at localhost.localdomain/34871] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2023-07-28 11:02:04,251 INFO  [Listener at localhost.localdomain/34871] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:34167 connecting to ZooKeeper ensemble=127.0.0.1:57744
2023-07-28 11:02:04,254 DEBUG [Listener at localhost.localdomain/34871-EventThread] zookeeper.ZKWatcher(600): regionserver:341670x0, quorum=127.0.0.1:57744, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null
2023-07-28 11:02:04,255 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:34167-0x101ab971e4e0002 connected
2023-07-28 11:02:04,255 DEBUG [Listener at localhost.localdomain/34871] zookeeper.ZKUtil(164): regionserver:34167-0x101ab971e4e0002, quorum=127.0.0.1:57744, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master
2023-07-28 11:02:04,256 DEBUG [Listener at localhost.localdomain/34871] zookeeper.ZKUtil(164): regionserver:34167-0x101ab971e4e0002, quorum=127.0.0.1:57744, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running
2023-07-28 11:02:04,257 DEBUG [Listener at localhost.localdomain/34871] zookeeper.ZKUtil(164): regionserver:34167-0x101ab971e4e0002, quorum=127.0.0.1:57744, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl
2023-07-28 11:02:04,258 DEBUG [Listener at localhost.localdomain/34871] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=34167
2023-07-28 11:02:04,258 DEBUG [Listener at localhost.localdomain/34871] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=34167
2023-07-28 11:02:04,259 DEBUG [Listener at localhost.localdomain/34871] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=34167
2023-07-28 11:02:04,260 DEBUG [Listener at localhost.localdomain/34871] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=34167
2023-07-28 11:02:04,262 DEBUG [Listener at localhost.localdomain/34871] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=34167
2023-07-28 11:02:04,264 INFO  [Listener at localhost.localdomain/34871] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter)
2023-07-28 11:02:04,264 INFO  [Listener at localhost.localdomain/34871] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter)
2023-07-28 11:02:04,265 INFO  [Listener at localhost.localdomain/34871] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter)
2023-07-28 11:02:04,265 INFO  [Listener at localhost.localdomain/34871] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver
2023-07-28 11:02:04,265 INFO  [Listener at localhost.localdomain/34871] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs
2023-07-28 11:02:04,266 INFO  [Listener at localhost.localdomain/34871] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static
2023-07-28 11:02:04,266 INFO  [Listener at localhost.localdomain/34871] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint.
2023-07-28 11:02:04,267 INFO  [Listener at localhost.localdomain/34871] http.HttpServer(1146): Jetty bound to port 44729
2023-07-28 11:02:04,267 INFO  [Listener at localhost.localdomain/34871] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09
2023-07-28 11:02:04,271 INFO  [Listener at localhost.localdomain/34871] http.SecurityHeadersFilter(48): Added security headers filter
2023-07-28 11:02:04,272 INFO  [Listener at localhost.localdomain/34871] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@d4b1c99{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-examples/target/test-data/3a0174d0-b9c7-07c8-9f76-818fb1a3ca1a/hadoop.log.dir/,AVAILABLE}
2023-07-28 11:02:04,272 INFO  [Listener at localhost.localdomain/34871] http.SecurityHeadersFilter(48): Added security headers filter
2023-07-28 11:02:04,272 INFO  [Listener at localhost.localdomain/34871] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@258d96c9{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,AVAILABLE}
2023-07-28 11:02:04,282 INFO  [Listener at localhost.localdomain/34871] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet
2023-07-28 11:02:04,282 INFO  [Listener at localhost.localdomain/34871] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0
2023-07-28 11:02:04,283 INFO  [Listener at localhost.localdomain/34871] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults
2023-07-28 11:02:04,283 INFO  [Listener at localhost.localdomain/34871] session.HouseKeeper(132): node0 Scavenging every 600000ms
2023-07-28 11:02:04,285 INFO  [Listener at localhost.localdomain/34871] http.SecurityHeadersFilter(48): Added security headers filter
2023-07-28 11:02:04,286 INFO  [Listener at localhost.localdomain/34871] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@ba70917{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver/,AVAILABLE}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver}
2023-07-28 11:02:04,287 INFO  [Listener at localhost.localdomain/34871] server.AbstractConnector(333): Started ServerConnector@71d03e0c{HTTP/1.1, (http/1.1)}{0.0.0.0:44729}
2023-07-28 11:02:04,287 INFO  [Listener at localhost.localdomain/34871] server.Server(415): Started @7182ms
2023-07-28 11:02:04,300 INFO  [Listener at localhost.localdomain/34871] client.ConnectionUtils(127): regionserver/jenkins-hbase17:0 server-side Connection retries=45
2023-07-28 11:02:04,300 INFO  [Listener at localhost.localdomain/34871] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3
2023-07-28 11:02:04,300 INFO  [Listener at localhost.localdomain/34871] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3
2023-07-28 11:02:04,300 INFO  [Listener at localhost.localdomain/34871] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0
2023-07-28 11:02:04,300 INFO  [Listener at localhost.localdomain/34871] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3
2023-07-28 11:02:04,300 INFO  [Listener at localhost.localdomain/34871] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1
2023-07-28 11:02:04,300 INFO  [Listener at localhost.localdomain/34871] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService
2023-07-28 11:02:04,305 INFO  [Listener at localhost.localdomain/34871] ipc.NettyRpcServer(120): Bind to /136.243.18.41:46497
2023-07-28 11:02:04,306 INFO  [Listener at localhost.localdomain/34871] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB
2023-07-28 11:02:04,308 DEBUG [Listener at localhost.localdomain/34871] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5
2023-07-28 11:02:04,309 INFO  [Listener at localhost.localdomain/34871] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2023-07-28 11:02:04,311 INFO  [Listener at localhost.localdomain/34871] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2023-07-28 11:02:04,313 INFO  [Listener at localhost.localdomain/34871] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:46497 connecting to ZooKeeper ensemble=127.0.0.1:57744
2023-07-28 11:02:04,317 DEBUG [Listener at localhost.localdomain/34871-EventThread] zookeeper.ZKWatcher(600): regionserver:464970x0, quorum=127.0.0.1:57744, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null
2023-07-28 11:02:04,318 DEBUG [Listener at localhost.localdomain/34871] zookeeper.ZKUtil(164): regionserver:464970x0, quorum=127.0.0.1:57744, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master
2023-07-28 11:02:04,319 DEBUG [Listener at localhost.localdomain/34871] zookeeper.ZKUtil(164): regionserver:464970x0, quorum=127.0.0.1:57744, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running
2023-07-28 11:02:04,320 DEBUG [Listener at localhost.localdomain/34871] zookeeper.ZKUtil(164): regionserver:464970x0, quorum=127.0.0.1:57744, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl
2023-07-28 11:02:04,321 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:46497-0x101ab971e4e0003 connected
2023-07-28 11:02:04,321 DEBUG [Listener at localhost.localdomain/34871] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=46497
2023-07-28 11:02:04,321 DEBUG [Listener at localhost.localdomain/34871] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=46497
2023-07-28 11:02:04,325 DEBUG [Listener at localhost.localdomain/34871] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=46497
2023-07-28 11:02:04,326 DEBUG [Listener at localhost.localdomain/34871] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=46497
2023-07-28 11:02:04,327 DEBUG [Listener at localhost.localdomain/34871] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=46497
2023-07-28 11:02:04,330 INFO  [Listener at localhost.localdomain/34871] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter)
2023-07-28 11:02:04,330 INFO  [Listener at localhost.localdomain/34871] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter)
2023-07-28 11:02:04,330 INFO  [Listener at localhost.localdomain/34871] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter)
2023-07-28 11:02:04,330 INFO  [Listener at localhost.localdomain/34871] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver
2023-07-28 11:02:04,331 INFO  [Listener at localhost.localdomain/34871] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static
2023-07-28 11:02:04,331 INFO  [Listener at localhost.localdomain/34871] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs
2023-07-28 11:02:04,331 INFO  [Listener at localhost.localdomain/34871] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint.
2023-07-28 11:02:04,332 INFO  [Listener at localhost.localdomain/34871] http.HttpServer(1146): Jetty bound to port 41249
2023-07-28 11:02:04,332 INFO  [Listener at localhost.localdomain/34871] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09
2023-07-28 11:02:04,338 INFO  [Listener at localhost.localdomain/34871] http.SecurityHeadersFilter(48): Added security headers filter
2023-07-28 11:02:04,338 INFO  [Listener at localhost.localdomain/34871] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@56ae0893{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-examples/target/test-data/3a0174d0-b9c7-07c8-9f76-818fb1a3ca1a/hadoop.log.dir/,AVAILABLE}
2023-07-28 11:02:04,339 INFO  [Listener at localhost.localdomain/34871] http.SecurityHeadersFilter(48): Added security headers filter
2023-07-28 11:02:04,339 INFO  [Listener at localhost.localdomain/34871] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@29c2a492{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,AVAILABLE}
2023-07-28 11:02:04,348 INFO  [Listener at localhost.localdomain/34871] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet
2023-07-28 11:02:04,349 INFO  [Listener at localhost.localdomain/34871] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0
2023-07-28 11:02:04,349 INFO  [Listener at localhost.localdomain/34871] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults
2023-07-28 11:02:04,349 INFO  [Listener at localhost.localdomain/34871] session.HouseKeeper(132): node0 Scavenging every 600000ms
2023-07-28 11:02:04,350 INFO  [Listener at localhost.localdomain/34871] http.SecurityHeadersFilter(48): Added security headers filter
2023-07-28 11:02:04,351 INFO  [Listener at localhost.localdomain/34871] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@174d5a5f{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver/,AVAILABLE}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver}
2023-07-28 11:02:04,353 INFO  [Listener at localhost.localdomain/34871] server.AbstractConnector(333): Started ServerConnector@3aa32b08{HTTP/1.1, (http/1.1)}{0.0.0.0:41249}
2023-07-28 11:02:04,353 INFO  [Listener at localhost.localdomain/34871] server.Server(415): Started @7248ms
2023-07-28 11:02:04,360 INFO  [master/jenkins-hbase17:0:becomeActiveMaster] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09
2023-07-28 11:02:04,393 INFO  [master/jenkins-hbase17:0:becomeActiveMaster] server.AbstractConnector(333): Started ServerConnector@496cf30a{HTTP/1.1, (http/1.1)}{0.0.0.0:43771}
2023-07-28 11:02:04,393 INFO  [master/jenkins-hbase17:0:becomeActiveMaster] server.Server(415): Started @7288ms
2023-07-28 11:02:04,393 INFO  [master/jenkins-hbase17:0:becomeActiveMaster] master.HMaster(2168): Adding backup master ZNode /hbase/backup-masters/jenkins-hbase17.apache.org,42003,1690542122606
2023-07-28 11:02:04,404 DEBUG [Listener at localhost.localdomain/34871-EventThread] zookeeper.ZKWatcher(600): master:42003-0x101ab971e4e0000, quorum=127.0.0.1:57744, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters
2023-07-28 11:02:04,406 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:42003-0x101ab971e4e0000, quorum=127.0.0.1:57744, baseZNode=/hbase Set watcher on existing znode=/hbase/backup-masters/jenkins-hbase17.apache.org,42003,1690542122606
2023-07-28 11:02:04,422 DEBUG [Listener at localhost.localdomain/34871-EventThread] zookeeper.ZKWatcher(600): regionserver:34167-0x101ab971e4e0002, quorum=127.0.0.1:57744, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master
2023-07-28 11:02:04,422 DEBUG [Listener at localhost.localdomain/34871-EventThread] zookeeper.ZKWatcher(600): master:42003-0x101ab971e4e0000, quorum=127.0.0.1:57744, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master
2023-07-28 11:02:04,422 DEBUG [Listener at localhost.localdomain/34871-EventThread] zookeeper.ZKWatcher(600): regionserver:38067-0x101ab971e4e0001, quorum=127.0.0.1:57744, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master
2023-07-28 11:02:04,422 DEBUG [Listener at localhost.localdomain/34871-EventThread] zookeeper.ZKWatcher(600): master:42003-0x101ab971e4e0000, quorum=127.0.0.1:57744, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase
2023-07-28 11:02:04,422 DEBUG [Listener at localhost.localdomain/34871-EventThread] zookeeper.ZKWatcher(600): regionserver:46497-0x101ab971e4e0003, quorum=127.0.0.1:57744, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master
2023-07-28 11:02:04,424 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:42003-0x101ab971e4e0000, quorum=127.0.0.1:57744, baseZNode=/hbase Set watcher on existing znode=/hbase/master
2023-07-28 11:02:04,425 INFO  [master/jenkins-hbase17:0:becomeActiveMaster] master.ActiveMasterManager(227): Deleting ZNode for /hbase/backup-masters/jenkins-hbase17.apache.org,42003,1690542122606 from backup master directory
2023-07-28 11:02:04,425 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): master:42003-0x101ab971e4e0000, quorum=127.0.0.1:57744, baseZNode=/hbase Set watcher on existing znode=/hbase/master
2023-07-28 11:02:04,429 DEBUG [Listener at localhost.localdomain/34871-EventThread] zookeeper.ZKWatcher(600): master:42003-0x101ab971e4e0000, quorum=127.0.0.1:57744, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/backup-masters/jenkins-hbase17.apache.org,42003,1690542122606
2023-07-28 11:02:04,429 DEBUG [Listener at localhost.localdomain/34871-EventThread] zookeeper.ZKWatcher(600): master:42003-0x101ab971e4e0000, quorum=127.0.0.1:57744, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters
2023-07-28 11:02:04,430 WARN  [master/jenkins-hbase17:0:becomeActiveMaster] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!)
2023-07-28 11:02:04,430 INFO  [master/jenkins-hbase17:0:becomeActiveMaster] master.ActiveMasterManager(237): Registered as active master=jenkins-hbase17.apache.org,42003,1690542122606
2023-07-28 11:02:04,433 INFO  [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.ChunkCreator(497): Allocating data MemStoreChunkPool with chunk size 2 MB, max count 352, initial count 0
2023-07-28 11:02:04,434 INFO  [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.ChunkCreator(497): Allocating index MemStoreChunkPool with chunk size 204.80 KB, max count 391, initial count 0
2023-07-28 11:02:04,534 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] util.FSUtils(620): Created cluster ID file at hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/hbase.id with ID: 8326d3cf-c816-4730-acb6-87f11280e158
2023-07-28 11:02:04,576 INFO  [master/jenkins-hbase17:0:becomeActiveMaster] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2023-07-28 11:02:04,593 DEBUG [Listener at localhost.localdomain/34871-EventThread] zookeeper.ZKWatcher(600): master:42003-0x101ab971e4e0000, quorum=127.0.0.1:57744, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase
2023-07-28 11:02:04,643 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] zookeeper.ReadOnlyZKClient(139): Connect 0x1e3870f6 to 127.0.0.1:57744 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms
2023-07-28 11:02:04,669 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@7d0c3dd2, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null
2023-07-28 11:02:04,691 INFO  [master/jenkins-hbase17:0:becomeActiveMaster] region.MasterRegion(309): Create or load local region for table 'master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}
2023-07-28 11:02:04,692 INFO  [master/jenkins-hbase17:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(132): Injected flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000
2023-07-28 11:02:04,711 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] asyncfs.FanOutOneBlockAsyncDFSOutputHelper(264): ClientProtocol::create wrong number of arguments, should be hadoop 3.2 or below
2023-07-28 11:02:04,711 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] asyncfs.FanOutOneBlockAsyncDFSOutputHelper(270): ClientProtocol::create wrong number of arguments, should be hadoop 2.x
2023-07-28 11:02:04,713 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] asyncfs.FanOutOneBlockAsyncDFSOutputHelper(279): can not find SHOULD_REPLICATE flag, should be hadoop 2.x
java.lang.IllegalArgumentException: No enum constant org.apache.hadoop.fs.CreateFlag.SHOULD_REPLICATE
	at java.lang.Enum.valueOf(Enum.java:238)
	at org.apache.hadoop.fs.CreateFlag.valueOf(CreateFlag.java:63)
	at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.loadShouldReplicateFlag(FanOutOneBlockAsyncDFSOutputHelper.java:277)
	at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.<clinit>(FanOutOneBlockAsyncDFSOutputHelper.java:304)
	at java.lang.Class.forName0(Native Method)
	at java.lang.Class.forName(Class.java:264)
	at org.apache.hadoop.hbase.wal.AsyncFSWALProvider.load(AsyncFSWALProvider.java:139)
	at org.apache.hadoop.hbase.wal.WALFactory.getProviderClass(WALFactory.java:135)
	at org.apache.hadoop.hbase.wal.WALFactory.getProvider(WALFactory.java:175)
	at org.apache.hadoop.hbase.wal.WALFactory.<init>(WALFactory.java:202)
	at org.apache.hadoop.hbase.wal.WALFactory.<init>(WALFactory.java:182)
	at org.apache.hadoop.hbase.master.region.MasterRegion.create(MasterRegion.java:339)
	at org.apache.hadoop.hbase.master.region.MasterRegionFactory.create(MasterRegionFactory.java:104)
	at org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:855)
	at org.apache.hadoop.hbase.master.HMaster.startActiveMasterManager(HMaster.java:2193)
	at org.apache.hadoop.hbase.master.HMaster.lambda$run$0(HMaster.java:528)
	at java.lang.Thread.run(Thread.java:750)
2023-07-28 11:02:04,717 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(243): No decryptEncryptedDataEncryptionKey method in DFSClient, should be hadoop version with HDFS-12396
java.lang.NoSuchMethodException: org.apache.hadoop.hdfs.DFSClient.decryptEncryptedDataEncryptionKey(org.apache.hadoop.fs.FileEncryptionInfo)
	at java.lang.Class.getDeclaredMethod(Class.java:2130)
	at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper.createTransparentCryptoHelperWithoutHDFS12396(FanOutOneBlockAsyncDFSOutputSaslHelper.java:182)
	at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper.createTransparentCryptoHelper(FanOutOneBlockAsyncDFSOutputSaslHelper.java:241)
	at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper.<clinit>(FanOutOneBlockAsyncDFSOutputSaslHelper.java:252)
	at java.lang.Class.forName0(Native Method)
	at java.lang.Class.forName(Class.java:264)
	at org.apache.hadoop.hbase.wal.AsyncFSWALProvider.load(AsyncFSWALProvider.java:140)
	at org.apache.hadoop.hbase.wal.WALFactory.getProviderClass(WALFactory.java:135)
	at org.apache.hadoop.hbase.wal.WALFactory.getProvider(WALFactory.java:175)
	at org.apache.hadoop.hbase.wal.WALFactory.<init>(WALFactory.java:202)
	at org.apache.hadoop.hbase.wal.WALFactory.<init>(WALFactory.java:182)
	at org.apache.hadoop.hbase.master.region.MasterRegion.create(MasterRegion.java:339)
	at org.apache.hadoop.hbase.master.region.MasterRegionFactory.create(MasterRegionFactory.java:104)
	at org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:855)
	at org.apache.hadoop.hbase.master.HMaster.startActiveMasterManager(HMaster.java:2193)
	at org.apache.hadoop.hbase.master.HMaster.lambda$run$0(HMaster.java:528)
	at java.lang.Thread.run(Thread.java:750)
2023-07-28 11:02:04,718 INFO  [master/jenkins-hbase17:0:becomeActiveMaster] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider
2023-07-28 11:02:04,751 INFO  [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(7693): Creating {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, under table dir hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/MasterData/data/master/store-tmp
2023-07-28 11:02:04,794 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable
2023-07-28 11:02:04,794 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes
2023-07-28 11:02:04,794 INFO  [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682.
2023-07-28 11:02:04,795 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682.
2023-07-28 11:02:04,795 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms
2023-07-28 11:02:04,795 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682.
2023-07-28 11:02:04,795 INFO  [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682.
2023-07-28 11:02:04,795 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-28 11:02:04,796 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] region.MasterRegion(191): WALDir=hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/MasterData/WALs/jenkins-hbase17.apache.org,42003,1690542122606 2023-07-28 11:02:04,825 INFO [master/jenkins-hbase17:0:becomeActiveMaster] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase17.apache.org%2C42003%2C1690542122606, suffix=, logDir=hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/MasterData/WALs/jenkins-hbase17.apache.org,42003,1690542122606, archiveDir=hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/MasterData/oldWALs, maxLogs=10 2023-07-28 11:02:04,887 DEBUG [RS-EventLoopGroup-5-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:33153,DS-e7ee288f-f5db-43f2-8268-2e2173e5aaa1,DISK] 2023-07-28 11:02:04,887 DEBUG [RS-EventLoopGroup-5-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:37739,DS-f2bcc1e9-583d-427b-9ca1-0a917239420b,DISK] 2023-07-28 11:02:04,887 DEBUG [RS-EventLoopGroup-5-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:33989,DS-4da4325d-0cc7-45a0-afc9-d2f7687c0775,DISK] 2023-07-28 11:02:04,895 DEBUG [RS-EventLoopGroup-5-2] asyncfs.ProtobufDecoder(123): Hadoop 3.2 and below use unshaded protobuf. 
java.lang.ClassNotFoundException: org.apache.hadoop.thirdparty.protobuf.MessageLite at java.net.URLClassLoader.findClass(URLClassLoader.java:387) at java.lang.ClassLoader.loadClass(ClassLoader.java:418) at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:352) at java.lang.ClassLoader.loadClass(ClassLoader.java:351) at java.lang.Class.forName0(Native Method) at java.lang.Class.forName(Class.java:264) at org.apache.hadoop.hbase.io.asyncfs.ProtobufDecoder.(ProtobufDecoder.java:118) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.processWriteBlockResponse(FanOutOneBlockAsyncDFSOutputHelper.java:340) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.access$100(FanOutOneBlockAsyncDFSOutputHelper.java:112) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper$4.operationComplete(FanOutOneBlockAsyncDFSOutputHelper.java:424) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:590) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:557) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:492) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.addListener(DefaultPromise.java:185) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.initialize(FanOutOneBlockAsyncDFSOutputHelper.java:418) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.access$300(FanOutOneBlockAsyncDFSOutputHelper.java:112) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper$5.operationComplete(FanOutOneBlockAsyncDFSOutputHelper.java:476) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper$5.operationComplete(FanOutOneBlockAsyncDFSOutputHelper.java:471) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:590) at 
org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListeners0(DefaultPromise.java:583) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:559) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:492) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.setValue0(DefaultPromise.java:636) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.setSuccess0(DefaultPromise.java:625) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.trySuccess(DefaultPromise.java:105) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPromise.trySuccess(DefaultChannelPromise.java:84) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.fulfillConnectPromise(AbstractEpollChannel.java:653) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.finishConnect(AbstractEpollChannel.java:691) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.epollOutReady(AbstractEpollChannel.java:567) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:489) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:397) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) at java.lang.Thread.run(Thread.java:750) 2023-07-28 11:02:04,957 INFO [master/jenkins-hbase17:0:becomeActiveMaster] wal.AbstractFSWAL(806): New WAL 
/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/MasterData/WALs/jenkins-hbase17.apache.org,42003,1690542122606/jenkins-hbase17.apache.org%2C42003%2C1690542122606.1690542124838 2023-07-28 11:02:04,957 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:33153,DS-e7ee288f-f5db-43f2-8268-2e2173e5aaa1,DISK], DatanodeInfoWithStorage[127.0.0.1:37739,DS-f2bcc1e9-583d-427b-9ca1-0a917239420b,DISK], DatanodeInfoWithStorage[127.0.0.1:33989,DS-4da4325d-0cc7-45a0-afc9-d2f7687c0775,DISK]] 2023-07-28 11:02:04,958 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(7854): Opening region: {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''} 2023-07-28 11:02:04,958 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-28 11:02:04,961 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(7894): checking encryption for 1595e783b53d99cd5eef43b6debb2682 2023-07-28 11:02:04,963 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(7897): checking classloading for 1595e783b53d99cd5eef43b6debb2682 2023-07-28 11:02:05,023 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family proc of region 1595e783b53d99cd5eef43b6debb2682 2023-07-28 11:02:05,029 DEBUG [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] util.CommonFSUtils(522): Set storagePolicy=HOT for 
path=hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc 2023-07-28 11:02:05,067 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1595e783b53d99cd5eef43b6debb2682 columnFamilyName proc 2023-07-28 11:02:05,082 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(310): Store=1595e783b53d99cd5eef43b6debb2682/proc, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-28 11:02:05,087 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-07-28 11:02:05,089 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-07-28 11:02:05,108 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(1055): writing seq id for 
1595e783b53d99cd5eef43b6debb2682 2023-07-28 11:02:05,113 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-28 11:02:05,114 INFO [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(1072): Opened 1595e783b53d99cd5eef43b6debb2682; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10078728640, jitterRate=-0.06134524941444397}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-28 11:02:05,114 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] regionserver.HRegion(965): Region open journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-28 11:02:05,115 INFO [master/jenkins-hbase17:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(122): Constructor flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000, compactMin=4 2023-07-28 11:02:05,143 INFO [master/jenkins-hbase17:0:becomeActiveMaster] region.RegionProcedureStore(104): Starting the Region Procedure Store, number threads=5 2023-07-28 11:02:05,143 INFO [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(562): Starting 5 core workers (bigger of cpus/4 or 16) with max (burst) worker count=50 2023-07-28 11:02:05,147 INFO [master/jenkins-hbase17:0:becomeActiveMaster] region.RegionProcedureStore(255): Starting Region Procedure Store lease recovery... 
2023-07-28 11:02:05,149 INFO [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(582): Recovered RegionProcedureStore lease in 1 msec 2023-07-28 11:02:05,195 INFO [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(596): Loaded RegionProcedureStore in 45 msec 2023-07-28 11:02:05,196 INFO [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.RemoteProcedureDispatcher(96): Instantiated, coreThreads=3 (allowCoreThreadTimeOut=true), queueMaxSize=32, operationDelay=150 2023-07-28 11:02:05,233 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] assignment.AssignmentManager(253): hbase:meta replica znodes: [] 2023-07-28 11:02:05,239 INFO [master/jenkins-hbase17:0:becomeActiveMaster] master.RegionServerTracker(124): Starting RegionServerTracker; 0 have existing ServerCrashProcedures, 0 possibly 'live' servers, and 0 'splitting'. 2023-07-28 11:02:05,267 INFO [master/jenkins-hbase17:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-28 11:02:05,273 INFO [master/jenkins-hbase17:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 
2023-07-28 11:02:05,275 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:42003-0x101ab971e4e0000, quorum=127.0.0.1:57744, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/balancer 2023-07-28 11:02:05,282 INFO [master/jenkins-hbase17:0:becomeActiveMaster] normalizer.RegionNormalizerWorker(118): Normalizer rate limit set to unlimited 2023-07-28 11:02:05,286 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:42003-0x101ab971e4e0000, quorum=127.0.0.1:57744, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/normalizer 2023-07-28 11:02:05,289 DEBUG [Listener at localhost.localdomain/34871-EventThread] zookeeper.ZKWatcher(600): master:42003-0x101ab971e4e0000, quorum=127.0.0.1:57744, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-28 11:02:05,290 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:42003-0x101ab971e4e0000, quorum=127.0.0.1:57744, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/split 2023-07-28 11:02:05,291 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:42003-0x101ab971e4e0000, quorum=127.0.0.1:57744, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/merge 2023-07-28 11:02:05,307 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:42003-0x101ab971e4e0000, quorum=127.0.0.1:57744, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/snapshot-cleanup 2023-07-28 11:02:05,312 DEBUG [Listener at localhost.localdomain/34871-EventThread] zookeeper.ZKWatcher(600): master:42003-0x101ab971e4e0000, quorum=127.0.0.1:57744, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-28 11:02:05,312 DEBUG [Listener at localhost.localdomain/34871-EventThread] zookeeper.ZKWatcher(600): 
regionserver:46497-0x101ab971e4e0003, quorum=127.0.0.1:57744, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-28 11:02:05,312 DEBUG [Listener at localhost.localdomain/34871-EventThread] zookeeper.ZKWatcher(600): regionserver:38067-0x101ab971e4e0001, quorum=127.0.0.1:57744, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-28 11:02:05,312 DEBUG [Listener at localhost.localdomain/34871-EventThread] zookeeper.ZKWatcher(600): regionserver:34167-0x101ab971e4e0002, quorum=127.0.0.1:57744, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-28 11:02:05,312 DEBUG [Listener at localhost.localdomain/34871-EventThread] zookeeper.ZKWatcher(600): master:42003-0x101ab971e4e0000, quorum=127.0.0.1:57744, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-28 11:02:05,316 INFO [master/jenkins-hbase17:0:becomeActiveMaster] master.HMaster(744): Active/primary master=jenkins-hbase17.apache.org,42003,1690542122606, sessionid=0x101ab971e4e0000, setting cluster-up flag (Was=false) 2023-07-28 11:02:05,335 DEBUG [Listener at localhost.localdomain/34871-EventThread] zookeeper.ZKWatcher(600): master:42003-0x101ab971e4e0000, quorum=127.0.0.1:57744, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-28 11:02:05,340 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/flush-table-proc/acquired, /hbase/flush-table-proc/reached, /hbase/flush-table-proc/abort 2023-07-28 11:02:05,341 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase17.apache.org,42003,1690542122606 2023-07-28 11:02:05,352 DEBUG [Listener at localhost.localdomain/34871-EventThread] 
zookeeper.ZKWatcher(600): master:42003-0x101ab971e4e0000, quorum=127.0.0.1:57744, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-28 11:02:05,355 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/online-snapshot/acquired, /hbase/online-snapshot/reached, /hbase/online-snapshot/abort 2023-07-28 11:02:05,356 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase17.apache.org,42003,1690542122606 2023-07-28 11:02:05,359 WARN [master/jenkins-hbase17:0:becomeActiveMaster] snapshot.SnapshotManager(302): Couldn't delete working snapshot directory: hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/.hbase-snapshot/.tmp 2023-07-28 11:02:05,460 INFO [RS:1;jenkins-hbase17:34167] regionserver.HRegionServer(951): ClusterId : 8326d3cf-c816-4730-acb6-87f11280e158 2023-07-28 11:02:05,462 INFO [RS:0;jenkins-hbase17:38067] regionserver.HRegionServer(951): ClusterId : 8326d3cf-c816-4730-acb6-87f11280e158 2023-07-28 11:02:05,462 INFO [RS:2;jenkins-hbase17:46497] regionserver.HRegionServer(951): ClusterId : 8326d3cf-c816-4730-acb6-87f11280e158 2023-07-28 11:02:05,469 DEBUG [RS:2;jenkins-hbase17:46497] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-28 11:02:05,469 DEBUG [RS:1;jenkins-hbase17:34167] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-28 11:02:05,469 DEBUG [RS:0;jenkins-hbase17:38067] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-28 11:02:05,476 DEBUG [RS:2;jenkins-hbase17:46497] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-28 11:02:05,476 DEBUG [RS:0;jenkins-hbase17:38067] procedure.RegionServerProcedureManagerHost(45): Procedure 
flush-table-proc initialized 2023-07-28 11:02:05,476 DEBUG [RS:1;jenkins-hbase17:34167] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-28 11:02:05,477 DEBUG [RS:0;jenkins-hbase17:38067] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-28 11:02:05,476 DEBUG [RS:2;jenkins-hbase17:46497] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-28 11:02:05,477 DEBUG [RS:1;jenkins-hbase17:34167] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-28 11:02:05,480 DEBUG [RS:0;jenkins-hbase17:38067] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-28 11:02:05,480 DEBUG [RS:1;jenkins-hbase17:34167] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-28 11:02:05,483 DEBUG [RS:2;jenkins-hbase17:46497] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-28 11:02:05,482 DEBUG [RS:0;jenkins-hbase17:38067] zookeeper.ReadOnlyZKClient(139): Connect 0x24f517be to 127.0.0.1:57744 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-28 11:02:05,486 DEBUG [RS:1;jenkins-hbase17:34167] zookeeper.ReadOnlyZKClient(139): Connect 0x387e7ad2 to 127.0.0.1:57744 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-28 11:02:05,486 DEBUG [RS:2;jenkins-hbase17:46497] zookeeper.ReadOnlyZKClient(139): Connect 0x5e1d50db to 127.0.0.1:57744 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-28 11:02:05,513 DEBUG [RS:0;jenkins-hbase17:38067] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@2a32fd50, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, 
fallbackAllowed=false, bind address=null 2023-07-28 11:02:05,514 DEBUG [RS:0;jenkins-hbase17:38067] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@2e0d7cce, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase17.apache.org/136.243.18.41:0 2023-07-28 11:02:05,523 DEBUG [RS:1;jenkins-hbase17:34167] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@36b993ad, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-28 11:02:05,523 DEBUG [RS:1;jenkins-hbase17:34167] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@328611a2, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase17.apache.org/136.243.18.41:0 2023-07-28 11:02:05,524 DEBUG [RS:2;jenkins-hbase17:46497] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@50daa568, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-28 11:02:05,525 DEBUG [RS:2;jenkins-hbase17:46497] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@54e53bab, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase17.apache.org/136.243.18.41:0 2023-07-28 11:02:05,526 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT; 
InitMetaProcedure table=hbase:meta 2023-07-28 11:02:05,539 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_OPEN_REGION-master/jenkins-hbase17:0, corePoolSize=5, maxPoolSize=5 2023-07-28 11:02:05,539 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_CLOSE_REGION-master/jenkins-hbase17:0, corePoolSize=5, maxPoolSize=5 2023-07-28 11:02:05,539 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SERVER_OPERATIONS-master/jenkins-hbase17:0, corePoolSize=5, maxPoolSize=5 2023-07-28 11:02:05,540 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_META_SERVER_OPERATIONS-master/jenkins-hbase17:0, corePoolSize=5, maxPoolSize=5 2023-07-28 11:02:05,540 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=M_LOG_REPLAY_OPS-master/jenkins-hbase17:0, corePoolSize=10, maxPoolSize=10 2023-07-28 11:02:05,540 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-28 11:02:05,540 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_MERGE_OPERATIONS-master/jenkins-hbase17:0, corePoolSize=2, maxPoolSize=2 2023-07-28 11:02:05,540 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_TABLE_OPERATIONS-master/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-28 11:02:05,544 DEBUG [RS:2;jenkins-hbase17:46497] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:2;jenkins-hbase17:46497 2023-07-28 11:02:05,545 DEBUG [RS:1;jenkins-hbase17:34167] 
regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:1;jenkins-hbase17:34167 2023-07-28 11:02:05,547 DEBUG [RS:0;jenkins-hbase17:38067] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:0;jenkins-hbase17:38067 2023-07-28 11:02:05,548 INFO [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner; timeout=30000, timestamp=1690542155548 2023-07-28 11:02:05,550 INFO [RS:2;jenkins-hbase17:46497] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-28 11:02:05,551 INFO [RS:2;jenkins-hbase17:46497] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-28 11:02:05,550 INFO [RS:0;jenkins-hbase17:38067] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-28 11:02:05,551 INFO [RS:0;jenkins-hbase17:38067] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-28 11:02:05,551 INFO [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.DirScanPool(70): log_cleaner Cleaner pool size is 1 2023-07-28 11:02:05,550 INFO [RS:1;jenkins-hbase17:34167] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-28 11:02:05,551 INFO [RS:1;jenkins-hbase17:34167] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-28 11:02:05,551 DEBUG [RS:0;jenkins-hbase17:38067] regionserver.HRegionServer(1022): About to register with Master. 2023-07-28 11:02:05,551 DEBUG [RS:2;jenkins-hbase17:46497] regionserver.HRegionServer(1022): About to register with Master. 2023-07-28 11:02:05,551 DEBUG [RS:1;jenkins-hbase17:34167] regionserver.HRegionServer(1022): About to register with Master. 
2023-07-28 11:02:05,554 INFO [RS:0;jenkins-hbase17:38067] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase17.apache.org,42003,1690542122606 with isa=jenkins-hbase17.apache.org/136.243.18.41:38067, startcode=1690542124169 2023-07-28 11:02:05,554 INFO [RS:1;jenkins-hbase17:34167] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase17.apache.org,42003,1690542122606 with isa=jenkins-hbase17.apache.org/136.243.18.41:34167, startcode=1690542124242 2023-07-28 11:02:05,554 INFO [RS:2;jenkins-hbase17:46497] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase17.apache.org,42003,1690542122606 with isa=jenkins-hbase17.apache.org/136.243.18.41:46497, startcode=1690542124299 2023-07-28 11:02:05,556 INFO [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner 2023-07-28 11:02:05,556 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT, locked=true; InitMetaProcedure table=hbase:meta 2023-07-28 11:02:05,556 INFO [PEWorker-1] procedure.InitMetaProcedure(71): BOOTSTRAP: creating hbase:meta region 2023-07-28 11:02:05,561 INFO [PEWorker-1] util.FSTableDescriptors(128): Creating new hbase:meta table descriptor 'hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => 
'65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-07-28 11:02:05,564 INFO [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner 2023-07-28 11:02:05,564 INFO [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreWALCleaner 2023-07-28 11:02:05,565 INFO [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveProcedureWALCleaner 2023-07-28 11:02:05,565 INFO [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.LogCleaner(148): Creating 1 old WALs cleaner threads 2023-07-28 11:02:05,566 INFO [master/jenkins-hbase17:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=LogsCleaner, period=600000, unit=MILLISECONDS is enabled. 
2023-07-28 11:02:05,568 INFO [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.DirScanPool(70): hfile_cleaner Cleaner pool size is 2 2023-07-28 11:02:05,570 INFO [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreHFileCleaner 2023-07-28 11:02:05,570 INFO [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.HFileLinkCleaner 2023-07-28 11:02:05,575 INFO [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.snapshot.SnapshotHFileCleaner 2023-07-28 11:02:05,576 DEBUG [RS:2;jenkins-hbase17:46497] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-28 11:02:05,577 DEBUG [RS:0;jenkins-hbase17:38067] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-28 11:02:05,576 DEBUG [RS:1;jenkins-hbase17:34167] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-28 11:02:05,578 INFO [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner 2023-07-28 11:02:05,581 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.HFileCleaner(242): Starting for large file=Thread[master/jenkins-hbase17:0:becomeActiveMaster-HFileCleaner.large.0-1690542125580,5,FailOnTimeoutGroup] 2023-07-28 11:02:05,581 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] cleaner.HFileCleaner(257): Starting for small files=Thread[master/jenkins-hbase17:0:becomeActiveMaster-HFileCleaner.small.0-1690542125581,5,FailOnTimeoutGroup] 2023-07-28 11:02:05,581 INFO [master/jenkins-hbase17:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HFileCleaner, period=600000, 
unit=MILLISECONDS is enabled. 2023-07-28 11:02:05,581 INFO [master/jenkins-hbase17:0:becomeActiveMaster] master.HMaster(1461): Reopening regions with very high storeFileRefCount is disabled. Provide threshold value > 0 for hbase.regions.recovery.store.file.ref.count to enable it. 2023-07-28 11:02:05,583 INFO [master/jenkins-hbase17:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=ReplicationBarrierCleaner, period=43200000, unit=MILLISECONDS is enabled. 2023-07-28 11:02:05,583 INFO [master/jenkins-hbase17:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=SnapshotCleaner, period=1800000, unit=MILLISECONDS is enabled. 2023-07-28 11:02:05,614 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-07-28 11:02:05,616 INFO [PEWorker-1] util.FSTableDescriptors(135): Updated hbase:meta table descriptor to hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-07-28 11:02:05,616 INFO [PEWorker-1] regionserver.HRegion(7675): creating {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE 
=> '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b 2023-07-28 11:02:05,650 DEBUG [PEWorker-1] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-28 11:02:05,655 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-28 11:02:05,659 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/hbase/meta/1588230740/info 2023-07-28 11:02:05,660 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-28 11:02:05,661 INFO 
[StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-28 11:02:05,661 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-28 11:02:05,662 INFO [RS-EventLoopGroup-1-2] ipc.ServerRpcConnection(540): Connection from 136.243.18.41:38455, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.0 (auth:SIMPLE), service=RegionServerStatusService 2023-07-28 11:02:05,662 INFO [RS-EventLoopGroup-1-1] ipc.ServerRpcConnection(540): Connection from 136.243.18.41:56341, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.1 (auth:SIMPLE), service=RegionServerStatusService 2023-07-28 11:02:05,662 INFO [RS-EventLoopGroup-1-3] ipc.ServerRpcConnection(540): Connection from 136.243.18.41:47633, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.2 (auth:SIMPLE), service=RegionServerStatusService 2023-07-28 11:02:05,665 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/hbase/meta/1588230740/rep_barrier 2023-07-28 11:02:05,665 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window 
org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-28 11:02:05,667 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-28 11:02:05,667 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-28 11:02:05,670 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/hbase/meta/1588230740/table 2023-07-28 11:02:05,670 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-28 11:02:05,673 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, 
verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-28 11:02:05,676 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=42003] master.ServerManager(394): Registering regionserver=jenkins-hbase17.apache.org,34167,1690542124242 2023-07-28 11:02:05,677 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/hbase/meta/1588230740 2023-07-28 11:02:05,678 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/hbase/meta/1588230740 2023-07-28 11:02:05,682 DEBUG [PEWorker-1] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (16.0 M)) instead. 2023-07-28 11:02:05,684 DEBUG [PEWorker-1] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-28 11:02:05,696 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=42003] master.ServerManager(394): Registering regionserver=jenkins-hbase17.apache.org,46497,1690542124299 2023-07-28 11:02:05,697 DEBUG [PEWorker-1] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/hbase/meta/1588230740/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-28 11:02:05,697 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=42003] master.ServerManager(394): Registering regionserver=jenkins-hbase17.apache.org,38067,1690542124169 2023-07-28 11:02:05,700 INFO [PEWorker-1] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=131072, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9968602880, 
jitterRate=-0.0716015100479126}}}, FlushLargeStoresPolicy{flushSizeLowerBound=16777216} 2023-07-28 11:02:05,700 DEBUG [PEWorker-1] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-28 11:02:05,700 DEBUG [PEWorker-1] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-28 11:02:05,700 INFO [PEWorker-1] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-28 11:02:05,700 DEBUG [PEWorker-1] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-28 11:02:05,700 DEBUG [PEWorker-1] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-28 11:02:05,700 DEBUG [PEWorker-1] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-28 11:02:05,702 INFO [PEWorker-1] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-28 11:02:05,702 DEBUG [RS:2;jenkins-hbase17:46497] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b 2023-07-28 11:02:05,702 DEBUG [PEWorker-1] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-28 11:02:05,702 DEBUG [RS:2;jenkins-hbase17:46497] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost.localdomain:39247 2023-07-28 11:02:05,702 DEBUG [RS:2;jenkins-hbase17:46497] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=40945 2023-07-28 11:02:05,709 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_ASSIGN_META, locked=true; InitMetaProcedure table=hbase:meta 2023-07-28 11:02:05,709 INFO [PEWorker-1] procedure.InitMetaProcedure(103): Going to assign meta 2023-07-28 11:02:05,713 DEBUG [RS:1;jenkins-hbase17:34167] regionserver.HRegionServer(1595): Config from master: 
hbase.rootdir=hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b 2023-07-28 11:02:05,713 DEBUG [RS:1;jenkins-hbase17:34167] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost.localdomain:39247 2023-07-28 11:02:05,713 DEBUG [RS:1;jenkins-hbase17:34167] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=40945 2023-07-28 11:02:05,714 DEBUG [Listener at localhost.localdomain/34871-EventThread] zookeeper.ZKWatcher(600): master:42003-0x101ab971e4e0000, quorum=127.0.0.1:57744, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-28 11:02:05,714 DEBUG [RS:0;jenkins-hbase17:38067] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b 2023-07-28 11:02:05,714 DEBUG [RS:0;jenkins-hbase17:38067] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost.localdomain:39247 2023-07-28 11:02:05,715 DEBUG [RS:0;jenkins-hbase17:38067] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=40945 2023-07-28 11:02:05,721 DEBUG [RS:1;jenkins-hbase17:34167] zookeeper.ZKUtil(162): regionserver:34167-0x101ab971e4e0002, quorum=127.0.0.1:57744, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,34167,1690542124242 2023-07-28 11:02:05,721 WARN [RS:1;jenkins-hbase17:34167] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
2023-07-28 11:02:05,721 INFO [RS:1;jenkins-hbase17:34167] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-28 11:02:05,721 DEBUG [RS:0;jenkins-hbase17:38067] zookeeper.ZKUtil(162): regionserver:38067-0x101ab971e4e0001, quorum=127.0.0.1:57744, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,38067,1690542124169 2023-07-28 11:02:05,721 WARN [RS:0;jenkins-hbase17:38067] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-28 11:02:05,722 DEBUG [RS:1;jenkins-hbase17:34167] regionserver.HRegionServer(1948): logDir=hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/WALs/jenkins-hbase17.apache.org,34167,1690542124242 2023-07-28 11:02:05,723 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase17.apache.org,34167,1690542124242] 2023-07-28 11:02:05,723 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase17.apache.org,38067,1690542124169] 2023-07-28 11:02:05,723 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase17.apache.org,46497,1690542124299] 2023-07-28 11:02:05,725 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN}] 2023-07-28 11:02:05,727 DEBUG [RS:2;jenkins-hbase17:46497] zookeeper.ZKUtil(162): regionserver:46497-0x101ab971e4e0003, quorum=127.0.0.1:57744, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,46497,1690542124299 2023-07-28 11:02:05,727 WARN [RS:2;jenkins-hbase17:46497] hbase.ZNodeClearer(69): Environment 
variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-28 11:02:05,722 INFO [RS:0;jenkins-hbase17:38067] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-28 11:02:05,727 INFO [RS:2;jenkins-hbase17:46497] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-28 11:02:05,733 DEBUG [RS:0;jenkins-hbase17:38067] regionserver.HRegionServer(1948): logDir=hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/WALs/jenkins-hbase17.apache.org,38067,1690542124169 2023-07-28 11:02:05,733 DEBUG [RS:2;jenkins-hbase17:46497] regionserver.HRegionServer(1948): logDir=hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/WALs/jenkins-hbase17.apache.org,46497,1690542124299 2023-07-28 11:02:05,751 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN 2023-07-28 11:02:05,752 DEBUG [RS:1;jenkins-hbase17:34167] zookeeper.ZKUtil(162): regionserver:34167-0x101ab971e4e0002, quorum=127.0.0.1:57744, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,38067,1690542124169 2023-07-28 11:02:05,752 DEBUG [RS:2;jenkins-hbase17:46497] zookeeper.ZKUtil(162): regionserver:46497-0x101ab971e4e0003, quorum=127.0.0.1:57744, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,38067,1690542124169 2023-07-28 11:02:05,752 DEBUG [RS:0;jenkins-hbase17:38067] zookeeper.ZKUtil(162): regionserver:38067-0x101ab971e4e0001, quorum=127.0.0.1:57744, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,38067,1690542124169 2023-07-28 11:02:05,753 DEBUG [RS:2;jenkins-hbase17:46497] 
zookeeper.ZKUtil(162): regionserver:46497-0x101ab971e4e0003, quorum=127.0.0.1:57744, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,46497,1690542124299 2023-07-28 11:02:05,753 DEBUG [RS:1;jenkins-hbase17:34167] zookeeper.ZKUtil(162): regionserver:34167-0x101ab971e4e0002, quorum=127.0.0.1:57744, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,46497,1690542124299 2023-07-28 11:02:05,753 DEBUG [RS:0;jenkins-hbase17:38067] zookeeper.ZKUtil(162): regionserver:38067-0x101ab971e4e0001, quorum=127.0.0.1:57744, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,46497,1690542124299 2023-07-28 11:02:05,754 DEBUG [RS:2;jenkins-hbase17:46497] zookeeper.ZKUtil(162): regionserver:46497-0x101ab971e4e0003, quorum=127.0.0.1:57744, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,34167,1690542124242 2023-07-28 11:02:05,754 DEBUG [RS:0;jenkins-hbase17:38067] zookeeper.ZKUtil(162): regionserver:38067-0x101ab971e4e0001, quorum=127.0.0.1:57744, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,34167,1690542124242 2023-07-28 11:02:05,754 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN; state=OFFLINE, location=null; forceNewPlan=false, retain=false 2023-07-28 11:02:05,754 DEBUG [RS:1;jenkins-hbase17:34167] zookeeper.ZKUtil(162): regionserver:34167-0x101ab971e4e0002, quorum=127.0.0.1:57744, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase17.apache.org,34167,1690542124242 2023-07-28 11:02:05,764 DEBUG [RS:1;jenkins-hbase17:34167] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-28 11:02:05,764 DEBUG [RS:0;jenkins-hbase17:38067] regionserver.Replication(139): Replication stats-in-log 
period=300 seconds 2023-07-28 11:02:05,764 DEBUG [RS:2;jenkins-hbase17:46497] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-28 11:02:05,774 INFO [RS:1;jenkins-hbase17:34167] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-28 11:02:05,775 INFO [RS:2;jenkins-hbase17:46497] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-28 11:02:05,774 INFO [RS:0;jenkins-hbase17:38067] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-28 11:02:05,824 INFO [RS:1;jenkins-hbase17:34167] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-28 11:02:05,824 INFO [RS:2;jenkins-hbase17:46497] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-28 11:02:05,824 INFO [RS:0;jenkins-hbase17:38067] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-28 11:02:05,830 INFO [RS:2;jenkins-hbase17:46497] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-28 11:02:05,830 INFO [RS:1;jenkins-hbase17:34167] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-28 11:02:05,830 INFO [RS:0;jenkins-hbase17:38067] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-28 11:02:05,831 INFO [RS:1;jenkins-hbase17:34167] 
hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-28 11:02:05,831 INFO [RS:2;jenkins-hbase17:46497] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-28 11:02:05,832 INFO [RS:0;jenkins-hbase17:38067] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-28 11:02:05,833 INFO [RS:2;jenkins-hbase17:46497] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-28 11:02:05,834 INFO [RS:0;jenkins-hbase17:38067] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-28 11:02:05,834 INFO [RS:1;jenkins-hbase17:34167] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-28 11:02:05,843 INFO [RS:0;jenkins-hbase17:38067] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-28 11:02:05,843 INFO [RS:1;jenkins-hbase17:34167] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-28 11:02:05,843 INFO [RS:2;jenkins-hbase17:46497] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
2023-07-28 11:02:05,843 DEBUG [RS:0;jenkins-hbase17:38067] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-28 11:02:05,843 DEBUG [RS:1;jenkins-hbase17:34167] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-28 11:02:05,844 DEBUG [RS:0;jenkins-hbase17:38067] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-28 11:02:05,844 DEBUG [RS:1;jenkins-hbase17:34167] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-28 11:02:05,844 DEBUG [RS:0;jenkins-hbase17:38067] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-28 11:02:05,844 DEBUG [RS:2;jenkins-hbase17:46497] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-28 11:02:05,844 DEBUG [RS:0;jenkins-hbase17:38067] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-28 11:02:05,844 DEBUG [RS:2;jenkins-hbase17:46497] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-28 11:02:05,844 DEBUG [RS:0;jenkins-hbase17:38067] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-28 11:02:05,844 DEBUG [RS:2;jenkins-hbase17:46497] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1 2023-07-28 
11:02:05,844 DEBUG [RS:0;jenkins-hbase17:38067] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase17:0, corePoolSize=2, maxPoolSize=2
2023-07-28 11:02:05,844 DEBUG [RS:2;jenkins-hbase17:46497] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1
2023-07-28 11:02:05,844 DEBUG [RS:0;jenkins-hbase17:38067] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1
2023-07-28 11:02:05,844 DEBUG [RS:2;jenkins-hbase17:46497] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1
2023-07-28 11:02:05,844 DEBUG [RS:1;jenkins-hbase17:34167] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1
2023-07-28 11:02:05,845 DEBUG [RS:2;jenkins-hbase17:46497] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase17:0, corePoolSize=2, maxPoolSize=2
2023-07-28 11:02:05,845 DEBUG [RS:1;jenkins-hbase17:34167] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1
2023-07-28 11:02:05,845 DEBUG [RS:2;jenkins-hbase17:46497] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1
2023-07-28 11:02:05,845 DEBUG [RS:1;jenkins-hbase17:34167] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1
2023-07-28 11:02:05,845 DEBUG [RS:2;jenkins-hbase17:46497] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1
2023-07-28 11:02:05,845 DEBUG [RS:1;jenkins-hbase17:34167] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase17:0, corePoolSize=2, maxPoolSize=2
2023-07-28 11:02:05,845 DEBUG [RS:2;jenkins-hbase17:46497] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1
2023-07-28 11:02:05,845 DEBUG [RS:1;jenkins-hbase17:34167] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1
2023-07-28 11:02:05,845 DEBUG [RS:2;jenkins-hbase17:46497] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1
2023-07-28 11:02:05,845 DEBUG [RS:1;jenkins-hbase17:34167] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1
2023-07-28 11:02:05,844 DEBUG [RS:0;jenkins-hbase17:38067] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1
2023-07-28 11:02:05,845 DEBUG [RS:1;jenkins-hbase17:34167] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1
2023-07-28 11:02:05,845 DEBUG [RS:0;jenkins-hbase17:38067] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1
2023-07-28 11:02:05,849 DEBUG [RS:1;jenkins-hbase17:34167] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1
2023-07-28 11:02:05,849 DEBUG [RS:0;jenkins-hbase17:38067] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase17:0, corePoolSize=1, maxPoolSize=1
2023-07-28 11:02:05,850 INFO [RS:2;jenkins-hbase17:46497] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled.
2023-07-28 11:02:05,850 INFO [RS:1;jenkins-hbase17:34167] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled.
2023-07-28 11:02:05,850 INFO [RS:2;jenkins-hbase17:46497] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled.
2023-07-28 11:02:05,850 INFO [RS:1;jenkins-hbase17:34167] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled.
2023-07-28 11:02:05,850 INFO [RS:2;jenkins-hbase17:46497] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled.
2023-07-28 11:02:05,850 INFO [RS:1;jenkins-hbase17:34167] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled.
2023-07-28 11:02:05,851 INFO [RS:0;jenkins-hbase17:38067] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled.
2023-07-28 11:02:05,851 INFO [RS:0;jenkins-hbase17:38067] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled.
2023-07-28 11:02:05,851 INFO [RS:0;jenkins-hbase17:38067] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled.
2023-07-28 11:02:05,865 INFO [RS:1;jenkins-hbase17:34167] regionserver.HeapMemoryManager(209): Starting, tuneOn=false
2023-07-28 11:02:05,865 INFO [RS:0;jenkins-hbase17:38067] regionserver.HeapMemoryManager(209): Starting, tuneOn=false
2023-07-28 11:02:05,867 INFO [RS:2;jenkins-hbase17:46497] regionserver.HeapMemoryManager(209): Starting, tuneOn=false
2023-07-28 11:02:05,869 INFO [RS:1;jenkins-hbase17:34167] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase17.apache.org,34167,1690542124242-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled.
2023-07-28 11:02:05,869 INFO [RS:2;jenkins-hbase17:46497] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase17.apache.org,46497,1690542124299-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled.
2023-07-28 11:02:05,869 INFO [RS:0;jenkins-hbase17:38067] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase17.apache.org,38067,1690542124169-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled.
2023-07-28 11:02:05,889 INFO [RS:2;jenkins-hbase17:46497] regionserver.Replication(203): jenkins-hbase17.apache.org,46497,1690542124299 started
2023-07-28 11:02:05,889 INFO [RS:2;jenkins-hbase17:46497] regionserver.HRegionServer(1637): Serving as jenkins-hbase17.apache.org,46497,1690542124299, RpcServer on jenkins-hbase17.apache.org/136.243.18.41:46497, sessionid=0x101ab971e4e0003
2023-07-28 11:02:05,889 DEBUG [RS:2;jenkins-hbase17:46497] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting
2023-07-28 11:02:05,889 DEBUG [RS:2;jenkins-hbase17:46497] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase17.apache.org,46497,1690542124299
2023-07-28 11:02:05,889 DEBUG [RS:2;jenkins-hbase17:46497] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase17.apache.org,46497,1690542124299'
2023-07-28 11:02:05,890 DEBUG [RS:2;jenkins-hbase17:46497] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort'
2023-07-28 11:02:05,890 DEBUG [RS:2;jenkins-hbase17:46497] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired'
2023-07-28 11:02:05,891 DEBUG [RS:2;jenkins-hbase17:46497] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started
2023-07-28 11:02:05,891 DEBUG [RS:2;jenkins-hbase17:46497] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting
2023-07-28 11:02:05,891 DEBUG [RS:2;jenkins-hbase17:46497] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase17.apache.org,46497,1690542124299
2023-07-28 11:02:05,891 DEBUG [RS:2;jenkins-hbase17:46497] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase17.apache.org,46497,1690542124299'
2023-07-28 11:02:05,891 DEBUG [RS:2;jenkins-hbase17:46497] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort'
2023-07-28 11:02:05,891 DEBUG [RS:2;jenkins-hbase17:46497] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired'
2023-07-28 11:02:05,892 DEBUG [RS:2;jenkins-hbase17:46497] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started
2023-07-28 11:02:05,892 INFO [RS:2;jenkins-hbase17:46497] quotas.RegionServerRpcQuotaManager(63): Quota support disabled
2023-07-28 11:02:05,892 INFO [RS:2;jenkins-hbase17:46497] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager.
2023-07-28 11:02:05,892 INFO [RS:1;jenkins-hbase17:34167] regionserver.Replication(203): jenkins-hbase17.apache.org,34167,1690542124242 started
2023-07-28 11:02:05,892 INFO [RS:1;jenkins-hbase17:34167] regionserver.HRegionServer(1637): Serving as jenkins-hbase17.apache.org,34167,1690542124242, RpcServer on jenkins-hbase17.apache.org/136.243.18.41:34167, sessionid=0x101ab971e4e0002
2023-07-28 11:02:05,892 DEBUG [RS:1;jenkins-hbase17:34167] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting
2023-07-28 11:02:05,893 DEBUG [RS:1;jenkins-hbase17:34167] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase17.apache.org,34167,1690542124242
2023-07-28 11:02:05,899 DEBUG [RS:1;jenkins-hbase17:34167] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase17.apache.org,34167,1690542124242'
2023-07-28 11:02:05,900 DEBUG [RS:1;jenkins-hbase17:34167] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort'
2023-07-28 11:02:05,900 DEBUG [RS:1;jenkins-hbase17:34167] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired'
2023-07-28 11:02:05,901 INFO [RS:0;jenkins-hbase17:38067] regionserver.Replication(203): jenkins-hbase17.apache.org,38067,1690542124169 started
2023-07-28 11:02:05,901 INFO [RS:0;jenkins-hbase17:38067] regionserver.HRegionServer(1637): Serving as jenkins-hbase17.apache.org,38067,1690542124169, RpcServer on jenkins-hbase17.apache.org/136.243.18.41:38067, sessionid=0x101ab971e4e0001
2023-07-28 11:02:05,901 DEBUG [RS:0;jenkins-hbase17:38067] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting
2023-07-28 11:02:05,901 DEBUG [RS:1;jenkins-hbase17:34167] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started
2023-07-28 11:02:05,901 DEBUG [RS:0;jenkins-hbase17:38067] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase17.apache.org,38067,1690542124169
2023-07-28 11:02:05,901 DEBUG [RS:1;jenkins-hbase17:34167] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting
2023-07-28 11:02:05,901 DEBUG [RS:0;jenkins-hbase17:38067] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase17.apache.org,38067,1690542124169'
2023-07-28 11:02:05,901 DEBUG [RS:1;jenkins-hbase17:34167] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase17.apache.org,34167,1690542124242
2023-07-28 11:02:05,902 DEBUG [RS:0;jenkins-hbase17:38067] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort'
2023-07-28 11:02:05,902 DEBUG [RS:1;jenkins-hbase17:34167] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase17.apache.org,34167,1690542124242'
2023-07-28 11:02:05,902 DEBUG [RS:1;jenkins-hbase17:34167] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort'
2023-07-28 11:02:05,902 DEBUG [RS:0;jenkins-hbase17:38067] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired'
2023-07-28 11:02:05,902 DEBUG [RS:1;jenkins-hbase17:34167] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired'
2023-07-28 11:02:05,903 DEBUG [RS:0;jenkins-hbase17:38067] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started
2023-07-28 11:02:05,903 DEBUG [RS:0;jenkins-hbase17:38067] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting
2023-07-28 11:02:05,903 DEBUG [RS:1;jenkins-hbase17:34167] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started
2023-07-28 11:02:05,903 DEBUG [RS:0;jenkins-hbase17:38067] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase17.apache.org,38067,1690542124169
2023-07-28 11:02:05,907 DEBUG [jenkins-hbase17:42003] assignment.AssignmentManager(2176): Processing assignQueue; systemServersCount=3, allServersCount=3
2023-07-28 11:02:05,903 INFO [RS:1;jenkins-hbase17:34167] quotas.RegionServerRpcQuotaManager(63): Quota support disabled
2023-07-28 11:02:05,907 DEBUG [RS:0;jenkins-hbase17:38067] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase17.apache.org,38067,1690542124169'
2023-07-28 11:02:05,907 INFO [RS:1;jenkins-hbase17:34167] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager.
2023-07-28 11:02:05,908 DEBUG [RS:0;jenkins-hbase17:38067] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort'
2023-07-28 11:02:05,909 DEBUG [RS:0;jenkins-hbase17:38067] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired'
2023-07-28 11:02:05,909 DEBUG [RS:0;jenkins-hbase17:38067] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started
2023-07-28 11:02:05,909 INFO [RS:0;jenkins-hbase17:38067] quotas.RegionServerRpcQuotaManager(63): Quota support disabled
2023-07-28 11:02:05,909 INFO [RS:0;jenkins-hbase17:38067] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager.
2023-07-28 11:02:05,913 DEBUG [jenkins-hbase17:42003] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase17.apache.org=0} racks are {/default-rack=0}
2023-07-28 11:02:05,920 DEBUG [jenkins-hbase17:42003] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0
2023-07-28 11:02:05,920 DEBUG [jenkins-hbase17:42003] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0
2023-07-28 11:02:05,920 DEBUG [jenkins-hbase17:42003] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0
2023-07-28 11:02:05,920 DEBUG [jenkins-hbase17:42003] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1
2023-07-28 11:02:05,923 INFO [PEWorker-4] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase17.apache.org,34167,1690542124242, state=OPENING
2023-07-28 11:02:05,929 DEBUG [PEWorker-4] zookeeper.MetaTableLocator(240): hbase:meta region location doesn't exist, create it
2023-07-28 11:02:05,931 DEBUG [Listener at localhost.localdomain/34871-EventThread] zookeeper.ZKWatcher(600): master:42003-0x101ab971e4e0000, quorum=127.0.0.1:57744, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase
2023-07-28 11:02:05,931 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED
2023-07-28 11:02:05,935 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=3, ppid=2, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase17.apache.org,34167,1690542124242}]
2023-07-28 11:02:06,002 INFO [RS:2;jenkins-hbase17:46497] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase17.apache.org%2C46497%2C1690542124299, suffix=, logDir=hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/WALs/jenkins-hbase17.apache.org,46497,1690542124299, archiveDir=hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/oldWALs, maxLogs=32
2023-07-28 11:02:06,010 INFO [RS:1;jenkins-hbase17:34167] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase17.apache.org%2C34167%2C1690542124242, suffix=, logDir=hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/WALs/jenkins-hbase17.apache.org,34167,1690542124242, archiveDir=hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/oldWALs, maxLogs=32
2023-07-28 11:02:06,011 INFO [RS:0;jenkins-hbase17:38067] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase17.apache.org%2C38067%2C1690542124169, suffix=, logDir=hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/WALs/jenkins-hbase17.apache.org,38067,1690542124169, archiveDir=hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/oldWALs, maxLogs=32
2023-07-28 11:02:06,029 DEBUG [RS-EventLoopGroup-5-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:33153,DS-e7ee288f-f5db-43f2-8268-2e2173e5aaa1,DISK]
2023-07-28 11:02:06,029 DEBUG [RS-EventLoopGroup-5-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:37739,DS-f2bcc1e9-583d-427b-9ca1-0a917239420b,DISK]
2023-07-28 11:02:06,029 DEBUG [RS-EventLoopGroup-5-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:33989,DS-4da4325d-0cc7-45a0-afc9-d2f7687c0775,DISK]
2023-07-28 11:02:06,038 INFO [RS:2;jenkins-hbase17:46497] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/WALs/jenkins-hbase17.apache.org,46497,1690542124299/jenkins-hbase17.apache.org%2C46497%2C1690542124299.1690542126005
2023-07-28 11:02:06,038 DEBUG [RS:2;jenkins-hbase17:46497] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:37739,DS-f2bcc1e9-583d-427b-9ca1-0a917239420b,DISK], DatanodeInfoWithStorage[127.0.0.1:33153,DS-e7ee288f-f5db-43f2-8268-2e2173e5aaa1,DISK], DatanodeInfoWithStorage[127.0.0.1:33989,DS-4da4325d-0cc7-45a0-afc9-d2f7687c0775,DISK]]
2023-07-28 11:02:06,054 DEBUG [RS-EventLoopGroup-5-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:37739,DS-f2bcc1e9-583d-427b-9ca1-0a917239420b,DISK]
2023-07-28 11:02:06,055 DEBUG [RS-EventLoopGroup-5-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:33153,DS-e7ee288f-f5db-43f2-8268-2e2173e5aaa1,DISK]
2023-07-28 11:02:06,057 DEBUG [RS-EventLoopGroup-5-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:33989,DS-4da4325d-0cc7-45a0-afc9-d2f7687c0775,DISK]
2023-07-28 11:02:06,059 DEBUG [RS-EventLoopGroup-5-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:33153,DS-e7ee288f-f5db-43f2-8268-2e2173e5aaa1,DISK]
2023-07-28 11:02:06,060 DEBUG [RS-EventLoopGroup-5-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:33989,DS-4da4325d-0cc7-45a0-afc9-d2f7687c0775,DISK]
2023-07-28 11:02:06,061 DEBUG [RS-EventLoopGroup-5-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:37739,DS-f2bcc1e9-583d-427b-9ca1-0a917239420b,DISK]
2023-07-28 11:02:06,066 INFO [RS:1;jenkins-hbase17:34167] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/WALs/jenkins-hbase17.apache.org,34167,1690542124242/jenkins-hbase17.apache.org%2C34167%2C1690542124242.1690542126012
2023-07-28 11:02:06,067 INFO [RS:0;jenkins-hbase17:38067] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/WALs/jenkins-hbase17.apache.org,38067,1690542124169/jenkins-hbase17.apache.org%2C38067%2C1690542124169.1690542126013
2023-07-28 11:02:06,068 DEBUG [RS:1;jenkins-hbase17:34167] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:33989,DS-4da4325d-0cc7-45a0-afc9-d2f7687c0775,DISK], DatanodeInfoWithStorage[127.0.0.1:37739,DS-f2bcc1e9-583d-427b-9ca1-0a917239420b,DISK], DatanodeInfoWithStorage[127.0.0.1:33153,DS-e7ee288f-f5db-43f2-8268-2e2173e5aaa1,DISK]]
2023-07-28 11:02:06,068 DEBUG [RS:0;jenkins-hbase17:38067] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:33153,DS-e7ee288f-f5db-43f2-8268-2e2173e5aaa1,DISK], DatanodeInfoWithStorage[127.0.0.1:33989,DS-4da4325d-0cc7-45a0-afc9-d2f7687c0775,DISK], DatanodeInfoWithStorage[127.0.0.1:37739,DS-f2bcc1e9-583d-427b-9ca1-0a917239420b,DISK]]
2023-07-28 11:02:06,123 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase17.apache.org,34167,1690542124242
2023-07-28 11:02:06,125 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false
2023-07-28 11:02:06,128 INFO [RS-EventLoopGroup-4-2] ipc.ServerRpcConnection(540): Connection from 136.243.18.41:45498, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService
2023-07-28 11:02:06,144 INFO [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740
2023-07-28 11:02:06,145 INFO [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider
2023-07-28 11:02:06,149 INFO [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase17.apache.org%2C34167%2C1690542124242.meta, suffix=.meta, logDir=hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/WALs/jenkins-hbase17.apache.org,34167,1690542124242, archiveDir=hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/oldWALs, maxLogs=32
2023-07-28 11:02:06,169 DEBUG [RS-EventLoopGroup-5-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:33153,DS-e7ee288f-f5db-43f2-8268-2e2173e5aaa1,DISK]
2023-07-28 11:02:06,174 DEBUG [RS-EventLoopGroup-5-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:37739,DS-f2bcc1e9-583d-427b-9ca1-0a917239420b,DISK]
2023-07-28 11:02:06,176 DEBUG [RS-EventLoopGroup-5-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:33989,DS-4da4325d-0cc7-45a0-afc9-d2f7687c0775,DISK]
2023-07-28 11:02:06,189 INFO [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/WALs/jenkins-hbase17.apache.org,34167,1690542124242/jenkins-hbase17.apache.org%2C34167%2C1690542124242.meta.1690542126151.meta
2023-07-28 11:02:06,190 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:33153,DS-e7ee288f-f5db-43f2-8268-2e2173e5aaa1,DISK], DatanodeInfoWithStorage[127.0.0.1:37739,DS-f2bcc1e9-583d-427b-9ca1-0a917239420b,DISK], DatanodeInfoWithStorage[127.0.0.1:33989,DS-4da4325d-0cc7-45a0-afc9-d2f7687c0775,DISK]]
2023-07-28 11:02:06,190 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''}
2023-07-28 11:02:06,192 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911
2023-07-28 11:02:06,209 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService
2023-07-28 11:02:06,212 INFO [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully.
2023-07-28 11:02:06,218 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740
2023-07-28 11:02:06,218 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable
2023-07-28 11:02:06,218 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7894): checking encryption for 1588230740
2023-07-28 11:02:06,218 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7897): checking classloading for 1588230740
2023-07-28 11:02:06,221 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740
2023-07-28 11:02:06,225 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/hbase/meta/1588230740/info
2023-07-28 11:02:06,225 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/hbase/meta/1588230740/info
2023-07-28 11:02:06,225 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info
2023-07-28 11:02:06,226 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE
2023-07-28 11:02:06,227 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740
2023-07-28 11:02:06,228 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/hbase/meta/1588230740/rep_barrier
2023-07-28 11:02:06,228 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/hbase/meta/1588230740/rep_barrier
2023-07-28 11:02:06,229 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier
2023-07-28 11:02:06,230 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE
2023-07-28 11:02:06,231 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740
2023-07-28 11:02:06,232 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/hbase/meta/1588230740/table
2023-07-28 11:02:06,232 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/hbase/meta/1588230740/table
2023-07-28 11:02:06,233 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table
2023-07-28 11:02:06,233 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE
2023-07-28 11:02:06,235 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/hbase/meta/1588230740
2023-07-28 11:02:06,238 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/hbase/meta/1588230740
2023-07-28 11:02:06,241 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (16.0 M)) instead.
2023-07-28 11:02:06,244 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1055): writing seq id for 1588230740
2023-07-28 11:02:06,245 INFO [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=131072, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9510087520, jitterRate=-0.1143040806055069}}}, FlushLargeStoresPolicy{flushSizeLowerBound=16777216}
2023-07-28 11:02:06,245 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(965): Region open journal for 1588230740:
2023-07-28 11:02:06,259 INFO [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:meta,,1.1588230740, pid=3, masterSystemTime=1690542126113
2023-07-28 11:02:06,275 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:meta,,1.1588230740
2023-07-28 11:02:06,276 INFO [RS_OPEN_META-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740
2023-07-28 11:02:06,276 INFO [PEWorker-5] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase17.apache.org,34167,1690542124242, state=OPEN
2023-07-28 11:02:06,278 DEBUG [Listener at localhost.localdomain/34871-EventThread] zookeeper.ZKWatcher(600): master:42003-0x101ab971e4e0000, quorum=127.0.0.1:57744, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server
2023-07-28 11:02:06,279 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED
2023-07-28 11:02:06,286 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=3, resume processing ppid=2
2023-07-28 11:02:06,286 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=3, ppid=2, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase17.apache.org,34167,1690542124242 in 344 msec
2023-07-28 11:02:06,293 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=2, resume processing ppid=1
2023-07-28 11:02:06,294 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=2, ppid=1, state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN in 564 msec
2023-07-28 11:02:06,299 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=1, state=SUCCESS; InitMetaProcedure table=hbase:meta in 868 msec
2023-07-28 11:02:06,299 INFO [master/jenkins-hbase17:0:becomeActiveMaster] master.HMaster(953): Wait for region servers to report in: status=null, state=RUNNING, startTime=1690542126299, completionTime=-1
2023-07-28 11:02:06,299 INFO [master/jenkins-hbase17:0:becomeActiveMaster] master.ServerManager(821): Finished waiting on RegionServer count=3; waited=0ms, expected min=3 server(s), max=3 server(s), master is running
2023-07-28 11:02:06,299 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] assignment.AssignmentManager(1517): Joining cluster...
2023-07-28 11:02:06,354 DEBUG [hconnection-0x2594ed06-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false
2023-07-28 11:02:06,357 INFO [RS-EventLoopGroup-4-3] ipc.ServerRpcConnection(540): Connection from 136.243.18.41:45512, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService
2023-07-28 11:02:06,375 INFO [master/jenkins-hbase17:0:becomeActiveMaster] assignment.AssignmentManager(1529): Number of RegionServers=3
2023-07-28 11:02:06,375 INFO [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$RegionInTransitionChore; timeout=60000, timestamp=1690542186375
2023-07-28 11:02:06,375 INFO [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$DeadServerMetricRegionChore; timeout=120000, timestamp=1690542246375
2023-07-28 11:02:06,375 INFO [master/jenkins-hbase17:0:becomeActiveMaster] assignment.AssignmentManager(1536): Joined the cluster in 75 msec
2023-07-28 11:02:06,407 INFO [master/jenkins-hbase17:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase17.apache.org,42003,1690542122606-ClusterStatusChore, period=60000, unit=MILLISECONDS is enabled.
2023-07-28 11:02:06,407 INFO [master/jenkins-hbase17:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase17.apache.org,42003,1690542122606-BalancerChore, period=300000, unit=MILLISECONDS is enabled.
2023-07-28 11:02:06,408 INFO [master/jenkins-hbase17:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase17.apache.org,42003,1690542122606-RegionNormalizerChore, period=300000, unit=MILLISECONDS is enabled.
2023-07-28 11:02:06,409 INFO [master/jenkins-hbase17:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=CatalogJanitor-jenkins-hbase17:42003, period=300000, unit=MILLISECONDS is enabled.
2023-07-28 11:02:06,410 INFO [master/jenkins-hbase17:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HbckChore-, period=3600000, unit=MILLISECONDS is enabled.
2023-07-28 11:02:06,422 DEBUG [master/jenkins-hbase17:0.Chore.1] janitor.CatalogJanitor(175):
2023-07-28 11:02:06,431 INFO [master/jenkins-hbase17:0:becomeActiveMaster] master.TableNamespaceManager(92): Namespace table not found. Creating...
2023-07-28 11:02:06,432 INFO [master/jenkins-hbase17:0:becomeActiveMaster] master.HMaster(2148): Client=null/null create 'hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}
2023-07-28 11:02:06,440 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:namespace
2023-07-28 11:02:06,443 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_PRE_OPERATION
2023-07-28 11:02:06,445 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_WRITE_FS_LAYOUT
2023-07-28 11:02:06,466 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/.tmp/data/hbase/namespace/ec21f6123aff02d562a4d2f2eafb4289
2023-07-28 11:02:06,470 DEBUG [HFileArchiver-1] backup.HFileArchiver(153): Directory hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/.tmp/data/hbase/namespace/ec21f6123aff02d562a4d2f2eafb4289 empty.
2023-07-28 11:02:06,471 DEBUG [HFileArchiver-1] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/.tmp/data/hbase/namespace/ec21f6123aff02d562a4d2f2eafb4289
2023-07-28 11:02:06,471 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived hbase:namespace regions
2023-07-28 11:02:06,519 DEBUG [PEWorker-4] util.FSTableDescriptors(570): Wrote into hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/.tmp/data/hbase/namespace/.tabledesc/.tableinfo.0000000001
2023-07-28 11:02:06,522 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(7675): creating {ENCODED => ec21f6123aff02d562a4d2f2eafb4289, NAME => 'hbase:namespace,,1690542126432.ec21f6123aff02d562a4d2f2eafb4289.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/.tmp
2023-07-28 11:02:06,545 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1690542126432.ec21f6123aff02d562a4d2f2eafb4289.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable
2023-07-28 11:02:06,545 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1604): Closing ec21f6123aff02d562a4d2f2eafb4289, disabling compactions & flushes
2023-07-28 11:02:06,545 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1690542126432.ec21f6123aff02d562a4d2f2eafb4289.
2023-07-28 11:02:06,545 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1690542126432.ec21f6123aff02d562a4d2f2eafb4289.
2023-07-28 11:02:06,545 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1690542126432.ec21f6123aff02d562a4d2f2eafb4289. after waiting 0 ms
2023-07-28 11:02:06,545 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1690542126432.ec21f6123aff02d562a4d2f2eafb4289.
2023-07-28 11:02:06,546 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1838): Closed hbase:namespace,,1690542126432.ec21f6123aff02d562a4d2f2eafb4289.
2023-07-28 11:02:06,546 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1558): Region close journal for ec21f6123aff02d562a4d2f2eafb4289:
2023-07-28 11:02:06,550 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ADD_TO_META
2023-07-28 11:02:06,565 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:namespace,,1690542126432.ec21f6123aff02d562a4d2f2eafb4289.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1690542126553"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690542126553"}]},"ts":"1690542126553"}
2023-07-28 11:02:06,598 INFO [PEWorker-4] hbase.MetaTableAccessor(1496): Added 1 regions to meta.
2023-07-28 11:02:06,600 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ASSIGN_REGIONS
2023-07-28 11:02:06,605 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690542126601"}]},"ts":"1690542126601"}
2023-07-28 11:02:06,613 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLING in hbase:meta
2023-07-28 11:02:06,620 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase17.apache.org=0} racks are {/default-rack=0}
2023-07-28 11:02:06,621 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0
2023-07-28 11:02:06,621 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0
2023-07-28 11:02:06,621 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0
2023-07-28 11:02:06,621 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1
2023-07-28 11:02:06,624 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=ec21f6123aff02d562a4d2f2eafb4289, ASSIGN}]
2023-07-28 11:02:06,628 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=ec21f6123aff02d562a4d2f2eafb4289, ASSIGN
2023-07-28 11:02:06,630 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:namespace, region=ec21f6123aff02d562a4d2f2eafb4289, ASSIGN; state=OFFLINE, location=jenkins-hbase17.apache.org,38067,1690542124169; forceNewPlan=false, retain=false
2023-07-28 11:02:06,783 INFO [jenkins-hbase17:42003] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment.
2023-07-28 11:02:06,784 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=ec21f6123aff02d562a4d2f2eafb4289, regionState=OPENING, regionLocation=jenkins-hbase17.apache.org,38067,1690542124169
2023-07-28 11:02:06,784 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:namespace,,1690542126432.ec21f6123aff02d562a4d2f2eafb4289.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1690542126783"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1690542126783"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690542126783"}]},"ts":"1690542126783"}
2023-07-28 11:02:06,788 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=6, ppid=5, state=RUNNABLE; OpenRegionProcedure ec21f6123aff02d562a4d2f2eafb4289, server=jenkins-hbase17.apache.org,38067,1690542124169}]
2023-07-28 11:02:06,943 DEBUG [RSProcedureDispatcher-pool-1] master.ServerManager(712): New admin connection to jenkins-hbase17.apache.org,38067,1690542124169
2023-07-28 11:02:06,943 DEBUG [RSProcedureDispatcher-pool-1] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false
2023-07-28 11:02:06,948 INFO [RS-EventLoopGroup-3-2] ipc.ServerRpcConnection(540): Connection from 136.243.18.41:53168, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService
2023-07-28 11:02:06,953 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1690542126432.ec21f6123aff02d562a4d2f2eafb4289.
2023-07-28 11:02:06,955 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => ec21f6123aff02d562a4d2f2eafb4289, NAME => 'hbase:namespace,,1690542126432.ec21f6123aff02d562a4d2f2eafb4289.', STARTKEY => '', ENDKEY => ''}
2023-07-28 11:02:06,956 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace ec21f6123aff02d562a4d2f2eafb4289
2023-07-28 11:02:06,956 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1690542126432.ec21f6123aff02d562a4d2f2eafb4289.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable
2023-07-28 11:02:06,956 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7894): checking encryption for ec21f6123aff02d562a4d2f2eafb4289
2023-07-28 11:02:06,956 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7897): checking classloading for ec21f6123aff02d562a4d2f2eafb4289
2023-07-28 11:02:06,959 INFO [StoreOpener-ec21f6123aff02d562a4d2f2eafb4289-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region ec21f6123aff02d562a4d2f2eafb4289
2023-07-28 11:02:06,961 DEBUG [StoreOpener-ec21f6123aff02d562a4d2f2eafb4289-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/hbase/namespace/ec21f6123aff02d562a4d2f2eafb4289/info
2023-07-28 11:02:06,961 DEBUG [StoreOpener-ec21f6123aff02d562a4d2f2eafb4289-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/hbase/namespace/ec21f6123aff02d562a4d2f2eafb4289/info
2023-07-28 11:02:06,962 INFO [StoreOpener-ec21f6123aff02d562a4d2f2eafb4289-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region ec21f6123aff02d562a4d2f2eafb4289 columnFamilyName info
2023-07-28 11:02:06,963 INFO [StoreOpener-ec21f6123aff02d562a4d2f2eafb4289-1] regionserver.HStore(310): Store=ec21f6123aff02d562a4d2f2eafb4289/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE
2023-07-28 11:02:06,964 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/hbase/namespace/ec21f6123aff02d562a4d2f2eafb4289
2023-07-28 11:02:06,965 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/hbase/namespace/ec21f6123aff02d562a4d2f2eafb4289
2023-07-28 11:02:06,969 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1055): writing seq id for ec21f6123aff02d562a4d2f2eafb4289
2023-07-28 11:02:06,974 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/hbase/namespace/ec21f6123aff02d562a4d2f2eafb4289/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1
2023-07-28 11:02:06,976 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1072): Opened ec21f6123aff02d562a4d2f2eafb4289; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=131072, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10005034880, jitterRate=-0.06820851564407349}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1}
2023-07-28 11:02:06,976 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(965): Region open journal for ec21f6123aff02d562a4d2f2eafb4289:
2023-07-28 11:02:06,980 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:namespace,,1690542126432.ec21f6123aff02d562a4d2f2eafb4289., pid=6, masterSystemTime=1690542126943
2023-07-28 11:02:06,986 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:namespace,,1690542126432.ec21f6123aff02d562a4d2f2eafb4289.
2023-07-28 11:02:06,989 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1690542126432.ec21f6123aff02d562a4d2f2eafb4289.
2023-07-28 11:02:06,989 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=ec21f6123aff02d562a4d2f2eafb4289, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase17.apache.org,38067,1690542124169
2023-07-28 11:02:06,989 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1690542126432.ec21f6123aff02d562a4d2f2eafb4289.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1690542126987"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1690542126987"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690542126987"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690542126987"}]},"ts":"1690542126987"}
2023-07-28 11:02:06,996 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=6, resume processing ppid=5
2023-07-28 11:02:06,996 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=6, ppid=5, state=SUCCESS; OpenRegionProcedure ec21f6123aff02d562a4d2f2eafb4289, server=jenkins-hbase17.apache.org,38067,1690542124169 in 204 msec
2023-07-28 11:02:07,000 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=5, resume processing ppid=4
2023-07-28 11:02:07,000 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=5, ppid=4, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=ec21f6123aff02d562a4d2f2eafb4289, ASSIGN in 372 msec
2023-07-28 11:02:07,002 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_UPDATE_DESC_CACHE
2023-07-28 11:02:07,002 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690542127002"}]},"ts":"1690542127002"}
2023-07-28 11:02:07,005 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLED in hbase:meta
2023-07-28 11:02:07,008 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_POST_OPERATION
2023-07-28 11:02:07,012 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=4, state=SUCCESS; CreateTableProcedure table=hbase:namespace in 575 msec
2023-07-28 11:02:07,043 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:42003-0x101ab971e4e0000, quorum=127.0.0.1:57744, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/namespace
2023-07-28 11:02:07,044 DEBUG [Listener at localhost.localdomain/34871-EventThread] zookeeper.ZKWatcher(600): master:42003-0x101ab971e4e0000, quorum=127.0.0.1:57744, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/namespace
2023-07-28 11:02:07,044 DEBUG [Listener at localhost.localdomain/34871-EventThread] zookeeper.ZKWatcher(600): master:42003-0x101ab971e4e0000, quorum=127.0.0.1:57744, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase
2023-07-28 11:02:07,067 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false
2023-07-28 11:02:07,071 INFO [RS-EventLoopGroup-3-3] ipc.ServerRpcConnection(540): Connection from 136.243.18.41:53178, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService
2023-07-28 11:02:07,087 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=7, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=default
2023-07-28 11:02:07,110 DEBUG [Listener at localhost.localdomain/34871-EventThread] zookeeper.ZKWatcher(600): master:42003-0x101ab971e4e0000, quorum=127.0.0.1:57744, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace
2023-07-28 11:02:07,118 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=7, state=SUCCESS; CreateNamespaceProcedure, namespace=default in 39 msec
2023-07-28 11:02:07,121 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=8, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=hbase
2023-07-28 11:02:07,135 DEBUG [Listener at localhost.localdomain/34871-EventThread] zookeeper.ZKWatcher(600): master:42003-0x101ab971e4e0000, quorum=127.0.0.1:57744, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace
2023-07-28 11:02:07,140 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=8, state=SUCCESS; CreateNamespaceProcedure, namespace=hbase in 19 msec
2023-07-28 11:02:07,146 DEBUG [Listener at localhost.localdomain/34871-EventThread] zookeeper.ZKWatcher(600): master:42003-0x101ab971e4e0000, quorum=127.0.0.1:57744, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/default
2023-07-28 11:02:07,148 DEBUG [Listener at localhost.localdomain/34871-EventThread] zookeeper.ZKWatcher(600): master:42003-0x101ab971e4e0000, quorum=127.0.0.1:57744, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/hbase
2023-07-28 11:02:07,150 INFO [master/jenkins-hbase17:0:becomeActiveMaster] master.HMaster(1083): Master has completed initialization 2.720sec
2023-07-28 11:02:07,152 INFO [master/jenkins-hbase17:0:becomeActiveMaster] quotas.MasterQuotaManager(97): Quota support disabled
2023-07-28 11:02:07,153 INFO [master/jenkins-hbase17:0:becomeActiveMaster] slowlog.SlowLogMasterService(57): Slow/Large requests logging to system table hbase:slowlog is disabled. Quitting.
2023-07-28 11:02:07,153 INFO [master/jenkins-hbase17:0:becomeActiveMaster] zookeeper.ZKWatcher(269): not a secure deployment, proceeding
2023-07-28 11:02:07,155 INFO [master/jenkins-hbase17:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase17.apache.org,42003,1690542122606-ExpiredMobFileCleanerChore, period=86400, unit=SECONDS is enabled.
2023-07-28 11:02:07,155 INFO [master/jenkins-hbase17:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase17.apache.org,42003,1690542122606-MobCompactionChore, period=604800, unit=SECONDS is enabled.
2023-07-28 11:02:07,162 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster] master.HMaster(1175): Balancer post startup initialization complete, took 0 seconds
2023-07-28 11:02:07,166 DEBUG [Listener at localhost.localdomain/34871] zookeeper.ReadOnlyZKClient(139): Connect 0x49aa5cb4 to 127.0.0.1:57744 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms
2023-07-28 11:02:07,170 DEBUG [Listener at localhost.localdomain/34871] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@143bb99, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null
2023-07-28 11:02:07,183 DEBUG [hconnection-0x7b95d891-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false
2023-07-28 11:02:07,198 INFO [RS-EventLoopGroup-4-1] ipc.ServerRpcConnection(540): Connection from 136.243.18.41:45522, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService
2023-07-28 11:02:07,209 INFO [Listener at localhost.localdomain/34871] hbase.HBaseTestingUtility(1145): Minicluster is up; activeMaster=jenkins-hbase17.apache.org,42003,1690542122606
2023-07-28 11:02:07,218 DEBUG [Listener at localhost.localdomain/34871] ipc.RpcConnection(124): Using SIMPLE authentication for service=MasterService, sasl=false
2023-07-28 11:02:07,221 INFO [RS-EventLoopGroup-1-2] ipc.ServerRpcConnection(540): Connection from 136.243.18.41:44328, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=MasterService
2023-07-28 11:02:07,233 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42003] master.HMaster$4(2112): Client=jenkins//136.243.18.41 create 'TestCP', {TABLE_ATTRIBUTES => {coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.example.WriteHeavyIncrementObserver|1073741823|'}}, {NAME => 'cf', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}
2023-07-28 11:02:07,236 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42003] procedure2.ProcedureExecutor(1029): Stored pid=9, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=TestCP
2023-07-28 11:02:07,238 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=TestCP execute state=CREATE_TABLE_PRE_OPERATION
2023-07-28 11:02:07,241 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=TestCP execute state=CREATE_TABLE_WRITE_FS_LAYOUT
2023-07-28 11:02:07,244 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42003] master.MasterRpcServices(700): Client=jenkins//136.243.18.41 procedure request for creating table: namespace: "default" qualifier: "TestCP" procId is: 9
2023-07-28 11:02:07,244 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/.tmp/data/default/TestCP/4b73e169ebfeed69423385de141758d8
2023-07-28 11:02:07,245 DEBUG [HFileArchiver-2] backup.HFileArchiver(153): Directory hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/.tmp/data/default/TestCP/4b73e169ebfeed69423385de141758d8 empty.
2023-07-28 11:02:07,248 DEBUG [HFileArchiver-2] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/.tmp/data/default/TestCP/4b73e169ebfeed69423385de141758d8
2023-07-28 11:02:07,248 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived TestCP regions
2023-07-28 11:02:07,255 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42003] master.MasterRpcServices(1230): Checking to see if procedure is done pid=9
2023-07-28 11:02:07,274 DEBUG [PEWorker-3] util.FSTableDescriptors(570): Wrote into hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/.tmp/data/default/TestCP/.tabledesc/.tableinfo.0000000001
2023-07-28 11:02:07,276 INFO [RegionOpenAndInit-TestCP-pool-0] regionserver.HRegion(7675): creating {ENCODED => 4b73e169ebfeed69423385de141758d8, NAME => 'TestCP,,1690542127229.4b73e169ebfeed69423385de141758d8.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='TestCP', {TABLE_ATTRIBUTES => {coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.example.WriteHeavyIncrementObserver|1073741823|'}}, {NAME => 'cf', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/.tmp
2023-07-28 11:02:07,298 DEBUG [RegionOpenAndInit-TestCP-pool-0] regionserver.HRegion(866): Instantiated TestCP,,1690542127229.4b73e169ebfeed69423385de141758d8.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable
2023-07-28 11:02:07,299 DEBUG [RegionOpenAndInit-TestCP-pool-0] regionserver.HRegion(1604): Closing 4b73e169ebfeed69423385de141758d8, disabling compactions & flushes
2023-07-28 11:02:07,299 INFO [RegionOpenAndInit-TestCP-pool-0] regionserver.HRegion(1626): Closing region TestCP,,1690542127229.4b73e169ebfeed69423385de141758d8.
2023-07-28 11:02:07,299 DEBUG [RegionOpenAndInit-TestCP-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on TestCP,,1690542127229.4b73e169ebfeed69423385de141758d8.
2023-07-28 11:02:07,299 DEBUG [RegionOpenAndInit-TestCP-pool-0] regionserver.HRegion(1714): Acquired close lock on TestCP,,1690542127229.4b73e169ebfeed69423385de141758d8. after waiting 0 ms
2023-07-28 11:02:07,299 DEBUG [RegionOpenAndInit-TestCP-pool-0] regionserver.HRegion(1724): Updates disabled for region TestCP,,1690542127229.4b73e169ebfeed69423385de141758d8.
2023-07-28 11:02:07,299 INFO [RegionOpenAndInit-TestCP-pool-0] regionserver.HRegion(1838): Closed TestCP,,1690542127229.4b73e169ebfeed69423385de141758d8.
2023-07-28 11:02:07,299 DEBUG [RegionOpenAndInit-TestCP-pool-0] regionserver.HRegion(1558): Region close journal for 4b73e169ebfeed69423385de141758d8:
2023-07-28 11:02:07,302 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=TestCP execute state=CREATE_TABLE_ADD_TO_META
2023-07-28 11:02:07,304 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"TestCP,,1690542127229.4b73e169ebfeed69423385de141758d8.","families":{"info":[{"qualifier":"regioninfo","vlen":40,"tag":[],"timestamp":"1690542127304"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690542127304"}]},"ts":"1690542127304"}
2023-07-28 11:02:07,307 INFO [PEWorker-3] hbase.MetaTableAccessor(1496): Added 1 regions to meta.
2023-07-28 11:02:07,308 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=TestCP execute state=CREATE_TABLE_ASSIGN_REGIONS
2023-07-28 11:02:07,308 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"TestCP","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690542127308"}]},"ts":"1690542127308"}
2023-07-28 11:02:07,310 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=TestCP, state=ENABLING in hbase:meta
2023-07-28 11:02:07,314 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase17.apache.org=0} racks are {/default-rack=0}
2023-07-28 11:02:07,315 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0
2023-07-28 11:02:07,315 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0
2023-07-28 11:02:07,315 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0
2023-07-28 11:02:07,315 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1
2023-07-28 11:02:07,315 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=10, ppid=9, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=TestCP, region=4b73e169ebfeed69423385de141758d8, ASSIGN}]
2023-07-28 11:02:07,318 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=10, ppid=9, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=TestCP, region=4b73e169ebfeed69423385de141758d8, ASSIGN
2023-07-28 11:02:07,319 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=10, ppid=9, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=TestCP, region=4b73e169ebfeed69423385de141758d8, ASSIGN; state=OFFLINE, location=jenkins-hbase17.apache.org,46497,1690542124299; forceNewPlan=false, retain=false
2023-07-28 11:02:07,361 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42003] master.MasterRpcServices(1230): Checking to see if procedure is done pid=9
2023-07-28 11:02:07,470 INFO [jenkins-hbase17:42003] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment.
2023-07-28 11:02:07,471 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=10 updating hbase:meta row=4b73e169ebfeed69423385de141758d8, regionState=OPENING, regionLocation=jenkins-hbase17.apache.org,46497,1690542124299
2023-07-28 11:02:07,471 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"TestCP,,1690542127229.4b73e169ebfeed69423385de141758d8.","families":{"info":[{"qualifier":"regioninfo","vlen":40,"tag":[],"timestamp":"1690542127471"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1690542127471"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690542127471"}]},"ts":"1690542127471"}
2023-07-28 11:02:07,475 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=11, ppid=10, state=RUNNABLE; OpenRegionProcedure 4b73e169ebfeed69423385de141758d8, server=jenkins-hbase17.apache.org,46497,1690542124299}]
2023-07-28 11:02:07,563 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42003] master.MasterRpcServices(1230): Checking to see if procedure is done pid=9
2023-07-28 11:02:07,631 DEBUG [RSProcedureDispatcher-pool-2] master.ServerManager(712): New admin connection to jenkins-hbase17.apache.org,46497,1690542124299
2023-07-28 11:02:07,631 DEBUG [RSProcedureDispatcher-pool-2] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false
2023-07-28 11:02:07,635 INFO [RS-EventLoopGroup-5-1] ipc.ServerRpcConnection(540): Connection from 136.243.18.41:50574, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService
2023-07-28 11:02:07,641 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(130): Open TestCP,,1690542127229.4b73e169ebfeed69423385de141758d8.
2023-07-28 11:02:07,642 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 4b73e169ebfeed69423385de141758d8, NAME => 'TestCP,,1690542127229.4b73e169ebfeed69423385de141758d8.', STARTKEY => '', ENDKEY => ''}
2023-07-28 11:02:07,642 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.example.WriteHeavyIncrementObserver with path null and priority 1073741823
2023-07-28 11:02:07,646 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.example.WriteHeavyIncrementObserver from HTD of TestCP successfully.
2023-07-28 11:02:07,647 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table TestCP 4b73e169ebfeed69423385de141758d8
2023-07-28 11:02:07,647 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(866): Instantiated TestCP,,1690542127229.4b73e169ebfeed69423385de141758d8.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable
2023-07-28 11:02:07,647 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7894): checking encryption for 4b73e169ebfeed69423385de141758d8
2023-07-28 11:02:07,647 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(7897): checking classloading for 4b73e169ebfeed69423385de141758d8
2023-07-28 11:02:07,650 INFO [StoreOpener-4b73e169ebfeed69423385de141758d8-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false,
cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family cf of region 4b73e169ebfeed69423385de141758d8 2023-07-28 11:02:07,652 DEBUG [StoreOpener-4b73e169ebfeed69423385de141758d8-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/default/TestCP/4b73e169ebfeed69423385de141758d8/cf 2023-07-28 11:02:07,652 DEBUG [StoreOpener-4b73e169ebfeed69423385de141758d8-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/default/TestCP/4b73e169ebfeed69423385de141758d8/cf 2023-07-28 11:02:07,652 INFO [StoreOpener-4b73e169ebfeed69423385de141758d8-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 4b73e169ebfeed69423385de141758d8 columnFamilyName cf 2023-07-28 11:02:07,653 INFO [StoreOpener-4b73e169ebfeed69423385de141758d8-1] regionserver.HStore(310): Store=4b73e169ebfeed69423385de141758d8/cf, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-28 11:02:07,654 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under 
hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/default/TestCP/4b73e169ebfeed69423385de141758d8 2023-07-28 11:02:07,656 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/default/TestCP/4b73e169ebfeed69423385de141758d8 2023-07-28 11:02:07,660 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1055): writing seq id for 4b73e169ebfeed69423385de141758d8 2023-07-28 11:02:07,664 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/default/TestCP/4b73e169ebfeed69423385de141758d8/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-28 11:02:07,665 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1072): Opened 4b73e169ebfeed69423385de141758d8; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=131072, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9428410560, jitterRate=-0.12191084027290344}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-28 11:02:07,665 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(965): Region open journal for 4b73e169ebfeed69423385de141758d8: 2023-07-28 11:02:07,667 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for TestCP,,1690542127229.4b73e169ebfeed69423385de141758d8., pid=11, masterSystemTime=1690542127631 2023-07-28 11:02:07,671 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for TestCP,,1690542127229.4b73e169ebfeed69423385de141758d8. 
2023-07-28 11:02:07,672 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase17:0-0] handler.AssignRegionHandler(158): Opened TestCP,,1690542127229.4b73e169ebfeed69423385de141758d8. 2023-07-28 11:02:07,673 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=10 updating hbase:meta row=4b73e169ebfeed69423385de141758d8, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase17.apache.org,46497,1690542124299 2023-07-28 11:02:07,673 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"TestCP,,1690542127229.4b73e169ebfeed69423385de141758d8.","families":{"info":[{"qualifier":"regioninfo","vlen":40,"tag":[],"timestamp":"1690542127673"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1690542127673"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690542127673"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690542127673"}]},"ts":"1690542127673"} 2023-07-28 11:02:07,681 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=11, resume processing ppid=10 2023-07-28 11:02:07,682 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=11, ppid=10, state=SUCCESS; OpenRegionProcedure 4b73e169ebfeed69423385de141758d8, server=jenkins-hbase17.apache.org,46497,1690542124299 in 202 msec 2023-07-28 11:02:07,686 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=10, resume processing ppid=9 2023-07-28 11:02:07,686 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=10, ppid=9, state=SUCCESS; TransitRegionStateProcedure table=TestCP, region=4b73e169ebfeed69423385de141758d8, ASSIGN in 367 msec 2023-07-28 11:02:07,688 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=TestCP execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-28 11:02:07,688 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put 
{"totalColumns":1,"row":"TestCP","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690542127688"}]},"ts":"1690542127688"} 2023-07-28 11:02:07,691 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=TestCP, state=ENABLED in hbase:meta 2023-07-28 11:02:07,694 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=TestCP execute state=CREATE_TABLE_POST_OPERATION 2023-07-28 11:02:07,698 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=9, state=SUCCESS; CreateTableProcedure table=TestCP in 461 msec 2023-07-28 11:02:07,865 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42003] master.MasterRpcServices(1230): Checking to see if procedure is done pid=9 2023-07-28 11:02:07,866 INFO [Listener at localhost.localdomain/34871] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:TestCP, procId: 9 completed 2023-07-28 11:02:07,901 INFO [Listener at localhost.localdomain/34871] hbase.ResourceChecker(147): before: coprocessor.example.TestWriteHeavyIncrementObserver#test Thread=414, OpenFileDescriptor=725, MaxFileDescriptor=60000, SystemLoadAverage=262, ProcessCount=166, AvailableMemoryMB=6210 2023-07-28 11:02:07,916 DEBUG [increment-4] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-28 11:02:07,920 INFO [RS-EventLoopGroup-5-3] ipc.ServerRpcConnection(540): Connection from 136.243.18.41:51772, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-28 11:02:08,120 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 4b73e169ebfeed69423385de141758d8 1/1 column families, dataSize=20.60 KB heapSize=64.34 KB 2023-07-28 11:02:08,121 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46497] regionserver.HRegion(9158): Flush requested on 4b73e169ebfeed69423385de141758d8 2023-07-28 11:02:08,318 INFO 
[MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=20.74 KB at sequenceid=299 (bloomFilter=true), to=hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/default/TestCP/4b73e169ebfeed69423385de141758d8/.tmp/cf/be8dcbb4ca9b4e42b0bb397e3f9765b3 2023-07-28 11:02:08,411 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/default/TestCP/4b73e169ebfeed69423385de141758d8/.tmp/cf/be8dcbb4ca9b4e42b0bb397e3f9765b3 as hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/default/TestCP/4b73e169ebfeed69423385de141758d8/cf/be8dcbb4ca9b4e42b0bb397e3f9765b3 2023-07-28 11:02:08,426 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/default/TestCP/4b73e169ebfeed69423385de141758d8/cf/be8dcbb4ca9b4e42b0bb397e3f9765b3, entries=2, sequenceid=299, filesize=4.8 K 2023-07-28 11:02:08,431 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~20.74 KB/21240, heapSize ~64.77 KB/66320, currentSize=21.38 KB/21888 for 4b73e169ebfeed69423385de141758d8 in 311ms, sequenceid=299, compaction requested=false 2023-07-28 11:02:08,435 DEBUG [MemStoreFlusher.0] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'TestCP' 2023-07-28 11:02:08,438 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 4b73e169ebfeed69423385de141758d8: 2023-07-28 11:02:08,438 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46497] regionserver.HRegion(9158): Flush requested on 4b73e169ebfeed69423385de141758d8 2023-07-28 11:02:08,440 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 4b73e169ebfeed69423385de141758d8 1/1 column families, dataSize=21.87 KB heapSize=68.28 KB 
2023-07-28 11:02:08,579 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=22.08 KB at sequenceid=616 (bloomFilter=true), to=hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/default/TestCP/4b73e169ebfeed69423385de141758d8/.tmp/cf/c957459edbd24e94aa567db8ced37441 2023-07-28 11:02:08,597 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/default/TestCP/4b73e169ebfeed69423385de141758d8/.tmp/cf/c957459edbd24e94aa567db8ced37441 as hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/default/TestCP/4b73e169ebfeed69423385de141758d8/cf/c957459edbd24e94aa567db8ced37441 2023-07-28 11:02:08,619 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/default/TestCP/4b73e169ebfeed69423385de141758d8/cf/c957459edbd24e94aa567db8ced37441, entries=2, sequenceid=616, filesize=4.8 K 2023-07-28 11:02:08,621 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~22.08 KB/22608, heapSize ~68.92 KB/70576, currentSize=12.87 KB/13176 for 4b73e169ebfeed69423385de141758d8 in 181ms, sequenceid=616, compaction requested=false 2023-07-28 11:02:08,622 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 4b73e169ebfeed69423385de141758d8: 2023-07-28 11:02:08,721 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46497] regionserver.HRegion(9158): Flush requested on 4b73e169ebfeed69423385de141758d8 2023-07-28 11:02:08,721 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 4b73e169ebfeed69423385de141758d8 1/1 column families, dataSize=20.53 KB heapSize=64.13 KB 2023-07-28 11:02:08,796 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data 
size=20.67 KB at sequenceid=913 (bloomFilter=true), to=hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/default/TestCP/4b73e169ebfeed69423385de141758d8/.tmp/cf/d14c83f2fe4648b2a52d535d61d2199b 2023-07-28 11:02:08,806 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/default/TestCP/4b73e169ebfeed69423385de141758d8/.tmp/cf/d14c83f2fe4648b2a52d535d61d2199b as hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/default/TestCP/4b73e169ebfeed69423385de141758d8/cf/d14c83f2fe4648b2a52d535d61d2199b 2023-07-28 11:02:08,819 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/default/TestCP/4b73e169ebfeed69423385de141758d8/cf/d14c83f2fe4648b2a52d535d61d2199b, entries=2, sequenceid=913, filesize=4.8 K 2023-07-28 11:02:08,820 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~20.67 KB/21168, heapSize ~64.55 KB/66096, currentSize=11.25 KB/11520 for 4b73e169ebfeed69423385de141758d8 in 99ms, sequenceid=913, compaction requested=true 2023-07-28 11:02:08,821 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 4b73e169ebfeed69423385de141758d8: 2023-07-28 11:02:08,822 DEBUG [MemStoreFlusher.0] regionserver.CompactSplit(385): Small Compaction requested: system; Because: MemStoreFlusher.0; compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-07-28 11:02:08,823 DEBUG [RS:2;jenkins-hbase17:46497-shortCompactions-0] compactions.SortedCompactionPolicy(75): Selecting compaction from 3 store files, 0 compacting, 3 eligible, 16 blocking 2023-07-28 11:02:08,827 DEBUG [RS:2;jenkins-hbase17:46497-shortCompactions-0] compactions.ExploringCompactionPolicy(116): Exploring compaction algorithm has 
selected 3 files of size 14718 starting at candidate #0 after considering 1 permutations with 1 in ratio 2023-07-28 11:02:08,830 DEBUG [RS:2;jenkins-hbase17:46497-shortCompactions-0] regionserver.HStore(1912): 4b73e169ebfeed69423385de141758d8/cf is initiating minor compaction (all files) 2023-07-28 11:02:08,830 INFO [RS:2;jenkins-hbase17:46497-shortCompactions-0] regionserver.HRegion(2259): Starting compaction of 4b73e169ebfeed69423385de141758d8/cf in TestCP,,1690542127229.4b73e169ebfeed69423385de141758d8. 2023-07-28 11:02:08,831 INFO [RS:2;jenkins-hbase17:46497-shortCompactions-0] regionserver.HStore(1468): Starting compaction of [hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/default/TestCP/4b73e169ebfeed69423385de141758d8/cf/be8dcbb4ca9b4e42b0bb397e3f9765b3, hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/default/TestCP/4b73e169ebfeed69423385de141758d8/cf/c957459edbd24e94aa567db8ced37441, hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/default/TestCP/4b73e169ebfeed69423385de141758d8/cf/d14c83f2fe4648b2a52d535d61d2199b] into tmpdir=hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/default/TestCP/4b73e169ebfeed69423385de141758d8/.tmp, totalSize=14.4 K 2023-07-28 11:02:08,833 DEBUG [RS:2;jenkins-hbase17:46497-shortCompactions-0] compactions.Compactor(207): Compacting be8dcbb4ca9b4e42b0bb397e3f9765b3, keycount=2, bloomtype=ROW, size=4.8 K, encoding=NONE, compression=NONE, seqNum=299, earliestPutTs=1731115138992128 2023-07-28 11:02:08,834 DEBUG [RS:2;jenkins-hbase17:46497-shortCompactions-0] compactions.Compactor(207): Compacting c957459edbd24e94aa567db8ced37441, keycount=2, bloomtype=ROW, size=4.8 K, encoding=NONE, compression=NONE, seqNum=616, earliestPutTs=1731115139195904 2023-07-28 11:02:08,835 DEBUG [RS:2;jenkins-hbase17:46497-shortCompactions-0] 
compactions.Compactor(207): Compacting d14c83f2fe4648b2a52d535d61d2199b, keycount=2, bloomtype=ROW, size=4.8 K, encoding=NONE, compression=NONE, seqNum=913, earliestPutTs=1731115139522561 2023-07-28 11:02:08,879 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=46497] regionserver.HRegion(9158): Flush requested on 4b73e169ebfeed69423385de141758d8 2023-07-28 11:02:08,880 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 4b73e169ebfeed69423385de141758d8 1/1 column families, dataSize=20.60 KB heapSize=64.34 KB 2023-07-28 11:02:08,881 INFO [RS:2;jenkins-hbase17:46497-shortCompactions-0] throttle.PressureAwareThroughputController(145): 4b73e169ebfeed69423385de141758d8#cf#compaction#3 average throughput is 0.03 MB/second, slept 0 time(s) and total slept time is 0 ms. 0 active operations remaining, total limit is 50.00 MB/second 2023-07-28 11:02:08,941 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=20.81 KB at sequenceid=1212 (bloomFilter=true), to=hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/default/TestCP/4b73e169ebfeed69423385de141758d8/.tmp/cf/5274925c82274706809e1dc7ae12bb0e 2023-07-28 11:02:08,957 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/default/TestCP/4b73e169ebfeed69423385de141758d8/.tmp/cf/5274925c82274706809e1dc7ae12bb0e as hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/default/TestCP/4b73e169ebfeed69423385de141758d8/cf/5274925c82274706809e1dc7ae12bb0e 2023-07-28 11:02:08,959 DEBUG [RS:2;jenkins-hbase17:46497-shortCompactions-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/default/TestCP/4b73e169ebfeed69423385de141758d8/.tmp/cf/91dde01646cc437fa459f60c7f3ca1e6 as 
hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/default/TestCP/4b73e169ebfeed69423385de141758d8/cf/91dde01646cc437fa459f60c7f3ca1e6 2023-07-28 11:02:08,968 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/default/TestCP/4b73e169ebfeed69423385de141758d8/cf/5274925c82274706809e1dc7ae12bb0e, entries=2, sequenceid=1212, filesize=4.8 K 2023-07-28 11:02:08,970 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~20.81 KB/21312, heapSize ~64.98 KB/66544, currentSize=9 KB/9216 for 4b73e169ebfeed69423385de141758d8 in 90ms, sequenceid=1212, compaction requested=false 2023-07-28 11:02:08,971 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 4b73e169ebfeed69423385de141758d8: 2023-07-28 11:02:08,982 INFO [RS:2;jenkins-hbase17:46497-shortCompactions-0] regionserver.HStore(1652): Completed compaction of 3 (all) file(s) in 4b73e169ebfeed69423385de141758d8/cf of 4b73e169ebfeed69423385de141758d8 into 91dde01646cc437fa459f60c7f3ca1e6(size=4.8 K), total size for store is 9.6 K. This selection was in queue for 0sec, and took 0sec to execute. 
2023-07-28 11:02:08,983 DEBUG [RS:2;jenkins-hbase17:46497-shortCompactions-0] regionserver.HRegion(2289): Compaction status journal for 4b73e169ebfeed69423385de141758d8: 2023-07-28 11:02:08,983 INFO [RS:2;jenkins-hbase17:46497-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(627): Completed compaction region=TestCP,,1690542127229.4b73e169ebfeed69423385de141758d8., storeName=4b73e169ebfeed69423385de141758d8/cf, priority=13, startTime=1690542128822; duration=0sec 2023-07-28 11:02:08,984 DEBUG [RS:2;jenkins-hbase17:46497-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(659): Status compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-07-28 11:02:09,021 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 4b73e169ebfeed69423385de141758d8 1/1 column families, dataSize=20.67 KB heapSize=64.56 KB 2023-07-28 11:02:09,021 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46497] regionserver.HRegion(9158): Flush requested on 4b73e169ebfeed69423385de141758d8 2023-07-28 11:02:09,061 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=20.81 KB at sequenceid=1512 (bloomFilter=true), to=hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/default/TestCP/4b73e169ebfeed69423385de141758d8/.tmp/cf/7f6c5a8338204fa38160c57c14af0e30 2023-07-28 11:02:09,074 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/default/TestCP/4b73e169ebfeed69423385de141758d8/.tmp/cf/7f6c5a8338204fa38160c57c14af0e30 as hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/default/TestCP/4b73e169ebfeed69423385de141758d8/cf/7f6c5a8338204fa38160c57c14af0e30 2023-07-28 11:02:09,086 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added 
hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/default/TestCP/4b73e169ebfeed69423385de141758d8/cf/7f6c5a8338204fa38160c57c14af0e30, entries=2, sequenceid=1512, filesize=4.8 K 2023-07-28 11:02:09,088 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~20.81 KB/21312, heapSize ~64.98 KB/66544, currentSize=8.58 KB/8784 for 4b73e169ebfeed69423385de141758d8 in 66ms, sequenceid=1512, compaction requested=true 2023-07-28 11:02:09,088 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 4b73e169ebfeed69423385de141758d8: 2023-07-28 11:02:09,088 DEBUG [MemStoreFlusher.0] regionserver.CompactSplit(385): Small Compaction requested: system; Because: MemStoreFlusher.0; compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-07-28 11:02:09,088 DEBUG [RS:2;jenkins-hbase17:46497-shortCompactions-0] compactions.SortedCompactionPolicy(75): Selecting compaction from 3 store files, 0 compacting, 3 eligible, 16 blocking 2023-07-28 11:02:09,095 DEBUG [RS:2;jenkins-hbase17:46497-shortCompactions-0] compactions.ExploringCompactionPolicy(116): Exploring compaction algorithm has selected 3 files of size 14759 starting at candidate #0 after considering 1 permutations with 1 in ratio 2023-07-28 11:02:09,095 DEBUG [RS:2;jenkins-hbase17:46497-shortCompactions-0] regionserver.HStore(1912): 4b73e169ebfeed69423385de141758d8/cf is initiating minor compaction (all files) 2023-07-28 11:02:09,096 INFO [RS:2;jenkins-hbase17:46497-shortCompactions-0] regionserver.HRegion(2259): Starting compaction of 4b73e169ebfeed69423385de141758d8/cf in TestCP,,1690542127229.4b73e169ebfeed69423385de141758d8. 
2023-07-28 11:02:09,096 INFO [RS:2;jenkins-hbase17:46497-shortCompactions-0] regionserver.HStore(1468): Starting compaction of [hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/default/TestCP/4b73e169ebfeed69423385de141758d8/cf/91dde01646cc437fa459f60c7f3ca1e6, hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/default/TestCP/4b73e169ebfeed69423385de141758d8/cf/5274925c82274706809e1dc7ae12bb0e, hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/default/TestCP/4b73e169ebfeed69423385de141758d8/cf/7f6c5a8338204fa38160c57c14af0e30] into tmpdir=hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/default/TestCP/4b73e169ebfeed69423385de141758d8/.tmp, totalSize=14.4 K 2023-07-28 11:02:09,098 DEBUG [RS:2;jenkins-hbase17:46497-shortCompactions-0] compactions.Compactor(207): Compacting 91dde01646cc437fa459f60c7f3ca1e6, keycount=2, bloomtype=ROW, size=4.8 K, encoding=NONE, compression=NONE, seqNum=913, earliestPutTs=1731115138992128 2023-07-28 11:02:09,100 DEBUG [RS:2;jenkins-hbase17:46497-shortCompactions-0] compactions.Compactor(207): Compacting 5274925c82274706809e1dc7ae12bb0e, keycount=2, bloomtype=ROW, size=4.8 K, encoding=NONE, compression=NONE, seqNum=1212, earliestPutTs=1731115139811328 2023-07-28 11:02:09,101 DEBUG [RS:2;jenkins-hbase17:46497-shortCompactions-0] compactions.Compactor(207): Compacting 7f6c5a8338204fa38160c57c14af0e30, keycount=2, bloomtype=ROW, size=4.8 K, encoding=NONE, compression=NONE, seqNum=1512, earliestPutTs=1731115139974144 2023-07-28 11:02:09,137 INFO [RS:2;jenkins-hbase17:46497-shortCompactions-0] throttle.PressureAwareThroughputController(145): 4b73e169ebfeed69423385de141758d8#cf#compaction#6 average throughput is unlimited, slept 0 time(s) and total slept time is 0 ms. 
0 active operations remaining, total limit is 50.00 MB/second 2023-07-28 11:02:09,216 DEBUG [RS:2;jenkins-hbase17:46497-shortCompactions-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/default/TestCP/4b73e169ebfeed69423385de141758d8/.tmp/cf/4e6027aa15b74a2bbadd430a49095946 as hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/default/TestCP/4b73e169ebfeed69423385de141758d8/cf/4e6027aa15b74a2bbadd430a49095946 2023-07-28 11:02:09,216 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=46497] regionserver.HRegion(9158): Flush requested on 4b73e169ebfeed69423385de141758d8 2023-07-28 11:02:09,217 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 4b73e169ebfeed69423385de141758d8 1/1 column families, dataSize=20.60 KB heapSize=64.34 KB 2023-07-28 11:02:09,234 INFO [RS:2;jenkins-hbase17:46497-shortCompactions-0] regionserver.HStore(1652): Completed compaction of 3 (all) file(s) in 4b73e169ebfeed69423385de141758d8/cf of 4b73e169ebfeed69423385de141758d8 into 4e6027aa15b74a2bbadd430a49095946(size=4.9 K), total size for store is 4.9 K. This selection was in queue for 0sec, and took 0sec to execute. 
2023-07-28 11:02:09,236 DEBUG [RS:2;jenkins-hbase17:46497-shortCompactions-0] regionserver.HRegion(2289): Compaction status journal for 4b73e169ebfeed69423385de141758d8:
2023-07-28 11:02:09,237 INFO [RS:2;jenkins-hbase17:46497-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(627): Completed compaction region=TestCP,,1690542127229.4b73e169ebfeed69423385de141758d8., storeName=4b73e169ebfeed69423385de141758d8/cf, priority=13, startTime=1690542129088; duration=0sec
2023-07-28 11:02:09,238 DEBUG [RS:2;jenkins-hbase17:46497-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(659): Status compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0
2023-07-28 11:02:09,260 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=20.67 KB at sequenceid=1809 (bloomFilter=true), to=hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/default/TestCP/4b73e169ebfeed69423385de141758d8/.tmp/cf/a4ef92a70c184f5c8d67c7210a3abd8c
2023-07-28 11:02:09,272 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/default/TestCP/4b73e169ebfeed69423385de141758d8/.tmp/cf/a4ef92a70c184f5c8d67c7210a3abd8c as hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/default/TestCP/4b73e169ebfeed69423385de141758d8/cf/a4ef92a70c184f5c8d67c7210a3abd8c
2023-07-28 11:02:09,285 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/default/TestCP/4b73e169ebfeed69423385de141758d8/cf/a4ef92a70c184f5c8d67c7210a3abd8c, entries=2, sequenceid=1809, filesize=4.8 K
2023-07-28 11:02:09,288 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~20.67 KB/21168, heapSize ~64.55 KB/66096, currentSize=7.80 KB/7992 for 4b73e169ebfeed69423385de141758d8 in 71ms, sequenceid=1809, compaction requested=false
2023-07-28 11:02:09,288 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 4b73e169ebfeed69423385de141758d8:
2023-07-28 11:02:09,353 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=46497] regionserver.HRegion(9158): Flush requested on 4b73e169ebfeed69423385de141758d8
2023-07-28 11:02:09,354 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 4b73e169ebfeed69423385de141758d8 1/1 column families, dataSize=20.53 KB heapSize=64.13 KB
2023-07-28 11:02:09,419 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=20.67 KB at sequenceid=2107 (bloomFilter=true), to=hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/default/TestCP/4b73e169ebfeed69423385de141758d8/.tmp/cf/0ec2c0c398b4462d8821be57303e7685
2023-07-28 11:02:09,429 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/default/TestCP/4b73e169ebfeed69423385de141758d8/.tmp/cf/0ec2c0c398b4462d8821be57303e7685 as hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/default/TestCP/4b73e169ebfeed69423385de141758d8/cf/0ec2c0c398b4462d8821be57303e7685
2023-07-28 11:02:09,440 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/default/TestCP/4b73e169ebfeed69423385de141758d8/cf/0ec2c0c398b4462d8821be57303e7685, entries=2, sequenceid=2107, filesize=4.8 K
2023-07-28 11:02:09,442 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~20.67 KB/21168, heapSize ~64.55 KB/66096, currentSize=8.09 KB/8280 for 4b73e169ebfeed69423385de141758d8 in 88ms, sequenceid=2107, compaction requested=true
2023-07-28 11:02:09,442 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 4b73e169ebfeed69423385de141758d8:
2023-07-28 11:02:09,442 DEBUG [MemStoreFlusher.0] regionserver.CompactSplit(385): Small Compaction requested: system; Because: MemStoreFlusher.0; compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0
2023-07-28 11:02:09,442 DEBUG [RS:2;jenkins-hbase17:46497-shortCompactions-0] compactions.SortedCompactionPolicy(75): Selecting compaction from 3 store files, 0 compacting, 3 eligible, 16 blocking
2023-07-28 11:02:09,444 DEBUG [RS:2;jenkins-hbase17:46497-shortCompactions-0] compactions.ExploringCompactionPolicy(116): Exploring compaction algorithm has selected 3 files of size 14862 starting at candidate #0 after considering 1 permutations with 1 in ratio
2023-07-28 11:02:09,444 DEBUG [RS:2;jenkins-hbase17:46497-shortCompactions-0] regionserver.HStore(1912): 4b73e169ebfeed69423385de141758d8/cf is initiating minor compaction (all files)
2023-07-28 11:02:09,445 INFO [RS:2;jenkins-hbase17:46497-shortCompactions-0] regionserver.HRegion(2259): Starting compaction of 4b73e169ebfeed69423385de141758d8/cf in TestCP,,1690542127229.4b73e169ebfeed69423385de141758d8.
2023-07-28 11:02:09,445 INFO [RS:2;jenkins-hbase17:46497-shortCompactions-0] regionserver.HStore(1468): Starting compaction of [hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/default/TestCP/4b73e169ebfeed69423385de141758d8/cf/4e6027aa15b74a2bbadd430a49095946, hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/default/TestCP/4b73e169ebfeed69423385de141758d8/cf/a4ef92a70c184f5c8d67c7210a3abd8c, hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/default/TestCP/4b73e169ebfeed69423385de141758d8/cf/0ec2c0c398b4462d8821be57303e7685] into tmpdir=hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/default/TestCP/4b73e169ebfeed69423385de141758d8/.tmp, totalSize=14.5 K
2023-07-28 11:02:09,446 DEBUG [RS:2;jenkins-hbase17:46497-shortCompactions-0] compactions.Compactor(207): Compacting 4e6027aa15b74a2bbadd430a49095946, keycount=2, bloomtype=ROW, size=4.9 K, encoding=NONE, compression=NONE, seqNum=1512, earliestPutTs=1731115138992128
2023-07-28 11:02:09,446 DEBUG [RS:2;jenkins-hbase17:46497-shortCompactions-0] compactions.Compactor(207): Compacting a4ef92a70c184f5c8d67c7210a3abd8c, keycount=2, bloomtype=ROW, size=4.8 K, encoding=NONE, compression=NONE, seqNum=1809, earliestPutTs=1731115140117506
2023-07-28 11:02:09,447 DEBUG [RS:2;jenkins-hbase17:46497-shortCompactions-0] compactions.Compactor(207): Compacting 0ec2c0c398b4462d8821be57303e7685, keycount=2, bloomtype=ROW, size=4.8 K, encoding=NONE, compression=NONE, seqNum=2107, earliestPutTs=1731115140318208
2023-07-28 11:02:09,471 INFO [RS:2;jenkins-hbase17:46497-shortCompactions-0] throttle.PressureAwareThroughputController(145): 4b73e169ebfeed69423385de141758d8#cf#compaction#9 average throughput is unlimited, slept 0 time(s) and total slept time is 0 ms. 0 active operations remaining, total limit is 50.00 MB/second
2023-07-28 11:02:09,512 DEBUG [RS:2;jenkins-hbase17:46497-shortCompactions-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/default/TestCP/4b73e169ebfeed69423385de141758d8/.tmp/cf/41b25809edda407db11625a6e5fa1364 as hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/default/TestCP/4b73e169ebfeed69423385de141758d8/cf/41b25809edda407db11625a6e5fa1364
2023-07-28 11:02:09,519 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46497] regionserver.HRegion(9158): Flush requested on 4b73e169ebfeed69423385de141758d8
2023-07-28 11:02:09,522 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 4b73e169ebfeed69423385de141758d8 1/1 column families, dataSize=20.60 KB heapSize=64.34 KB
2023-07-28 11:02:09,533 INFO [RS:2;jenkins-hbase17:46497-shortCompactions-0] regionserver.HStore(1652): Completed compaction of 3 (all) file(s) in 4b73e169ebfeed69423385de141758d8/cf of 4b73e169ebfeed69423385de141758d8 into 41b25809edda407db11625a6e5fa1364(size=5.0 K), total size for store is 5.0 K. This selection was in queue for 0sec, and took 0sec to execute.
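[Editor's note] The "ExploringCompactionPolicy ... 1 in ratio" lines above reflect HBase's size-ratio test for minor-compaction candidates. The sketch below is a simplified illustration of that check only (the real policy also enforces min/max file counts, off-peak ratios, and permutation search); the 1.2 default mirrors hbase.hstore.compaction.ratio, and the sample sizes are approximations of the ~4.8-4.9 K flush files in this log.

```python
def files_in_ratio(sizes, ratio=1.2):
    # A candidate selection stays "in ratio" if no single file is larger
    # than `ratio` times the combined size of the other files.
    total = sum(sizes)
    return all(s <= ratio * (total - s) for s in sizes)

# Three similarly sized flush files pass; one dominant file does not.
print(files_in_ratio([5018, 4922, 4922]))  # True
print(files_in_ratio([100_000, 4_800]))    # False
```

This is why three back-to-back flushes of near-identical size are always picked together here, re-compacting the whole store each time.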
2023-07-28 11:02:09,533 DEBUG [RS:2;jenkins-hbase17:46497-shortCompactions-0] regionserver.HRegion(2289): Compaction status journal for 4b73e169ebfeed69423385de141758d8:
2023-07-28 11:02:09,533 INFO [RS:2;jenkins-hbase17:46497-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(627): Completed compaction region=TestCP,,1690542127229.4b73e169ebfeed69423385de141758d8., storeName=4b73e169ebfeed69423385de141758d8/cf, priority=13, startTime=1690542129442; duration=0sec
2023-07-28 11:02:09,533 DEBUG [RS:2;jenkins-hbase17:46497-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(659): Status compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0
2023-07-28 11:02:09,569 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=20.81 KB at sequenceid=2406 (bloomFilter=true), to=hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/default/TestCP/4b73e169ebfeed69423385de141758d8/.tmp/cf/31be6da75fe449299320ad6acddc2f10
2023-07-28 11:02:09,580 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/default/TestCP/4b73e169ebfeed69423385de141758d8/.tmp/cf/31be6da75fe449299320ad6acddc2f10 as hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/default/TestCP/4b73e169ebfeed69423385de141758d8/cf/31be6da75fe449299320ad6acddc2f10
2023-07-28 11:02:09,589 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/default/TestCP/4b73e169ebfeed69423385de141758d8/cf/31be6da75fe449299320ad6acddc2f10, entries=2, sequenceid=2406, filesize=4.8 K
2023-07-28 11:02:09,591 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~20.81 KB/21312, heapSize ~64.98 KB/66544, currentSize=7.10 KB/7272 for 4b73e169ebfeed69423385de141758d8 in 68ms, sequenceid=2406, compaction requested=false
2023-07-28 11:02:09,591 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 4b73e169ebfeed69423385de141758d8:
2023-07-28 11:02:09,630 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=46497] regionserver.HRegion(9158): Flush requested on 4b73e169ebfeed69423385de141758d8
2023-07-28 11:02:09,630 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 4b73e169ebfeed69423385de141758d8 1/1 column families, dataSize=20.53 KB heapSize=64.13 KB
2023-07-28 11:02:09,667 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=20.67 KB at sequenceid=2704 (bloomFilter=true), to=hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/default/TestCP/4b73e169ebfeed69423385de141758d8/.tmp/cf/b578cdf5cb4d48c2bf4a5dc904abe451
2023-07-28 11:02:09,679 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/default/TestCP/4b73e169ebfeed69423385de141758d8/.tmp/cf/b578cdf5cb4d48c2bf4a5dc904abe451 as hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/default/TestCP/4b73e169ebfeed69423385de141758d8/cf/b578cdf5cb4d48c2bf4a5dc904abe451
2023-07-28 11:02:09,687 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/default/TestCP/4b73e169ebfeed69423385de141758d8/cf/b578cdf5cb4d48c2bf4a5dc904abe451, entries=2, sequenceid=2704, filesize=4.8 K
2023-07-28 11:02:09,689 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~20.67 KB/21168, heapSize ~64.55 KB/66096, currentSize=9.21 KB/9432 for 4b73e169ebfeed69423385de141758d8 in 59ms, sequenceid=2704, compaction requested=true
2023-07-28 11:02:09,689 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 4b73e169ebfeed69423385de141758d8:
2023-07-28 11:02:09,690 DEBUG [MemStoreFlusher.0] regionserver.CompactSplit(385): Small Compaction requested: system; Because: MemStoreFlusher.0; compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0
2023-07-28 11:02:09,690 DEBUG [RS:2;jenkins-hbase17:46497-shortCompactions-0] compactions.SortedCompactionPolicy(75): Selecting compaction from 3 store files, 0 compacting, 3 eligible, 16 blocking
2023-07-28 11:02:09,692 DEBUG [RS:2;jenkins-hbase17:46497-shortCompactions-0] compactions.ExploringCompactionPolicy(116): Exploring compaction algorithm has selected 3 files of size 14964 starting at candidate #0 after considering 1 permutations with 1 in ratio
2023-07-28 11:02:09,692 DEBUG [RS:2;jenkins-hbase17:46497-shortCompactions-0] regionserver.HStore(1912): 4b73e169ebfeed69423385de141758d8/cf is initiating minor compaction (all files)
2023-07-28 11:02:09,692 INFO [RS:2;jenkins-hbase17:46497-shortCompactions-0] regionserver.HRegion(2259): Starting compaction of 4b73e169ebfeed69423385de141758d8/cf in TestCP,,1690542127229.4b73e169ebfeed69423385de141758d8.
2023-07-28 11:02:09,692 INFO [RS:2;jenkins-hbase17:46497-shortCompactions-0] regionserver.HStore(1468): Starting compaction of [hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/default/TestCP/4b73e169ebfeed69423385de141758d8/cf/41b25809edda407db11625a6e5fa1364, hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/default/TestCP/4b73e169ebfeed69423385de141758d8/cf/31be6da75fe449299320ad6acddc2f10, hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/default/TestCP/4b73e169ebfeed69423385de141758d8/cf/b578cdf5cb4d48c2bf4a5dc904abe451] into tmpdir=hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/default/TestCP/4b73e169ebfeed69423385de141758d8/.tmp, totalSize=14.6 K
2023-07-28 11:02:09,693 DEBUG [RS:2;jenkins-hbase17:46497-shortCompactions-0] compactions.Compactor(207): Compacting 41b25809edda407db11625a6e5fa1364, keycount=2, bloomtype=ROW, size=5.0 K, encoding=NONE, compression=NONE, seqNum=2107, earliestPutTs=1731115138992128
2023-07-28 11:02:09,694 DEBUG [RS:2;jenkins-hbase17:46497-shortCompactions-0] compactions.Compactor(207): Compacting 31be6da75fe449299320ad6acddc2f10, keycount=2, bloomtype=ROW, size=4.8 K, encoding=NONE, compression=NONE, seqNum=2406, earliestPutTs=1731115140458496
2023-07-28 11:02:09,695 DEBUG [RS:2;jenkins-hbase17:46497-shortCompactions-0] compactions.Compactor(207): Compacting b578cdf5cb4d48c2bf4a5dc904abe451, keycount=2, bloomtype=ROW, size=4.8 K, encoding=NONE, compression=NONE, seqNum=2704, earliestPutTs=1731115140636672
2023-07-28 11:02:09,714 INFO [RS:2;jenkins-hbase17:46497-shortCompactions-0] throttle.PressureAwareThroughputController(145): 4b73e169ebfeed69423385de141758d8#cf#compaction#12 average throughput is unlimited, slept 0 time(s) and total slept time is 0 ms. 0 active operations remaining, total limit is 50.00 MB/second
2023-07-28 11:02:09,779 DEBUG [RS:2;jenkins-hbase17:46497-shortCompactions-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/default/TestCP/4b73e169ebfeed69423385de141758d8/.tmp/cf/bfe6c9ff0ea94075bb6224bcf28cc58e as hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/default/TestCP/4b73e169ebfeed69423385de141758d8/cf/bfe6c9ff0ea94075bb6224bcf28cc58e
2023-07-28 11:02:09,784 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=46497] regionserver.HRegion(9158): Flush requested on 4b73e169ebfeed69423385de141758d8
2023-07-28 11:02:09,787 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 4b73e169ebfeed69423385de141758d8 1/1 column families, dataSize=21.30 KB heapSize=66.53 KB
2023-07-28 11:02:09,810 INFO [RS:2;jenkins-hbase17:46497-shortCompactions-0] regionserver.HStore(1652): Completed compaction of 3 (all) file(s) in 4b73e169ebfeed69423385de141758d8/cf of 4b73e169ebfeed69423385de141758d8 into bfe6c9ff0ea94075bb6224bcf28cc58e(size=5.1 K), total size for store is 5.1 K. This selection was in queue for 0sec, and took 0sec to execute.
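[Editor's note] The PressureAwareThroughputController lines above report "average throughput is unlimited ... total limit is 50.00 MB/second": compaction writes are throttled against a byte budget, and the controller sleeps only when writes get ahead of it (here, "slept 0 time(s)"). The sketch below is an illustrative simplification, not the real implementation, which additionally scales the limit with memstore/flush pressure.

```python
import time

class SimpleThroughputController:
    """Illustrative byte-budget throttle: sleep just long enough that
    cumulative bytes written never exceed limit_bps (50 MB/s mirrors the
    'total limit is 50.00 MB/second' reported in the log)."""

    def __init__(self, limit_bps=50 * 1024 * 1024):
        self.limit_bps = limit_bps
        self.start = time.monotonic()
        self.written = 0

    def control(self, size):
        # Returns the time slept; 0.0 while under the limit, matching the
        # "slept 0 time(s) and total slept time is 0 ms" case above.
        self.written += size
        earliest_allowed = self.written / self.limit_bps
        sleep_for = earliest_allowed - (time.monotonic() - self.start)
        if sleep_for > 0:
            time.sleep(sleep_for)
            return sleep_for
        return 0.0
```

With the tiny ~15 K compactions in this log, the budget is never exhausted, so the controller never sleeps.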
2023-07-28 11:02:09,810 DEBUG [RS:2;jenkins-hbase17:46497-shortCompactions-0] regionserver.HRegion(2289): Compaction status journal for 4b73e169ebfeed69423385de141758d8:
2023-07-28 11:02:09,810 INFO [RS:2;jenkins-hbase17:46497-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(627): Completed compaction region=TestCP,,1690542127229.4b73e169ebfeed69423385de141758d8., storeName=4b73e169ebfeed69423385de141758d8/cf, priority=13, startTime=1690542129689; duration=0sec
2023-07-28 11:02:09,810 DEBUG [RS:2;jenkins-hbase17:46497-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(659): Status compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0
2023-07-28 11:02:09,867 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=21.59 KB at sequenceid=3014 (bloomFilter=true), to=hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/default/TestCP/4b73e169ebfeed69423385de141758d8/.tmp/cf/4859a499b12a406a9ff9df692b875ec7
2023-07-28 11:02:09,880 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/default/TestCP/4b73e169ebfeed69423385de141758d8/.tmp/cf/4859a499b12a406a9ff9df692b875ec7 as hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/default/TestCP/4b73e169ebfeed69423385de141758d8/cf/4859a499b12a406a9ff9df692b875ec7
2023-07-28 11:02:09,899 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/default/TestCP/4b73e169ebfeed69423385de141758d8/cf/4859a499b12a406a9ff9df692b875ec7, entries=2, sequenceid=3014, filesize=4.8 K
2023-07-28 11:02:09,902 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~21.59 KB/22104, heapSize ~67.39 KB/69008, currentSize=14.77 KB/15120 for 4b73e169ebfeed69423385de141758d8 in 115ms, sequenceid=3014, compaction requested=false
2023-07-28 11:02:09,902 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 4b73e169ebfeed69423385de141758d8:
2023-07-28 11:02:09,931 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=46497] regionserver.HRegion(9158): Flush requested on 4b73e169ebfeed69423385de141758d8
2023-07-28 11:02:09,931 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 4b73e169ebfeed69423385de141758d8 1/1 column families, dataSize=20.53 KB heapSize=64.13 KB
2023-07-28 11:02:09,989 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=20.74 KB at sequenceid=3313 (bloomFilter=true), to=hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/default/TestCP/4b73e169ebfeed69423385de141758d8/.tmp/cf/5298d72a1f054b4d877caa47e957f329
2023-07-28 11:02:10,005 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/default/TestCP/4b73e169ebfeed69423385de141758d8/.tmp/cf/5298d72a1f054b4d877caa47e957f329 as hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/default/TestCP/4b73e169ebfeed69423385de141758d8/cf/5298d72a1f054b4d877caa47e957f329
2023-07-28 11:02:10,021 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/default/TestCP/4b73e169ebfeed69423385de141758d8/cf/5298d72a1f054b4d877caa47e957f329, entries=2, sequenceid=3313, filesize=4.8 K
2023-07-28 11:02:10,025 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~20.74 KB/21240, heapSize ~64.77 KB/66320, currentSize=11.74 KB/12024 for 4b73e169ebfeed69423385de141758d8 in 94ms, sequenceid=3313, compaction requested=true
2023-07-28 11:02:10,026 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 4b73e169ebfeed69423385de141758d8:
2023-07-28 11:02:10,026 DEBUG [RS:2;jenkins-hbase17:46497-shortCompactions-0] compactions.SortedCompactionPolicy(75): Selecting compaction from 3 store files, 0 compacting, 3 eligible, 16 blocking
2023-07-28 11:02:10,026 DEBUG [MemStoreFlusher.0] regionserver.CompactSplit(385): Small Compaction requested: system; Because: MemStoreFlusher.0; compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0
2023-07-28 11:02:10,029 DEBUG [RS:2;jenkins-hbase17:46497-shortCompactions-0] compactions.ExploringCompactionPolicy(116): Exploring compaction algorithm has selected 3 files of size 15066 starting at candidate #0 after considering 1 permutations with 1 in ratio
2023-07-28 11:02:10,029 DEBUG [RS:2;jenkins-hbase17:46497-shortCompactions-0] regionserver.HStore(1912): 4b73e169ebfeed69423385de141758d8/cf is initiating minor compaction (all files)
2023-07-28 11:02:10,029 INFO [RS:2;jenkins-hbase17:46497-shortCompactions-0] regionserver.HRegion(2259): Starting compaction of 4b73e169ebfeed69423385de141758d8/cf in TestCP,,1690542127229.4b73e169ebfeed69423385de141758d8.
2023-07-28 11:02:10,029 INFO [RS:2;jenkins-hbase17:46497-shortCompactions-0] regionserver.HStore(1468): Starting compaction of [hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/default/TestCP/4b73e169ebfeed69423385de141758d8/cf/bfe6c9ff0ea94075bb6224bcf28cc58e, hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/default/TestCP/4b73e169ebfeed69423385de141758d8/cf/4859a499b12a406a9ff9df692b875ec7, hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/default/TestCP/4b73e169ebfeed69423385de141758d8/cf/5298d72a1f054b4d877caa47e957f329] into tmpdir=hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/default/TestCP/4b73e169ebfeed69423385de141758d8/.tmp, totalSize=14.7 K 2023-07-28 11:02:10,031 DEBUG [RS:2;jenkins-hbase17:46497-shortCompactions-0] compactions.Compactor(207): Compacting bfe6c9ff0ea94075bb6224bcf28cc58e, keycount=2, bloomtype=ROW, size=5.1 K, encoding=NONE, compression=NONE, seqNum=2704, earliestPutTs=1731115138992128 2023-07-28 11:02:10,033 DEBUG [RS:2;jenkins-hbase17:46497-shortCompactions-0] compactions.Compactor(207): Compacting 4859a499b12a406a9ff9df692b875ec7, keycount=2, bloomtype=ROW, size=4.8 K, encoding=NONE, compression=NONE, seqNum=3014, earliestPutTs=1731115140741125 2023-07-28 11:02:10,034 DEBUG [RS:2;jenkins-hbase17:46497-shortCompactions-0] compactions.Compactor(207): Compacting 5298d72a1f054b4d877caa47e957f329, keycount=2, bloomtype=ROW, size=4.8 K, encoding=NONE, compression=NONE, seqNum=3313, earliestPutTs=1731115140902912 2023-07-28 11:02:10,078 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=46497] regionserver.HRegion(9158): Flush requested on 4b73e169ebfeed69423385de141758d8 2023-07-28 11:02:10,081 INFO [RS:2;jenkins-hbase17:46497-shortCompactions-0] throttle.PressureAwareThroughputController(145): 
4b73e169ebfeed69423385de141758d8#cf#compaction#15 average throughput is 0.07 MB/second, slept 0 time(s) and total slept time is 0 ms. 0 active operations remaining, total limit is 50.00 MB/second 2023-07-28 11:02:10,078 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 4b73e169ebfeed69423385de141758d8 1/1 column families, dataSize=20.60 KB heapSize=64.34 KB 2023-07-28 11:02:10,377 WARN [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=46497] regionserver.HRegion(4965): Region is too busy due to exceeding memstore size limit. org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=256.0 K, regionName=4b73e169ebfeed69423385de141758d8, server=jenkins-hbase17.apache.org,46497,1690542124299 at org.apache.hadoop.hbase.regionserver.HRegion.checkResources(HRegion.java:4963) at org.apache.hadoop.hbase.regionserver.HRegion.increment(HRegion.java:8417) at org.apache.hadoop.hbase.regionserver.RSRpcServices.increment(RSRpcServices.java:713) at org.apache.hadoop.hbase.regionserver.RSRpcServices.mutate(RSRpcServices.java:2966) at org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:44994) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-28 11:02:10,377 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=46497] ipc.CallRunner(144): callId: 4458 service: ClientService methodName: Mutate size: 200 connection: 136.243.18.41:51772 deadline: 1690542190376, exception=org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=256.0 K, regionName=4b73e169ebfeed69423385de141758d8, server=jenkins-hbase17.apache.org,46497,1690542124299 2023-07-28 11:02:10,381 WARN [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=46497] 
regionserver.HRegion(4965): Region is too busy due to exceeding memstore size limit. org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=256.0 K, regionName=4b73e169ebfeed69423385de141758d8, server=jenkins-hbase17.apache.org,46497,1690542124299 at org.apache.hadoop.hbase.regionserver.HRegion.checkResources(HRegion.java:4963) at org.apache.hadoop.hbase.regionserver.HRegion.increment(HRegion.java:8417) at org.apache.hadoop.hbase.regionserver.RSRpcServices.increment(RSRpcServices.java:713) at org.apache.hadoop.hbase.regionserver.RSRpcServices.mutate(RSRpcServices.java:2966) at org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:44994) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-28 11:02:10,381 WARN [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=46497] regionserver.HRegion(4965): Region is too busy due to exceeding memstore size limit. 
org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=256.0 K, regionName=4b73e169ebfeed69423385de141758d8, server=jenkins-hbase17.apache.org,46497,1690542124299 at org.apache.hadoop.hbase.regionserver.HRegion.checkResources(HRegion.java:4963) at org.apache.hadoop.hbase.regionserver.HRegion.increment(HRegion.java:8417) at org.apache.hadoop.hbase.regionserver.RSRpcServices.increment(RSRpcServices.java:713) at org.apache.hadoop.hbase.regionserver.RSRpcServices.mutate(RSRpcServices.java:2966) at org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:44994) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-28 11:02:10,381 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=46497] ipc.CallRunner(144): callId: 4461 service: ClientService methodName: Mutate size: 199 connection: 136.243.18.41:51772 deadline: 1690542190380, exception=org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=256.0 K, regionName=4b73e169ebfeed69423385de141758d8, server=jenkins-hbase17.apache.org,46497,1690542124299 2023-07-28 11:02:10,382 WARN [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=46497] regionserver.HRegion(4965): Region is too busy due to exceeding memstore size limit. 
org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=256.0 K, regionName=4b73e169ebfeed69423385de141758d8, server=jenkins-hbase17.apache.org,46497,1690542124299 at org.apache.hadoop.hbase.regionserver.HRegion.checkResources(HRegion.java:4963) at org.apache.hadoop.hbase.regionserver.HRegion.increment(HRegion.java:8417) at org.apache.hadoop.hbase.regionserver.RSRpcServices.increment(RSRpcServices.java:713) at org.apache.hadoop.hbase.regionserver.RSRpcServices.mutate(RSRpcServices.java:2966) at org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:44994) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-28 11:02:10,382 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=46497] ipc.CallRunner(144): callId: 4462 service: ClientService methodName: Mutate size: 199 connection: 136.243.18.41:51772 deadline: 1690542190380, exception=org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=256.0 K, regionName=4b73e169ebfeed69423385de141758d8, server=jenkins-hbase17.apache.org,46497,1690542124299 2023-07-28 11:02:10,382 WARN [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=46497] regionserver.HRegion(4965): Region is too busy due to exceeding memstore size limit. 
org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=256.0 K, regionName=4b73e169ebfeed69423385de141758d8, server=jenkins-hbase17.apache.org,46497,1690542124299 at org.apache.hadoop.hbase.regionserver.HRegion.checkResources(HRegion.java:4963) at org.apache.hadoop.hbase.regionserver.HRegion.increment(HRegion.java:8417) at org.apache.hadoop.hbase.regionserver.RSRpcServices.increment(RSRpcServices.java:713) at org.apache.hadoop.hbase.regionserver.RSRpcServices.mutate(RSRpcServices.java:2966) at org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:44994) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-28 11:02:10,382 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=46497] ipc.CallRunner(144): callId: 4463 service: ClientService methodName: Mutate size: 199 connection: 136.243.18.41:51772 deadline: 1690542190380, exception=org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=256.0 K, regionName=4b73e169ebfeed69423385de141758d8, server=jenkins-hbase17.apache.org,46497,1690542124299 2023-07-28 11:02:10,381 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46497] regionserver.HRegion(4965): Region is too busy due to exceeding memstore size limit. 
org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=256.0 K, regionName=4b73e169ebfeed69423385de141758d8, server=jenkins-hbase17.apache.org,46497,1690542124299 at org.apache.hadoop.hbase.regionserver.HRegion.checkResources(HRegion.java:4963) at org.apache.hadoop.hbase.regionserver.HRegion.increment(HRegion.java:8417) at org.apache.hadoop.hbase.regionserver.RSRpcServices.increment(RSRpcServices.java:713) at org.apache.hadoop.hbase.regionserver.RSRpcServices.mutate(RSRpcServices.java:2966) at org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:44994) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-28 11:02:10,383 WARN [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=46497] regionserver.HRegion(4965): Region is too busy due to exceeding memstore size limit. 
org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=256.0 K, regionName=4b73e169ebfeed69423385de141758d8, server=jenkins-hbase17.apache.org,46497,1690542124299 at org.apache.hadoop.hbase.regionserver.HRegion.checkResources(HRegion.java:4963) at org.apache.hadoop.hbase.regionserver.HRegion.increment(HRegion.java:8417) at org.apache.hadoop.hbase.regionserver.RSRpcServices.increment(RSRpcServices.java:713) at org.apache.hadoop.hbase.regionserver.RSRpcServices.mutate(RSRpcServices.java:2966) at org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:44994) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-28 11:02:10,381 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=46497] ipc.CallRunner(144): callId: 4460 service: ClientService methodName: Mutate size: 199 connection: 136.243.18.41:51772 deadline: 1690542190376, exception=org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=256.0 K, regionName=4b73e169ebfeed69423385de141758d8, server=jenkins-hbase17.apache.org,46497,1690542124299 2023-07-28 11:02:10,383 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=46497] ipc.CallRunner(144): callId: 4464 service: ClientService methodName: Mutate size: 199 connection: 136.243.18.41:51772 deadline: 1690542190380, exception=org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=256.0 K, regionName=4b73e169ebfeed69423385de141758d8, server=jenkins-hbase17.apache.org,46497,1690542124299 2023-07-28 11:02:10,383 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46497] ipc.CallRunner(144): callId: 4459 service: ClientService methodName: Mutate size: 199 connection: 136.243.18.41:51772 deadline: 1690542190376, 
exception=org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=256.0 K, regionName=4b73e169ebfeed69423385de141758d8, server=jenkins-hbase17.apache.org,46497,1690542124299 2023-07-28 11:02:10,385 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46497] regionserver.HRegion(4965): Region is too busy due to exceeding memstore size limit. org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=256.0 K, regionName=4b73e169ebfeed69423385de141758d8, server=jenkins-hbase17.apache.org,46497,1690542124299 at org.apache.hadoop.hbase.regionserver.HRegion.checkResources(HRegion.java:4963) at org.apache.hadoop.hbase.regionserver.HRegion.increment(HRegion.java:8417) at org.apache.hadoop.hbase.regionserver.RSRpcServices.increment(RSRpcServices.java:713) at org.apache.hadoop.hbase.regionserver.RSRpcServices.mutate(RSRpcServices.java:2966) at org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:44994) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-28 11:02:10,386 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46497] ipc.CallRunner(144): callId: 4465 service: ClientService methodName: Mutate size: 200 connection: 136.243.18.41:51772 deadline: 1690542190385, exception=org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=256.0 K, regionName=4b73e169ebfeed69423385de141758d8, server=jenkins-hbase17.apache.org,46497,1690542124299 2023-07-28 11:02:10,385 WARN [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=46497] regionserver.HRegion(4965): Region is too busy due to exceeding memstore size limit. 
org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=256.0 K, regionName=4b73e169ebfeed69423385de141758d8, server=jenkins-hbase17.apache.org,46497,1690542124299
    at org.apache.hadoop.hbase.regionserver.HRegion.checkResources(HRegion.java:4963)
    at org.apache.hadoop.hbase.regionserver.HRegion.increment(HRegion.java:8417)
    at org.apache.hadoop.hbase.regionserver.RSRpcServices.increment(RSRpcServices.java:713)
    at org.apache.hadoop.hbase.regionserver.RSRpcServices.mutate(RSRpcServices.java:2966)
    at org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:44994)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349)
2023-07-28 11:02:10,386 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=46497] ipc.CallRunner(144): callId: 4466 service: ClientService methodName: Mutate size: 199 connection: 136.243.18.41:51772 deadline: 1690542190385, exception=org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=256.0 K, regionName=4b73e169ebfeed69423385de141758d8, server=jenkins-hbase17.apache.org,46497,1690542124299
2023-07-28 11:02:10,385 WARN [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=46497] regionserver.HRegion(4965): Region is too busy due to exceeding memstore size limit.
org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=256.0 K, regionName=4b73e169ebfeed69423385de141758d8, server=jenkins-hbase17.apache.org,46497,1690542124299
    at org.apache.hadoop.hbase.regionserver.HRegion.checkResources(HRegion.java:4963)
    at org.apache.hadoop.hbase.regionserver.HRegion.increment(HRegion.java:8417)
    at org.apache.hadoop.hbase.regionserver.RSRpcServices.increment(RSRpcServices.java:713)
    at org.apache.hadoop.hbase.regionserver.RSRpcServices.mutate(RSRpcServices.java:2966)
    at org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:44994)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349)
2023-07-28 11:02:10,387 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=46497] ipc.CallRunner(144): callId: 4467 service: ClientService methodName: Mutate size: 200 connection: 136.243.18.41:51772 deadline: 1690542190385, exception=org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=256.0 K, regionName=4b73e169ebfeed69423385de141758d8, server=jenkins-hbase17.apache.org,46497,1690542124299
2023-07-28 11:02:10,505 WARN [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=46497] regionserver.HRegion(4965): Region is too busy due to exceeding memstore size limit.
org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=256.0 K, regionName=4b73e169ebfeed69423385de141758d8, server=jenkins-hbase17.apache.org,46497,1690542124299
    at org.apache.hadoop.hbase.regionserver.HRegion.checkResources(HRegion.java:4963)
    at org.apache.hadoop.hbase.regionserver.HRegion.increment(HRegion.java:8417)
    at org.apache.hadoop.hbase.regionserver.RSRpcServices.increment(RSRpcServices.java:713)
    at org.apache.hadoop.hbase.regionserver.RSRpcServices.mutate(RSRpcServices.java:2966)
    at org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:44994)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349)
2023-07-28 11:02:10,505 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=46497] ipc.CallRunner(144): callId: 4478 service: ClientService methodName: Mutate size: 199 connection: 136.243.18.41:51772 deadline: 1690542190505, exception=org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=256.0 K, regionName=4b73e169ebfeed69423385de141758d8, server=jenkins-hbase17.apache.org,46497,1690542124299
2023-07-28 11:02:10,505 WARN [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=46497] regionserver.HRegion(4965): Region is too busy due to exceeding memstore size limit.
org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=256.0 K, regionName=4b73e169ebfeed69423385de141758d8, server=jenkins-hbase17.apache.org,46497,1690542124299
    at org.apache.hadoop.hbase.regionserver.HRegion.checkResources(HRegion.java:4963)
    at org.apache.hadoop.hbase.regionserver.HRegion.increment(HRegion.java:8417)
    at org.apache.hadoop.hbase.regionserver.RSRpcServices.increment(RSRpcServices.java:713)
    at org.apache.hadoop.hbase.regionserver.RSRpcServices.mutate(RSRpcServices.java:2966)
    at org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:44994)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349)
2023-07-28 11:02:10,506 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=46497] ipc.CallRunner(144): callId: 4481 service: ClientService methodName: Mutate size: 200 connection: 136.243.18.41:51772 deadline: 1690542190505, exception=org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=256.0 K, regionName=4b73e169ebfeed69423385de141758d8, server=jenkins-hbase17.apache.org,46497,1690542124299
2023-07-28 11:02:10,505 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46497] regionserver.HRegion(4965): Region is too busy due to exceeding memstore size limit.
org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=256.0 K, regionName=4b73e169ebfeed69423385de141758d8, server=jenkins-hbase17.apache.org,46497,1690542124299
    at org.apache.hadoop.hbase.regionserver.HRegion.checkResources(HRegion.java:4963)
    at org.apache.hadoop.hbase.regionserver.HRegion.increment(HRegion.java:8417)
    at org.apache.hadoop.hbase.regionserver.RSRpcServices.increment(RSRpcServices.java:713)
    at org.apache.hadoop.hbase.regionserver.RSRpcServices.mutate(RSRpcServices.java:2966)
    at org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:44994)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349)
2023-07-28 11:02:10,505 WARN [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=46497] regionserver.HRegion(4965): Region is too busy due to exceeding memstore size limit.
org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=256.0 K, regionName=4b73e169ebfeed69423385de141758d8, server=jenkins-hbase17.apache.org,46497,1690542124299
    at org.apache.hadoop.hbase.regionserver.HRegion.checkResources(HRegion.java:4963)
    at org.apache.hadoop.hbase.regionserver.HRegion.increment(HRegion.java:8417)
    at org.apache.hadoop.hbase.regionserver.RSRpcServices.increment(RSRpcServices.java:713)
    at org.apache.hadoop.hbase.regionserver.RSRpcServices.mutate(RSRpcServices.java:2966)
    at org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:44994)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349)
2023-07-28 11:02:10,506 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46497] ipc.CallRunner(144): callId: 4480 service: ClientService methodName: Mutate size: 199 connection: 136.243.18.41:51772 deadline: 1690542190505, exception=org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=256.0 K, regionName=4b73e169ebfeed69423385de141758d8, server=jenkins-hbase17.apache.org,46497,1690542124299
2023-07-28 11:02:10,506 WARN [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=46497] regionserver.HRegion(4965): Region is too busy due to exceeding memstore size limit.
org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=256.0 K, regionName=4b73e169ebfeed69423385de141758d8, server=jenkins-hbase17.apache.org,46497,1690542124299
    at org.apache.hadoop.hbase.regionserver.HRegion.checkResources(HRegion.java:4963)
    at org.apache.hadoop.hbase.regionserver.HRegion.increment(HRegion.java:8417)
    at org.apache.hadoop.hbase.regionserver.RSRpcServices.increment(RSRpcServices.java:713)
    at org.apache.hadoop.hbase.regionserver.RSRpcServices.mutate(RSRpcServices.java:2966)
    at org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:44994)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349)
2023-07-28 11:02:10,507 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46497] regionserver.HRegion(4965): Region is too busy due to exceeding memstore size limit.
org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=256.0 K, regionName=4b73e169ebfeed69423385de141758d8, server=jenkins-hbase17.apache.org,46497,1690542124299
    at org.apache.hadoop.hbase.regionserver.HRegion.checkResources(HRegion.java:4963)
    at org.apache.hadoop.hbase.regionserver.HRegion.increment(HRegion.java:8417)
    at org.apache.hadoop.hbase.regionserver.RSRpcServices.increment(RSRpcServices.java:713)
    at org.apache.hadoop.hbase.regionserver.RSRpcServices.mutate(RSRpcServices.java:2966)
    at org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:44994)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349)
2023-07-28 11:02:10,506 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=46497] ipc.CallRunner(144): callId: 4479 service: ClientService methodName: Mutate size: 200 connection: 136.243.18.41:51772 deadline: 1690542190505, exception=org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=256.0 K, regionName=4b73e169ebfeed69423385de141758d8, server=jenkins-hbase17.apache.org,46497,1690542124299
2023-07-28 11:02:10,507 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46497] ipc.CallRunner(144): callId: 4483 service: ClientService methodName: Mutate size: 199 connection: 136.243.18.41:51772 deadline: 1690542190505, exception=org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=256.0 K, regionName=4b73e169ebfeed69423385de141758d8, server=jenkins-hbase17.apache.org,46497,1690542124299
2023-07-28 11:02:10,507 WARN [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=46497] regionserver.HRegion(4965): Region is too busy due to exceeding memstore size limit.
org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=256.0 K, regionName=4b73e169ebfeed69423385de141758d8, server=jenkins-hbase17.apache.org,46497,1690542124299
    at org.apache.hadoop.hbase.regionserver.HRegion.checkResources(HRegion.java:4963)
    at org.apache.hadoop.hbase.regionserver.HRegion.increment(HRegion.java:8417)
    at org.apache.hadoop.hbase.regionserver.RSRpcServices.increment(RSRpcServices.java:713)
    at org.apache.hadoop.hbase.regionserver.RSRpcServices.mutate(RSRpcServices.java:2966)
    at org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:44994)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349)
2023-07-28 11:02:10,507 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=46497] ipc.CallRunner(144): callId: 4484 service: ClientService methodName: Mutate size: 199 connection: 136.243.18.41:51772 deadline: 1690542190505, exception=org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=256.0 K, regionName=4b73e169ebfeed69423385de141758d8, server=jenkins-hbase17.apache.org,46497,1690542124299
2023-07-28 11:02:10,507 WARN [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=46497] regionserver.HRegion(4965): Region is too busy due to exceeding memstore size limit.
org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=256.0 K, regionName=4b73e169ebfeed69423385de141758d8, server=jenkins-hbase17.apache.org,46497,1690542124299
    at org.apache.hadoop.hbase.regionserver.HRegion.checkResources(HRegion.java:4963)
    at org.apache.hadoop.hbase.regionserver.HRegion.increment(HRegion.java:8417)
    at org.apache.hadoop.hbase.regionserver.RSRpcServices.increment(RSRpcServices.java:713)
    at org.apache.hadoop.hbase.regionserver.RSRpcServices.mutate(RSRpcServices.java:2966)
    at org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:44994)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349)
2023-07-28 11:02:10,508 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=46497] ipc.CallRunner(144): callId: 4486 service: ClientService methodName: Mutate size: 199 connection: 136.243.18.41:51772 deadline: 1690542190506, exception=org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=256.0 K, regionName=4b73e169ebfeed69423385de141758d8, server=jenkins-hbase17.apache.org,46497,1690542124299
2023-07-28 11:02:10,507 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=46497] ipc.CallRunner(144): callId: 4482 service: ClientService methodName: Mutate size: 199 connection: 136.243.18.41:51772 deadline: 1690542190505, exception=org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=256.0 K, regionName=4b73e169ebfeed69423385de141758d8, server=jenkins-hbase17.apache.org,46497,1690542124299
2023-07-28 11:02:10,507 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46497] regionserver.HRegion(4965): Region is too busy due to exceeding memstore size limit.
org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=256.0 K, regionName=4b73e169ebfeed69423385de141758d8, server=jenkins-hbase17.apache.org,46497,1690542124299
    at org.apache.hadoop.hbase.regionserver.HRegion.checkResources(HRegion.java:4963)
    at org.apache.hadoop.hbase.regionserver.HRegion.increment(HRegion.java:8417)
    at org.apache.hadoop.hbase.regionserver.RSRpcServices.increment(RSRpcServices.java:713)
    at org.apache.hadoop.hbase.regionserver.RSRpcServices.mutate(RSRpcServices.java:2966)
    at org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:44994)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349)
2023-07-28 11:02:10,508 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46497] ipc.CallRunner(144): callId: 4485 service: ClientService methodName: Mutate size: 200 connection: 136.243.18.41:51772 deadline: 1690542190505, exception=org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=256.0 K, regionName=4b73e169ebfeed69423385de141758d8, server=jenkins-hbase17.apache.org,46497,1690542124299
2023-07-28 11:02:10,508 WARN [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=46497] regionserver.HRegion(4965): Region is too busy due to exceeding memstore size limit.
org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=256.0 K, regionName=4b73e169ebfeed69423385de141758d8, server=jenkins-hbase17.apache.org,46497,1690542124299
    at org.apache.hadoop.hbase.regionserver.HRegion.checkResources(HRegion.java:4963)
    at org.apache.hadoop.hbase.regionserver.HRegion.increment(HRegion.java:8417)
    at org.apache.hadoop.hbase.regionserver.RSRpcServices.increment(RSRpcServices.java:713)
    at org.apache.hadoop.hbase.regionserver.RSRpcServices.mutate(RSRpcServices.java:2966)
    at org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:44994)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349)
2023-07-28 11:02:10,511 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=46497] ipc.CallRunner(144): callId: 4487 service: ClientService methodName: Mutate size: 199 connection: 136.243.18.41:51772 deadline: 1690542190506, exception=org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=256.0 K, regionName=4b73e169ebfeed69423385de141758d8, server=jenkins-hbase17.apache.org,46497,1690542124299
2023-07-28 11:02:10,541 DEBUG [RS:2;jenkins-hbase17:46497-shortCompactions-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/default/TestCP/4b73e169ebfeed69423385de141758d8/.tmp/cf/14ab693eef4648d4ba0a7b921533fe57 as hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/default/TestCP/4b73e169ebfeed69423385de141758d8/cf/14ab693eef4648d4ba0a7b921533fe57
2023-07-28 11:02:10,543 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=21.09 KB at sequenceid=3616 (bloomFilter=true), to=hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/default/TestCP/4b73e169ebfeed69423385de141758d8/.tmp/cf/37a0800493ae4bc2b4430ecfdcaeb13e
2023-07-28 11:02:10,551 INFO [RS:2;jenkins-hbase17:46497-shortCompactions-0] regionserver.HStore(1652): Completed compaction of 3 (all) file(s) in 4b73e169ebfeed69423385de141758d8/cf of 4b73e169ebfeed69423385de141758d8 into 14ab693eef4648d4ba0a7b921533fe57(size=5.2 K), total size for store is 5.2 K. This selection was in queue for 0sec, and took 0sec to execute.
2023-07-28 11:02:10,551 DEBUG [RS:2;jenkins-hbase17:46497-shortCompactions-0] regionserver.HRegion(2289): Compaction status journal for 4b73e169ebfeed69423385de141758d8:
2023-07-28 11:02:10,551 INFO [RS:2;jenkins-hbase17:46497-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(627): Completed compaction region=TestCP,,1690542127229.4b73e169ebfeed69423385de141758d8., storeName=4b73e169ebfeed69423385de141758d8/cf, priority=13, startTime=1690542130026; duration=0sec
2023-07-28 11:02:10,551 DEBUG [RS:2;jenkins-hbase17:46497-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(659): Status compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0
2023-07-28 11:02:10,553 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/default/TestCP/4b73e169ebfeed69423385de141758d8/.tmp/cf/37a0800493ae4bc2b4430ecfdcaeb13e as hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/default/TestCP/4b73e169ebfeed69423385de141758d8/cf/37a0800493ae4bc2b4430ecfdcaeb13e
2023-07-28 11:02:10,561 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/default/TestCP/4b73e169ebfeed69423385de141758d8/cf/37a0800493ae4bc2b4430ecfdcaeb13e, entries=2, sequenceid=3616, filesize=4.8 K
2023-07-28 11:02:10,562 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~21.09 KB/21600, heapSize ~65.86 KB/67440, currentSize=61.24 KB/62712 for 4b73e169ebfeed69423385de141758d8 in 484ms, sequenceid=3616, compaction requested=false
2023-07-28 11:02:10,562 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 4b73e169ebfeed69423385de141758d8:
2023-07-28 11:02:10,709 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=46497] regionserver.HRegion(9158): Flush requested on 4b73e169ebfeed69423385de141758d8
2023-07-28 11:02:10,709 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 4b73e169ebfeed69423385de141758d8 1/1 column families, dataSize=61.38 KB heapSize=191.22 KB
2023-07-28 11:02:10,762 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=61.45 KB at sequenceid=4494 (bloomFilter=true), to=hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/default/TestCP/4b73e169ebfeed69423385de141758d8/.tmp/cf/6c72cb89706846c982dec393c2c33a51
2023-07-28 11:02:10,778 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/default/TestCP/4b73e169ebfeed69423385de141758d8/.tmp/cf/6c72cb89706846c982dec393c2c33a51 as hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/default/TestCP/4b73e169ebfeed69423385de141758d8/cf/6c72cb89706846c982dec393c2c33a51
2023-07-28 11:02:10,797 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/default/TestCP/4b73e169ebfeed69423385de141758d8/cf/6c72cb89706846c982dec393c2c33a51, entries=2, sequenceid=4494, filesize=4.8 K
2023-07-28 11:02:10,799 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~61.45 KB/62928, heapSize ~191.42 KB/196016, currentSize=10.76 KB/11016 for 4b73e169ebfeed69423385de141758d8 in 90ms, sequenceid=4494, compaction requested=true
2023-07-28 11:02:10,799 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 4b73e169ebfeed69423385de141758d8:
2023-07-28 11:02:10,800 DEBUG [MemStoreFlusher.0] regionserver.CompactSplit(385): Small Compaction requested: system; Because: MemStoreFlusher.0; compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0
2023-07-28 11:02:10,800 DEBUG [RS:2;jenkins-hbase17:46497-shortCompactions-0] compactions.SortedCompactionPolicy(75): Selecting compaction from 3 store files, 0 compacting, 3 eligible, 16 blocking
2023-07-28 11:02:10,803 DEBUG [RS:2;jenkins-hbase17:46497-shortCompactions-0] compactions.ExploringCompactionPolicy(116): Exploring compaction algorithm has selected 3 files of size 15168 starting at candidate #0 after considering 1 permutations with 1 in ratio
2023-07-28 11:02:10,803 DEBUG [RS:2;jenkins-hbase17:46497-shortCompactions-0] regionserver.HStore(1912): 4b73e169ebfeed69423385de141758d8/cf is initiating minor compaction (all files)
2023-07-28 11:02:10,803 INFO [RS:2;jenkins-hbase17:46497-shortCompactions-0] regionserver.HRegion(2259): Starting compaction of 4b73e169ebfeed69423385de141758d8/cf in TestCP,,1690542127229.4b73e169ebfeed69423385de141758d8.
2023-07-28 11:02:10,804 INFO [RS:2;jenkins-hbase17:46497-shortCompactions-0] regionserver.HStore(1468): Starting compaction of [hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/default/TestCP/4b73e169ebfeed69423385de141758d8/cf/14ab693eef4648d4ba0a7b921533fe57, hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/default/TestCP/4b73e169ebfeed69423385de141758d8/cf/37a0800493ae4bc2b4430ecfdcaeb13e, hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/default/TestCP/4b73e169ebfeed69423385de141758d8/cf/6c72cb89706846c982dec393c2c33a51] into tmpdir=hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/default/TestCP/4b73e169ebfeed69423385de141758d8/.tmp, totalSize=14.8 K
2023-07-28 11:02:10,805 DEBUG [RS:2;jenkins-hbase17:46497-shortCompactions-0] compactions.Compactor(207): Compacting 14ab693eef4648d4ba0a7b921533fe57, keycount=2, bloomtype=ROW, size=5.2 K, encoding=NONE, compression=NONE, seqNum=3313, earliestPutTs=1731115138992128
2023-07-28 11:02:10,806 DEBUG [RS:2;jenkins-hbase17:46497-shortCompactions-0] compactions.Compactor(207): Compacting 37a0800493ae4bc2b4430ecfdcaeb13e, keycount=2, bloomtype=ROW, size=4.8 K, encoding=NONE, compression=NONE, seqNum=3616, earliestPutTs=1731115141049345
2023-07-28 11:02:10,807 DEBUG [RS:2;jenkins-hbase17:46497-shortCompactions-0] compactions.Compactor(207): Compacting 6c72cb89706846c982dec393c2c33a51, keycount=2, bloomtype=ROW, size=4.8 K, encoding=NONE, compression=NONE, seqNum=4494, earliestPutTs=1731115141203970
2023-07-28 11:02:10,848 INFO [RS:2;jenkins-hbase17:46497-shortCompactions-0] throttle.PressureAwareThroughputController(145): 4b73e169ebfeed69423385de141758d8#cf#compaction#18 average throughput is unlimited, slept 0 time(s) and total slept time is 0 ms. 0 active operations remaining, total limit is 50.00 MB/second
2023-07-28 11:02:10,873 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46497] regionserver.HRegion(9158): Flush requested on 4b73e169ebfeed69423385de141758d8
2023-07-28 11:02:10,873 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 4b73e169ebfeed69423385de141758d8 1/1 column families, dataSize=20.60 KB heapSize=64.34 KB
2023-07-28 11:02:10,911 DEBUG [RS:2;jenkins-hbase17:46497-shortCompactions-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/default/TestCP/4b73e169ebfeed69423385de141758d8/.tmp/cf/db9abf087fcd4a6c805b3aed6230ae5f as hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/default/TestCP/4b73e169ebfeed69423385de141758d8/cf/db9abf087fcd4a6c805b3aed6230ae5f
2023-07-28 11:02:10,916 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=21.73 KB at sequenceid=4806 (bloomFilter=true), to=hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/default/TestCP/4b73e169ebfeed69423385de141758d8/.tmp/cf/6abfbfaef88a47f48a4baf7bc57b5f90
2023-07-28 11:02:10,930 INFO [RS:2;jenkins-hbase17:46497-shortCompactions-0] regionserver.HStore(1652): Completed compaction of 3 (all) file(s) in 4b73e169ebfeed69423385de141758d8/cf of 4b73e169ebfeed69423385de141758d8 into db9abf087fcd4a6c805b3aed6230ae5f(size=5.3 K), total size for store is 5.3 K. This selection was in queue for 0sec, and took 0sec to execute.
2023-07-28 11:02:10,930 DEBUG [RS:2;jenkins-hbase17:46497-shortCompactions-0] regionserver.HRegion(2289): Compaction status journal for 4b73e169ebfeed69423385de141758d8:
2023-07-28 11:02:10,930 INFO [RS:2;jenkins-hbase17:46497-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(627): Completed compaction region=TestCP,,1690542127229.4b73e169ebfeed69423385de141758d8., storeName=4b73e169ebfeed69423385de141758d8/cf, priority=13, startTime=1690542130800; duration=0sec
2023-07-28 11:02:10,930 DEBUG [RS:2;jenkins-hbase17:46497-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(659): Status compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0
2023-07-28 11:02:10,940 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/default/TestCP/4b73e169ebfeed69423385de141758d8/.tmp/cf/6abfbfaef88a47f48a4baf7bc57b5f90 as hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/default/TestCP/4b73e169ebfeed69423385de141758d8/cf/6abfbfaef88a47f48a4baf7bc57b5f90
2023-07-28 11:02:10,980 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/default/TestCP/4b73e169ebfeed69423385de141758d8/cf/6abfbfaef88a47f48a4baf7bc57b5f90, entries=2, sequenceid=4806, filesize=4.8 K
2023-07-28 11:02:10,983 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~21.73 KB/22248, heapSize ~67.83 KB/69456, currentSize=9.98 KB/10224 for 4b73e169ebfeed69423385de141758d8 in 110ms, sequenceid=4806, compaction requested=false
2023-07-28 11:02:10,983 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 4b73e169ebfeed69423385de141758d8:
2023-07-28 11:02:11,038 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=46497] regionserver.HRegion(9158): Flush requested on 4b73e169ebfeed69423385de141758d8
2023-07-28 11:02:11,038 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 4b73e169ebfeed69423385de141758d8 1/1 column families, dataSize=20.60 KB heapSize=64.34 KB
2023-07-28 11:02:11,111 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=20.81 KB at sequenceid=5106 (bloomFilter=true), to=hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/default/TestCP/4b73e169ebfeed69423385de141758d8/.tmp/cf/d4455de3b6af48c7af2fdcfe63373353
2023-07-28 11:02:11,121 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/default/TestCP/4b73e169ebfeed69423385de141758d8/.tmp/cf/d4455de3b6af48c7af2fdcfe63373353 as hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/default/TestCP/4b73e169ebfeed69423385de141758d8/cf/d4455de3b6af48c7af2fdcfe63373353
2023-07-28 11:02:11,132 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/default/TestCP/4b73e169ebfeed69423385de141758d8/cf/d4455de3b6af48c7af2fdcfe63373353, entries=2, sequenceid=5106, filesize=4.8 K
2023-07-28 11:02:11,133 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~20.81 KB/21312, heapSize ~64.98 KB/66544, currentSize=19.55 KB/20016 for 4b73e169ebfeed69423385de141758d8 in 95ms, sequenceid=5106, compaction requested=true
2023-07-28 11:02:11,134 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 4b73e169ebfeed69423385de141758d8:
2023-07-28 11:02:11,134 DEBUG [MemStoreFlusher.0] regionserver.CompactSplit(385): Small Compaction requested: system; Because: MemStoreFlusher.0; compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0
2023-07-28 11:02:11,134 DEBUG [RS:2;jenkins-hbase17:46497-shortCompactions-0] compactions.SortedCompactionPolicy(75): Selecting compaction from 3 store files, 0 compacting, 3 eligible, 16 blocking
2023-07-28 11:02:11,135 DEBUG [RS:2;jenkins-hbase17:46497-shortCompactions-0] compactions.ExploringCompactionPolicy(116): Exploring compaction algorithm has selected 3 files of size 15270 starting at candidate #0 after considering 1 permutations with 1 in ratio
2023-07-28 11:02:11,136 DEBUG [RS:2;jenkins-hbase17:46497-shortCompactions-0] regionserver.HStore(1912): 4b73e169ebfeed69423385de141758d8/cf is initiating minor compaction (all files)
2023-07-28 11:02:11,136 INFO [RS:2;jenkins-hbase17:46497-shortCompactions-0] regionserver.HRegion(2259): Starting compaction of 4b73e169ebfeed69423385de141758d8/cf in TestCP,,1690542127229.4b73e169ebfeed69423385de141758d8.
2023-07-28 11:02:11,136 INFO [RS:2;jenkins-hbase17:46497-shortCompactions-0] regionserver.HStore(1468): Starting compaction of [hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/default/TestCP/4b73e169ebfeed69423385de141758d8/cf/db9abf087fcd4a6c805b3aed6230ae5f, hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/default/TestCP/4b73e169ebfeed69423385de141758d8/cf/6abfbfaef88a47f48a4baf7bc57b5f90, hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/default/TestCP/4b73e169ebfeed69423385de141758d8/cf/d4455de3b6af48c7af2fdcfe63373353] into tmpdir=hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/default/TestCP/4b73e169ebfeed69423385de141758d8/.tmp, totalSize=14.9 K
2023-07-28 11:02:11,137 DEBUG [RS:2;jenkins-hbase17:46497-shortCompactions-0] compactions.Compactor(207): Compacting db9abf087fcd4a6c805b3aed6230ae5f, keycount=2, bloomtype=ROW, size=5.3 K, encoding=NONE, compression=NONE, seqNum=4494, earliestPutTs=1731115138992128
2023-07-28 11:02:11,137 DEBUG [RS:2;jenkins-hbase17:46497-shortCompactions-0] compactions.Compactor(207): Compacting 6abfbfaef88a47f48a4baf7bc57b5f90, keycount=2, bloomtype=ROW, size=4.8 K, encoding=NONE, compression=NONE, seqNum=4806, earliestPutTs=1731115141847040
2023-07-28 11:02:11,138 DEBUG [RS:2;jenkins-hbase17:46497-shortCompactions-0] compactions.Compactor(207): Compacting d4455de3b6af48c7af2fdcfe63373353, keycount=2, bloomtype=ROW, size=4.8 K, encoding=NONE, compression=NONE, seqNum=5106, earliestPutTs=1731115142018050
2023-07-28 11:02:11,146 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46497] regionserver.HRegion(9158): Flush requested on 4b73e169ebfeed69423385de141758d8
2023-07-28 11:02:11,147 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 4b73e169ebfeed69423385de141758d8 1/1 column families, dataSize=20.60 KB heapSize=64.34 KB
2023-07-28 11:02:11,161 INFO [RS:2;jenkins-hbase17:46497-shortCompactions-0] throttle.PressureAwareThroughputController(145): 4b73e169ebfeed69423385de141758d8#cf#compaction#21 average throughput is 0.07 MB/second, slept 0 time(s) and total slept time is 0 ms. 0 active operations remaining, total limit is 50.00 MB/second
2023-07-28 11:02:11,333 DEBUG [RS:2;jenkins-hbase17:46497-shortCompactions-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/default/TestCP/4b73e169ebfeed69423385de141758d8/.tmp/cf/605416f2874d494282b8924c271a3bbe as hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/default/TestCP/4b73e169ebfeed69423385de141758d8/cf/605416f2874d494282b8924c271a3bbe
2023-07-28 11:02:11,354 INFO [RS:2;jenkins-hbase17:46497-shortCompactions-0] regionserver.HStore(1652): Completed compaction of 3 (all) file(s) in 4b73e169ebfeed69423385de141758d8/cf of 4b73e169ebfeed69423385de141758d8 into 605416f2874d494282b8924c271a3bbe(size=5.4 K), total size for store is 5.4 K. This selection was in queue for 0sec, and took 0sec to execute.
2023-07-28 11:02:11,354 DEBUG [RS:2;jenkins-hbase17:46497-shortCompactions-0] regionserver.HRegion(2289): Compaction status journal for 4b73e169ebfeed69423385de141758d8:
2023-07-28 11:02:11,354 INFO [RS:2;jenkins-hbase17:46497-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(627): Completed compaction region=TestCP,,1690542127229.4b73e169ebfeed69423385de141758d8., storeName=4b73e169ebfeed69423385de141758d8/cf, priority=13, startTime=1690542131134; duration=0sec
2023-07-28 11:02:11,354 DEBUG [RS:2;jenkins-hbase17:46497-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(659): Status compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0
2023-07-28 11:02:11,516 WARN [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46497] regionserver.HRegion(4965): Region is too busy due to exceeding memstore size limit.
org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=256.0 K, regionName=4b73e169ebfeed69423385de141758d8, server=jenkins-hbase17.apache.org,46497,1690542124299
    at org.apache.hadoop.hbase.regionserver.HRegion.checkResources(HRegion.java:4963)
    at org.apache.hadoop.hbase.regionserver.HRegion.increment(HRegion.java:8417)
    at org.apache.hadoop.hbase.regionserver.RSRpcServices.increment(RSRpcServices.java:713)
    at org.apache.hadoop.hbase.regionserver.RSRpcServices.mutate(RSRpcServices.java:2966)
    at org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:44994)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349)
2023-07-28 11:02:11,516 WARN [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=46497] regionserver.HRegion(4965): Region is too busy due to exceeding
memstore size limit. org.apache.hadoop.hbase.RegionTooBusyException: Over memstore limit=256.0 K, regionName=4b73e169ebfeed69423385de141758d8, server=jenkins-hbase17.apache.org,46497,1690542124299 at org.apache.hadoop.hbase.regionserver.HRegion.checkResources(HRegion.java:4963) at org.apache.hadoop.hbase.regionserver.HRegion.increment(HRegion.java:8417) at org.apache.hadoop.hbase.regionserver.RSRpcServices.increment(RSRpcServices.java:713) at org.apache.hadoop.hbase.regionserver.RSRpcServices.mutate(RSRpcServices.java:2966) at org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:44994) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-28 11:02:11,678 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=20.81 KB at sequenceid=5405 (bloomFilter=true), to=hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/default/TestCP/4b73e169ebfeed69423385de141758d8/.tmp/cf/7b3d99560c11488982bf904a7a95bd8b 2023-07-28 11:02:11,687 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/default/TestCP/4b73e169ebfeed69423385de141758d8/.tmp/cf/7b3d99560c11488982bf904a7a95bd8b as
hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/default/TestCP/4b73e169ebfeed69423385de141758d8/cf/7b3d99560c11488982bf904a7a95bd8b 2023-07-28 11:02:11,702 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/default/TestCP/4b73e169ebfeed69423385de141758d8/cf/7b3d99560c11488982bf904a7a95bd8b, entries=2, sequenceid=5405, filesize=4.8 K 2023-07-28 11:02:11,705 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~20.81 KB/21312, heapSize ~64.98 KB/66544, currentSize=61.45 KB/62928 for 4b73e169ebfeed69423385de141758d8 in 558ms, sequenceid=5405, compaction requested=false 2023-07-28 11:02:11,705 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 4b73e169ebfeed69423385de141758d8: 2023-07-28 11:02:11,824 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=46497] regionserver.HRegion(9158): Flush requested on 4b73e169ebfeed69423385de141758d8 2023-07-28 11:02:11,824 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 4b73e169ebfeed69423385de141758d8 1/1 column families, dataSize=61.66 KB heapSize=192.09 KB 2023-07-28 11:02:11,846 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-hbase.properties,hadoop-metrics2.properties 2023-07-28 11:02:11,878 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=61.88 KB at sequenceid=6289 (bloomFilter=true), to=hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/default/TestCP/4b73e169ebfeed69423385de141758d8/.tmp/cf/42dd36caa83c4020be93f18ca9a30af3 2023-07-28 11:02:11,888 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing 
hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/default/TestCP/4b73e169ebfeed69423385de141758d8/.tmp/cf/42dd36caa83c4020be93f18ca9a30af3 as hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/default/TestCP/4b73e169ebfeed69423385de141758d8/cf/42dd36caa83c4020be93f18ca9a30af3 2023-07-28 11:02:11,899 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/default/TestCP/4b73e169ebfeed69423385de141758d8/cf/42dd36caa83c4020be93f18ca9a30af3, entries=2, sequenceid=6289, filesize=4.8 K 2023-07-28 11:02:11,901 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~61.88 KB/63360, heapSize ~192.73 KB/197360, currentSize=7.95 KB/8136 for 4b73e169ebfeed69423385de141758d8 in 77ms, sequenceid=6289, compaction requested=true 2023-07-28 11:02:11,901 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 4b73e169ebfeed69423385de141758d8: 2023-07-28 11:02:11,901 DEBUG [MemStoreFlusher.0] regionserver.CompactSplit(385): Small Compaction requested: system; Because: MemStoreFlusher.0; compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-07-28 11:02:11,901 DEBUG [RS:2;jenkins-hbase17:46497-shortCompactions-0] compactions.SortedCompactionPolicy(75): Selecting compaction from 3 store files, 0 compacting, 3 eligible, 16 blocking 2023-07-28 11:02:11,905 DEBUG [RS:2;jenkins-hbase17:46497-shortCompactions-0] compactions.ExploringCompactionPolicy(116): Exploring compaction algorithm has selected 3 files of size 15372 starting at candidate #0 after considering 1 permutations with 1 in ratio 2023-07-28 11:02:11,905 DEBUG [RS:2;jenkins-hbase17:46497-shortCompactions-0] regionserver.HStore(1912): 4b73e169ebfeed69423385de141758d8/cf is initiating minor compaction (all files) 2023-07-28 11:02:11,905 INFO 
[RS:2;jenkins-hbase17:46497-shortCompactions-0] regionserver.HRegion(2259): Starting compaction of 4b73e169ebfeed69423385de141758d8/cf in TestCP,,1690542127229.4b73e169ebfeed69423385de141758d8. 2023-07-28 11:02:11,905 INFO [RS:2;jenkins-hbase17:46497-shortCompactions-0] regionserver.HStore(1468): Starting compaction of [hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/default/TestCP/4b73e169ebfeed69423385de141758d8/cf/605416f2874d494282b8924c271a3bbe, hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/default/TestCP/4b73e169ebfeed69423385de141758d8/cf/7b3d99560c11488982bf904a7a95bd8b, hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/default/TestCP/4b73e169ebfeed69423385de141758d8/cf/42dd36caa83c4020be93f18ca9a30af3] into tmpdir=hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/default/TestCP/4b73e169ebfeed69423385de141758d8/.tmp, totalSize=15.0 K 2023-07-28 11:02:11,908 DEBUG [RS:2;jenkins-hbase17:46497-shortCompactions-0] compactions.Compactor(207): Compacting 605416f2874d494282b8924c271a3bbe, keycount=2, bloomtype=ROW, size=5.4 K, encoding=NONE, compression=NONE, seqNum=5106, earliestPutTs=1731115138992128 2023-07-28 11:02:11,909 DEBUG [RS:2;jenkins-hbase17:46497-shortCompactions-0] compactions.Compactor(207): Compacting 7b3d99560c11488982bf904a7a95bd8b, keycount=2, bloomtype=ROW, size=4.8 K, encoding=NONE, compression=NONE, seqNum=5405, earliestPutTs=1731115142182914 2023-07-28 11:02:11,910 DEBUG [RS:2;jenkins-hbase17:46497-shortCompactions-0] compactions.Compactor(207): Compacting 42dd36caa83c4020be93f18ca9a30af3, keycount=2, bloomtype=ROW, size=4.8 K, encoding=NONE, compression=NONE, seqNum=6289, earliestPutTs=1731115142295552 2023-07-28 11:02:11,950 INFO [RS:2;jenkins-hbase17:46497-shortCompactions-0] throttle.PressureAwareThroughputController(145): 
4b73e169ebfeed69423385de141758d8#cf#compaction#24 average throughput is unlimited, slept 0 time(s) and total slept time is 0 ms. 0 active operations remaining, total limit is 50.00 MB/second 2023-07-28 11:02:11,989 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:meta' 2023-07-28 11:02:11,997 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:namespace' 2023-07-28 11:02:12,005 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=46497] regionserver.HRegion(9158): Flush requested on 4b73e169ebfeed69423385de141758d8 2023-07-28 11:02:12,005 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 4b73e169ebfeed69423385de141758d8 1/1 column families, dataSize=20.60 KB heapSize=64.34 KB 2023-07-28 11:02:12,008 DEBUG [RS:2;jenkins-hbase17:46497-shortCompactions-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/default/TestCP/4b73e169ebfeed69423385de141758d8/.tmp/cf/054976795ebb4ca5b677aa6b1ee43f59 as hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/default/TestCP/4b73e169ebfeed69423385de141758d8/cf/054976795ebb4ca5b677aa6b1ee43f59 2023-07-28 11:02:12,024 INFO [RS:2;jenkins-hbase17:46497-shortCompactions-0] regionserver.HStore(1652): Completed compaction of 3 (all) file(s) in 4b73e169ebfeed69423385de141758d8/cf of 4b73e169ebfeed69423385de141758d8 into 054976795ebb4ca5b677aa6b1ee43f59(size=5.5 K), total size for store is 5.5 K. This selection was in queue for 0sec, and took 0sec to execute. 
2023-07-28 11:02:12,024 DEBUG [RS:2;jenkins-hbase17:46497-shortCompactions-0] regionserver.HRegion(2289): Compaction status journal for 4b73e169ebfeed69423385de141758d8: 2023-07-28 11:02:12,024 INFO [RS:2;jenkins-hbase17:46497-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(627): Completed compaction region=TestCP,,1690542127229.4b73e169ebfeed69423385de141758d8., storeName=4b73e169ebfeed69423385de141758d8/cf, priority=13, startTime=1690542131901; duration=0sec 2023-07-28 11:02:12,025 DEBUG [RS:2;jenkins-hbase17:46497-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(659): Status compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-07-28 11:02:12,069 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=20.74 KB at sequenceid=6587 (bloomFilter=true), to=hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/default/TestCP/4b73e169ebfeed69423385de141758d8/.tmp/cf/732edf8824424e18b99cba0d06742e48 2023-07-28 11:02:12,080 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/default/TestCP/4b73e169ebfeed69423385de141758d8/.tmp/cf/732edf8824424e18b99cba0d06742e48 as hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/default/TestCP/4b73e169ebfeed69423385de141758d8/cf/732edf8824424e18b99cba0d06742e48 2023-07-28 11:02:12,090 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/default/TestCP/4b73e169ebfeed69423385de141758d8/cf/732edf8824424e18b99cba0d06742e48, entries=2, sequenceid=6587, filesize=4.8 K 2023-07-28 11:02:12,091 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~20.74 KB/21240, heapSize ~64.77 KB/66320, currentSize=13.64 KB/13968 for 
4b73e169ebfeed69423385de141758d8 in 86ms, sequenceid=6587, compaction requested=false 2023-07-28 11:02:12,091 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 4b73e169ebfeed69423385de141758d8: 2023-07-28 11:02:12,138 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46497] regionserver.HRegion(9158): Flush requested on 4b73e169ebfeed69423385de141758d8 2023-07-28 11:02:12,139 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 4b73e169ebfeed69423385de141758d8 1/1 column families, dataSize=20.53 KB heapSize=64.13 KB 2023-07-28 11:02:12,173 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=20.74 KB at sequenceid=6886 (bloomFilter=true), to=hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/default/TestCP/4b73e169ebfeed69423385de141758d8/.tmp/cf/b435738d745d4083ba435e7cb2f47ce1 2023-07-28 11:02:12,184 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/default/TestCP/4b73e169ebfeed69423385de141758d8/.tmp/cf/b435738d745d4083ba435e7cb2f47ce1 as hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/default/TestCP/4b73e169ebfeed69423385de141758d8/cf/b435738d745d4083ba435e7cb2f47ce1 2023-07-28 11:02:12,194 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/default/TestCP/4b73e169ebfeed69423385de141758d8/cf/b435738d745d4083ba435e7cb2f47ce1, entries=2, sequenceid=6886, filesize=4.8 K 2023-07-28 11:02:12,195 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~20.74 KB/21240, heapSize ~64.77 KB/66320, currentSize=11.60 KB/11880 for 4b73e169ebfeed69423385de141758d8 in 56ms, sequenceid=6886, compaction requested=true 2023-07-28 11:02:12,195 DEBUG 
[MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 4b73e169ebfeed69423385de141758d8: 2023-07-28 11:02:12,195 DEBUG [MemStoreFlusher.0] regionserver.CompactSplit(385): Small Compaction requested: system; Because: MemStoreFlusher.0; compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-07-28 11:02:12,195 DEBUG [RS:2;jenkins-hbase17:46497-shortCompactions-0] compactions.SortedCompactionPolicy(75): Selecting compaction from 3 store files, 0 compacting, 3 eligible, 16 blocking 2023-07-28 11:02:12,197 DEBUG [RS:2;jenkins-hbase17:46497-shortCompactions-0] compactions.ExploringCompactionPolicy(116): Exploring compaction algorithm has selected 3 files of size 15474 starting at candidate #0 after considering 1 permutations with 1 in ratio 2023-07-28 11:02:12,197 DEBUG [RS:2;jenkins-hbase17:46497-shortCompactions-0] regionserver.HStore(1912): 4b73e169ebfeed69423385de141758d8/cf is initiating minor compaction (all files) 2023-07-28 11:02:12,197 INFO [RS:2;jenkins-hbase17:46497-shortCompactions-0] regionserver.HRegion(2259): Starting compaction of 4b73e169ebfeed69423385de141758d8/cf in TestCP,,1690542127229.4b73e169ebfeed69423385de141758d8. 
2023-07-28 11:02:12,198 INFO [RS:2;jenkins-hbase17:46497-shortCompactions-0] regionserver.HStore(1468): Starting compaction of [hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/default/TestCP/4b73e169ebfeed69423385de141758d8/cf/054976795ebb4ca5b677aa6b1ee43f59, hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/default/TestCP/4b73e169ebfeed69423385de141758d8/cf/732edf8824424e18b99cba0d06742e48, hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/default/TestCP/4b73e169ebfeed69423385de141758d8/cf/b435738d745d4083ba435e7cb2f47ce1] into tmpdir=hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/default/TestCP/4b73e169ebfeed69423385de141758d8/.tmp, totalSize=15.1 K 2023-07-28 11:02:12,198 DEBUG [RS:2;jenkins-hbase17:46497-shortCompactions-0] compactions.Compactor(207): Compacting 054976795ebb4ca5b677aa6b1ee43f59, keycount=2, bloomtype=ROW, size=5.5 K, encoding=NONE, compression=NONE, seqNum=6289, earliestPutTs=1731115138992128 2023-07-28 11:02:12,199 DEBUG [RS:2;jenkins-hbase17:46497-shortCompactions-0] compactions.Compactor(207): Compacting 732edf8824424e18b99cba0d06742e48, keycount=2, bloomtype=ROW, size=4.8 K, encoding=NONE, compression=NONE, seqNum=6587, earliestPutTs=1731115142988800 2023-07-28 11:02:12,199 DEBUG [RS:2;jenkins-hbase17:46497-shortCompactions-0] compactions.Compactor(207): Compacting b435738d745d4083ba435e7cb2f47ce1, keycount=2, bloomtype=ROW, size=4.8 K, encoding=NONE, compression=NONE, seqNum=6886, earliestPutTs=1731115143173121 2023-07-28 11:02:12,218 INFO [RS:2;jenkins-hbase17:46497-shortCompactions-0] throttle.PressureAwareThroughputController(145): 4b73e169ebfeed69423385de141758d8#cf#compaction#27 average throughput is unlimited, slept 0 time(s) and total slept time is 0 ms. 
0 active operations remaining, total limit is 50.00 MB/second 2023-07-28 11:02:12,234 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46497] regionserver.HRegion(9158): Flush requested on 4b73e169ebfeed69423385de141758d8 2023-07-28 11:02:12,235 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 4b73e169ebfeed69423385de141758d8 1/1 column families, dataSize=20.53 KB heapSize=64.13 KB 2023-07-28 11:02:12,248 DEBUG [RS:2;jenkins-hbase17:46497-shortCompactions-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/default/TestCP/4b73e169ebfeed69423385de141758d8/.tmp/cf/27b3989d39f648ae815a2aca56f89417 as hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/default/TestCP/4b73e169ebfeed69423385de141758d8/cf/27b3989d39f648ae815a2aca56f89417 2023-07-28 11:02:12,257 INFO [RS:2;jenkins-hbase17:46497-shortCompactions-0] regionserver.HStore(1652): Completed compaction of 3 (all) file(s) in 4b73e169ebfeed69423385de141758d8/cf of 4b73e169ebfeed69423385de141758d8 into 27b3989d39f648ae815a2aca56f89417(size=5.6 K), total size for store is 5.6 K. This selection was in queue for 0sec, and took 0sec to execute. 
2023-07-28 11:02:12,257 DEBUG [RS:2;jenkins-hbase17:46497-shortCompactions-0] regionserver.HRegion(2289): Compaction status journal for 4b73e169ebfeed69423385de141758d8: 2023-07-28 11:02:12,258 INFO [RS:2;jenkins-hbase17:46497-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(627): Completed compaction region=TestCP,,1690542127229.4b73e169ebfeed69423385de141758d8., storeName=4b73e169ebfeed69423385de141758d8/cf, priority=13, startTime=1690542132195; duration=0sec 2023-07-28 11:02:12,258 DEBUG [RS:2;jenkins-hbase17:46497-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(659): Status compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-07-28 11:02:12,280 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=20.67 KB at sequenceid=7183 (bloomFilter=true), to=hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/default/TestCP/4b73e169ebfeed69423385de141758d8/.tmp/cf/e22352dc44914446a27217ad925fb94c 2023-07-28 11:02:12,289 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/default/TestCP/4b73e169ebfeed69423385de141758d8/.tmp/cf/e22352dc44914446a27217ad925fb94c as hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/default/TestCP/4b73e169ebfeed69423385de141758d8/cf/e22352dc44914446a27217ad925fb94c 2023-07-28 11:02:12,301 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/default/TestCP/4b73e169ebfeed69423385de141758d8/cf/e22352dc44914446a27217ad925fb94c, entries=2, sequenceid=7183, filesize=4.8 K 2023-07-28 11:02:12,302 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~20.67 KB/21168, heapSize ~64.55 KB/66096, currentSize=15.89 KB/16272 for 
4b73e169ebfeed69423385de141758d8 in 67ms, sequenceid=7183, compaction requested=false 2023-07-28 11:02:12,302 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 4b73e169ebfeed69423385de141758d8: 2023-07-28 11:02:12,318 DEBUG [RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=46497] regionserver.HRegion(9158): Flush requested on 4b73e169ebfeed69423385de141758d8 2023-07-28 11:02:12,318 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 4b73e169ebfeed69423385de141758d8 1/1 column families, dataSize=20.60 KB heapSize=64.34 KB 2023-07-28 11:02:12,390 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=20.81 KB at sequenceid=7483 (bloomFilter=true), to=hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/default/TestCP/4b73e169ebfeed69423385de141758d8/.tmp/cf/3f2bf805c1a5433cb274f215fa9b1a39 2023-07-28 11:02:12,398 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/default/TestCP/4b73e169ebfeed69423385de141758d8/.tmp/cf/3f2bf805c1a5433cb274f215fa9b1a39 as hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/default/TestCP/4b73e169ebfeed69423385de141758d8/cf/3f2bf805c1a5433cb274f215fa9b1a39 2023-07-28 11:02:12,405 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/default/TestCP/4b73e169ebfeed69423385de141758d8/cf/3f2bf805c1a5433cb274f215fa9b1a39, entries=2, sequenceid=7483, filesize=4.8 K 2023-07-28 11:02:12,406 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~20.81 KB/21312, heapSize ~64.98 KB/66544, currentSize=30.38 KB/31104 for 4b73e169ebfeed69423385de141758d8 in 88ms, sequenceid=7483, compaction requested=true 2023-07-28 11:02:12,406 DEBUG 
[MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 4b73e169ebfeed69423385de141758d8: 2023-07-28 11:02:12,406 DEBUG [MemStoreFlusher.0] regionserver.CompactSplit(385): Small Compaction requested: system; Because: MemStoreFlusher.0; compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-07-28 11:02:12,407 DEBUG [RS:2;jenkins-hbase17:46497-shortCompactions-0] compactions.SortedCompactionPolicy(75): Selecting compaction from 3 store files, 0 compacting, 3 eligible, 16 blocking 2023-07-28 11:02:12,407 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=46497] regionserver.HRegion(9158): Flush requested on 4b73e169ebfeed69423385de141758d8 2023-07-28 11:02:12,407 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 4b73e169ebfeed69423385de141758d8 1/1 column families, dataSize=30.66 KB heapSize=95.63 KB 2023-07-28 11:02:12,408 DEBUG [RS:2;jenkins-hbase17:46497-shortCompactions-0] compactions.ExploringCompactionPolicy(116): Exploring compaction algorithm has selected 3 files of size 15576 starting at candidate #0 after considering 1 permutations with 1 in ratio 2023-07-28 11:02:12,409 DEBUG [RS:2;jenkins-hbase17:46497-shortCompactions-0] regionserver.HStore(1912): 4b73e169ebfeed69423385de141758d8/cf is initiating minor compaction (all files) 2023-07-28 11:02:12,409 INFO [RS:2;jenkins-hbase17:46497-shortCompactions-0] regionserver.HRegion(2259): Starting compaction of 4b73e169ebfeed69423385de141758d8/cf in TestCP,,1690542127229.4b73e169ebfeed69423385de141758d8. 
2023-07-28 11:02:12,409 INFO [RS:2;jenkins-hbase17:46497-shortCompactions-0] regionserver.HStore(1468): Starting compaction of [hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/default/TestCP/4b73e169ebfeed69423385de141758d8/cf/27b3989d39f648ae815a2aca56f89417, hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/default/TestCP/4b73e169ebfeed69423385de141758d8/cf/e22352dc44914446a27217ad925fb94c, hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/default/TestCP/4b73e169ebfeed69423385de141758d8/cf/3f2bf805c1a5433cb274f215fa9b1a39] into tmpdir=hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/default/TestCP/4b73e169ebfeed69423385de141758d8/.tmp, totalSize=15.2 K 2023-07-28 11:02:12,409 DEBUG [RS:2;jenkins-hbase17:46497-shortCompactions-0] compactions.Compactor(207): Compacting 27b3989d39f648ae815a2aca56f89417, keycount=2, bloomtype=ROW, size=5.6 K, encoding=NONE, compression=NONE, seqNum=6886, earliestPutTs=1731115138992128 2023-07-28 11:02:12,410 DEBUG [RS:2;jenkins-hbase17:46497-shortCompactions-0] compactions.Compactor(207): Compacting e22352dc44914446a27217ad925fb94c, keycount=2, bloomtype=ROW, size=4.8 K, encoding=NONE, compression=NONE, seqNum=7183, earliestPutTs=1731115143310336 2023-07-28 11:02:12,410 DEBUG [RS:2;jenkins-hbase17:46497-shortCompactions-0] compactions.Compactor(207): Compacting 3f2bf805c1a5433cb274f215fa9b1a39, keycount=2, bloomtype=ROW, size=4.8 K, encoding=NONE, compression=NONE, seqNum=7483, earliestPutTs=1731115143408641 2023-07-28 11:02:12,423 INFO [RS:2;jenkins-hbase17:46497-shortCompactions-0] throttle.PressureAwareThroughputController(145): 4b73e169ebfeed69423385de141758d8#cf#compaction#31 average throughput is unlimited, slept 0 time(s) and total slept time is 0 ms. 
0 active operations remaining, total limit is 50.00 MB/second 2023-07-28 11:02:12,455 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=30.87 KB at sequenceid=7925 (bloomFilter=true), to=hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/default/TestCP/4b73e169ebfeed69423385de141758d8/.tmp/cf/a5a95c956c4c4a9683f6e2b392d16866 2023-07-28 11:02:12,465 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/default/TestCP/4b73e169ebfeed69423385de141758d8/.tmp/cf/a5a95c956c4c4a9683f6e2b392d16866 as hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/default/TestCP/4b73e169ebfeed69423385de141758d8/cf/a5a95c956c4c4a9683f6e2b392d16866 2023-07-28 11:02:12,472 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/default/TestCP/4b73e169ebfeed69423385de141758d8/cf/a5a95c956c4c4a9683f6e2b392d16866, entries=2, sequenceid=7925, filesize=4.8 K 2023-07-28 11:02:12,474 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~30.87 KB/31608, heapSize ~96.27 KB/98576, currentSize=20.32 KB/20808 for 4b73e169ebfeed69423385de141758d8 in 67ms, sequenceid=7925, compaction requested=false 2023-07-28 11:02:12,474 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 4b73e169ebfeed69423385de141758d8: 2023-07-28 11:02:12,475 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46497] regionserver.HRegion(9158): Flush requested on 4b73e169ebfeed69423385de141758d8 2023-07-28 11:02:12,475 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 4b73e169ebfeed69423385de141758d8 1/1 column families, dataSize=20.53 KB heapSize=64.13 KB 2023-07-28 11:02:12,560 INFO [MemStoreFlusher.0] 
regionserver.DefaultStoreFlusher(82): Flushed memstore data size=20.74 KB at sequenceid=8223 (bloomFilter=true), to=hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/default/TestCP/4b73e169ebfeed69423385de141758d8/.tmp/cf/eb86d957f9114c7598d6ceeb7a368321 2023-07-28 11:02:12,568 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/default/TestCP/4b73e169ebfeed69423385de141758d8/.tmp/cf/eb86d957f9114c7598d6ceeb7a368321 as hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/default/TestCP/4b73e169ebfeed69423385de141758d8/cf/eb86d957f9114c7598d6ceeb7a368321 2023-07-28 11:02:12,588 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/default/TestCP/4b73e169ebfeed69423385de141758d8/cf/eb86d957f9114c7598d6ceeb7a368321, entries=2, sequenceid=8223, filesize=4.8 K 2023-07-28 11:02:12,590 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~20.74 KB/21240, heapSize ~64.77 KB/66320, currentSize=26.37 KB/27000 for 4b73e169ebfeed69423385de141758d8 in 115ms, sequenceid=8223, compaction requested=false 2023-07-28 11:02:12,591 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 4b73e169ebfeed69423385de141758d8: 2023-07-28 11:02:12,591 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=46497] regionserver.HRegion(9158): Flush requested on 4b73e169ebfeed69423385de141758d8 2023-07-28 11:02:12,591 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 4b73e169ebfeed69423385de141758d8 1/1 column families, dataSize=26.51 KB heapSize=82.72 KB 2023-07-28 11:02:12,668 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=26.65 KB at sequenceid=8605 (bloomFilter=true), 
to=hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/default/TestCP/4b73e169ebfeed69423385de141758d8/.tmp/cf/98033c0be4184394a53c9d1115c1996a 2023-07-28 11:02:12,685 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/default/TestCP/4b73e169ebfeed69423385de141758d8/.tmp/cf/98033c0be4184394a53c9d1115c1996a as hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/default/TestCP/4b73e169ebfeed69423385de141758d8/cf/98033c0be4184394a53c9d1115c1996a 2023-07-28 11:02:12,696 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/default/TestCP/4b73e169ebfeed69423385de141758d8/cf/98033c0be4184394a53c9d1115c1996a, entries=2, sequenceid=8605, filesize=4.8 K 2023-07-28 11:02:12,697 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~26.65 KB/27288, heapSize ~83.14 KB/85136, currentSize=25.24 KB/25848 for 4b73e169ebfeed69423385de141758d8 in 106ms, sequenceid=8605, compaction requested=true 2023-07-28 11:02:12,697 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 4b73e169ebfeed69423385de141758d8: 2023-07-28 11:02:12,697 DEBUG [MemStoreFlusher.0] regionserver.CompactSplit(385): Small Compaction requested: system; Because: MemStoreFlusher.0; compactionQueue=(longCompactions=0:shortCompactions=1), splitQueue=0 2023-07-28 11:02:12,698 DEBUG [RS:2;jenkins-hbase17:46497-longCompactions-0] compactions.SortedCompactionPolicy(75): Selecting compaction from 6 store files, 3 compacting, 3 eligible, 16 blocking 2023-07-28 11:02:12,699 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=46497] regionserver.HRegion(9158): Flush requested on 4b73e169ebfeed69423385de141758d8 2023-07-28 11:02:12,701 INFO 
[MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 4b73e169ebfeed69423385de141758d8 1/1 column families, dataSize=25.31 KB heapSize=79 KB
2023-07-28 11:02:12,701 DEBUG [RS:2;jenkins-hbase17:46497-longCompactions-0] compactions.ExploringCompactionPolicy(116): Exploring compaction algorithm has selected 3 files of size 14718 starting at candidate #0 after considering 1 permutations with 1 in ratio
2023-07-28 11:02:12,701 DEBUG [RS:2;jenkins-hbase17:46497-longCompactions-0] regionserver.HStore(1912): 4b73e169ebfeed69423385de141758d8/cf is initiating minor compaction
2023-07-28 11:02:12,701 INFO [RS:2;jenkins-hbase17:46497-longCompactions-0] regionserver.HRegion(2259): Starting compaction of 4b73e169ebfeed69423385de141758d8/cf in TestCP,,1690542127229.4b73e169ebfeed69423385de141758d8.
2023-07-28 11:02:12,701 INFO [RS:2;jenkins-hbase17:46497-longCompactions-0] regionserver.HStore(1468): Starting compaction of [hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/default/TestCP/4b73e169ebfeed69423385de141758d8/cf/a5a95c956c4c4a9683f6e2b392d16866, hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/default/TestCP/4b73e169ebfeed69423385de141758d8/cf/eb86d957f9114c7598d6ceeb7a368321, hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/default/TestCP/4b73e169ebfeed69423385de141758d8/cf/98033c0be4184394a53c9d1115c1996a] into tmpdir=hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/default/TestCP/4b73e169ebfeed69423385de141758d8/.tmp, totalSize=14.4 K
2023-07-28 11:02:12,702 DEBUG [RS:2;jenkins-hbase17:46497-longCompactions-0] compactions.Compactor(207): Compacting a5a95c956c4c4a9683f6e2b392d16866, keycount=2, bloomtype=ROW, size=4.8 K, encoding=NONE, compression=NONE, seqNum=7925
2023-07-28 11:02:12,702 DEBUG [RS:2;jenkins-hbase17:46497-longCompactions-0] compactions.Compactor(207): Compacting eb86d957f9114c7598d6ceeb7a368321, keycount=2, bloomtype=ROW, size=4.8 K, encoding=NONE, compression=NONE, seqNum=8223
2023-07-28 11:02:12,702 DEBUG [RS:2;jenkins-hbase17:46497-longCompactions-0] compactions.Compactor(207): Compacting 98033c0be4184394a53c9d1115c1996a, keycount=2, bloomtype=ROW, size=4.8 K, encoding=NONE, compression=NONE, seqNum=8605
2023-07-28 11:02:12,739 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(130): Registering adapter for the MetricRegistry: RegionServer,sub=Coprocessor.Region.CP_org.apache.hadoop.hbase.coprocessor.example.WriteHeavyIncrementObserver
2023-07-28 11:02:12,745 INFO [RS:2;jenkins-hbase17:46497-longCompactions-0] throttle.PressureAwareThroughputController(145): 4b73e169ebfeed69423385de141758d8#cf#compaction#35 average throughput is 0.07 MB/second, slept 0 time(s) and total slept time is 0 ms. 0 active operations remaining, total limit is 50.00 MB/second
2023-07-28 11:02:12,749 INFO [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(134): Registering RegionServer,sub=Coprocessor.Region.CP_org.apache.hadoop.hbase.coprocessor.example.WriteHeavyIncrementObserver Metrics about HBase RegionObservers
2023-07-28 11:02:12,750 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(130): Registering adapter for the MetricRegistry: RegionServer,sub=Coprocessor.Region.CP_org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint
2023-07-28 11:02:12,750 INFO [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(134): Registering RegionServer,sub=Coprocessor.Region.CP_org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint Metrics about HBase RegionObservers
2023-07-28 11:02:12,764 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=25.52 KB at sequenceid=8971 (bloomFilter=true), to=hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/default/TestCP/4b73e169ebfeed69423385de141758d8/.tmp/cf/988399e14e4e4aad83b05b11c0f4ac17
2023-07-28 11:02:12,781 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/default/TestCP/4b73e169ebfeed69423385de141758d8/.tmp/cf/988399e14e4e4aad83b05b11c0f4ac17 as hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/default/TestCP/4b73e169ebfeed69423385de141758d8/cf/988399e14e4e4aad83b05b11c0f4ac17
2023-07-28 11:02:12,787 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/default/TestCP/4b73e169ebfeed69423385de141758d8/cf/988399e14e4e4aad83b05b11c0f4ac17, entries=2, sequenceid=8971, filesize=4.8 K
2023-07-28 11:02:12,789 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~25.52 KB/26136, heapSize ~79.64 KB/81552, currentSize=18.63 KB/19080 for 4b73e169ebfeed69423385de141758d8 in 87ms, sequenceid=8971, compaction requested=false
2023-07-28 11:02:12,789 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 4b73e169ebfeed69423385de141758d8:
2023-07-28 11:02:12,794 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=46497] regionserver.HRegion(9158): Flush requested on 4b73e169ebfeed69423385de141758d8
2023-07-28 11:02:12,795 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 4b73e169ebfeed69423385de141758d8 1/1 column families, dataSize=20.60 KB heapSize=64.34 KB
2023-07-28 11:02:12,836 DEBUG [RS:2;jenkins-hbase17:46497-longCompactions-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/default/TestCP/4b73e169ebfeed69423385de141758d8/.tmp/cf/dfbf8c20954d4796837a2795b1ea0b41 as hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/default/TestCP/4b73e169ebfeed69423385de141758d8/cf/dfbf8c20954d4796837a2795b1ea0b41
2023-07-28 11:02:12,846 INFO [RS:2;jenkins-hbase17:46497-longCompactions-0] regionserver.HStore(1652): Completed compaction of 3 file(s) in 4b73e169ebfeed69423385de141758d8/cf of 4b73e169ebfeed69423385de141758d8 into dfbf8c20954d4796837a2795b1ea0b41(size=4.8 K), total size for store is 24.8 K. This selection was in queue for 0sec, and took 0sec to execute.
2023-07-28 11:02:12,846 DEBUG [RS:2;jenkins-hbase17:46497-longCompactions-0] regionserver.HRegion(2289): Compaction status journal for 4b73e169ebfeed69423385de141758d8:
2023-07-28 11:02:12,846 INFO [RS:2;jenkins-hbase17:46497-longCompactions-0] regionserver.CompactSplit$CompactionRunner(627): Completed compaction region=TestCP,,1690542127229.4b73e169ebfeed69423385de141758d8., storeName=4b73e169ebfeed69423385de141758d8/cf, priority=10, startTime=1690542132697; duration=0sec
2023-07-28 11:02:12,846 DEBUG [RS:2;jenkins-hbase17:46497-longCompactions-0] regionserver.CompactSplit$CompactionRunner(659): Status compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0
2023-07-28 11:02:12,864 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=20.95 KB at sequenceid=9272 (bloomFilter=true), to=hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/default/TestCP/4b73e169ebfeed69423385de141758d8/.tmp/cf/1d127e517971484191d2f362eb25a0a5
2023-07-28 11:02:12,872 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/default/TestCP/4b73e169ebfeed69423385de141758d8/.tmp/cf/1d127e517971484191d2f362eb25a0a5 as hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/default/TestCP/4b73e169ebfeed69423385de141758d8/cf/1d127e517971484191d2f362eb25a0a5
2023-07-28 11:02:12,879 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/default/TestCP/4b73e169ebfeed69423385de141758d8/cf/1d127e517971484191d2f362eb25a0a5, entries=2, sequenceid=9272, filesize=4.8 K
2023-07-28 11:02:12,880 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~20.95 KB/21456, heapSize ~65.42 KB/66992, currentSize=19.83 KB/20304 for 4b73e169ebfeed69423385de141758d8 in 85ms, sequenceid=9272, compaction requested=true
2023-07-28 11:02:12,881 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 4b73e169ebfeed69423385de141758d8:
2023-07-28 11:02:12,881 DEBUG [MemStoreFlusher.0] regionserver.CompactSplit(385): Small Compaction requested: system; Because: MemStoreFlusher.0; compactionQueue=(longCompactions=0:shortCompactions=1), splitQueue=0
2023-07-28 11:02:12,881 DEBUG [RS:2;jenkins-hbase17:46497-longCompactions-0] compactions.SortedCompactionPolicy(75): Selecting compaction from 6 store files, 3 compacting, 3 eligible, 16 blocking
2023-07-28 11:02:12,882 DEBUG [RS:2;jenkins-hbase17:46497-longCompactions-0] compactions.ExploringCompactionPolicy(116): Exploring compaction algorithm has selected 3 files of size 14759 starting at candidate #0 after considering 1 permutations with 1 in ratio
2023-07-28 11:02:12,882 DEBUG [RS:2;jenkins-hbase17:46497-longCompactions-0] regionserver.HStore(1912): 4b73e169ebfeed69423385de141758d8/cf is initiating minor compaction
2023-07-28 11:02:12,882 INFO [RS:2;jenkins-hbase17:46497-longCompactions-0] regionserver.HRegion(2259): Starting compaction of 4b73e169ebfeed69423385de141758d8/cf in TestCP,,1690542127229.4b73e169ebfeed69423385de141758d8.
2023-07-28 11:02:12,882 INFO [RS:2;jenkins-hbase17:46497-longCompactions-0] regionserver.HStore(1468): Starting compaction of [hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/default/TestCP/4b73e169ebfeed69423385de141758d8/cf/dfbf8c20954d4796837a2795b1ea0b41, hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/default/TestCP/4b73e169ebfeed69423385de141758d8/cf/988399e14e4e4aad83b05b11c0f4ac17, hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/default/TestCP/4b73e169ebfeed69423385de141758d8/cf/1d127e517971484191d2f362eb25a0a5] into tmpdir=hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/default/TestCP/4b73e169ebfeed69423385de141758d8/.tmp, totalSize=14.4 K
2023-07-28 11:02:12,883 DEBUG [RS:2;jenkins-hbase17:46497-longCompactions-0] compactions.Compactor(207): Compacting dfbf8c20954d4796837a2795b1ea0b41, keycount=2, bloomtype=ROW, size=4.8 K, encoding=NONE, compression=NONE, seqNum=8605
2023-07-28 11:02:12,883 DEBUG [RS:2;jenkins-hbase17:46497-longCompactions-0] compactions.Compactor(207): Compacting 988399e14e4e4aad83b05b11c0f4ac17, keycount=2, bloomtype=ROW, size=4.8 K, encoding=NONE, compression=NONE, seqNum=8971
2023-07-28 11:02:12,883 DEBUG [RS:2;jenkins-hbase17:46497-longCompactions-0] compactions.Compactor(207): Compacting 1d127e517971484191d2f362eb25a0a5, keycount=2, bloomtype=ROW, size=4.8 K, encoding=NONE, compression=NONE, seqNum=9272
2023-07-28 11:02:12,885 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46497] regionserver.HRegion(9158): Flush requested on 4b73e169ebfeed69423385de141758d8
2023-07-28 11:02:12,886 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 4b73e169ebfeed69423385de141758d8 1/1 column families, dataSize=20.67 KB heapSize=64.56 KB
2023-07-28 11:02:12,901 INFO [RS:2;jenkins-hbase17:46497-longCompactions-0] throttle.PressureAwareThroughputController(145): 4b73e169ebfeed69423385de141758d8#cf#compaction#38 average throughput is 0.07 MB/second, slept 0 time(s) and total slept time is 0 ms. 0 active operations remaining, total limit is 50.00 MB/second
2023-07-28 11:02:12,924 DEBUG [RS:2;jenkins-hbase17:46497-shortCompactions-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/default/TestCP/4b73e169ebfeed69423385de141758d8/.tmp/cf/13c2e8a383bf4bbbaab2bdd8b798d47c as hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/default/TestCP/4b73e169ebfeed69423385de141758d8/cf/13c2e8a383bf4bbbaab2bdd8b798d47c
2023-07-28 11:02:12,932 INFO [RS:2;jenkins-hbase17:46497-shortCompactions-0] regionserver.HStore(1652): Completed compaction of 3 (all) file(s) in 4b73e169ebfeed69423385de141758d8/cf of 4b73e169ebfeed69423385de141758d8 into 13c2e8a383bf4bbbaab2bdd8b798d47c(size=5.7 K), total size for store is 20.1 K. This selection was in queue for 0sec, and took 0sec to execute.
2023-07-28 11:02:12,932 DEBUG [RS:2;jenkins-hbase17:46497-shortCompactions-0] regionserver.HRegion(2289): Compaction status journal for 4b73e169ebfeed69423385de141758d8:
2023-07-28 11:02:12,932 INFO [RS:2;jenkins-hbase17:46497-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(627): Completed compaction region=TestCP,,1690542127229.4b73e169ebfeed69423385de141758d8., storeName=4b73e169ebfeed69423385de141758d8/cf, priority=13, startTime=1690542132406; duration=0sec
2023-07-28 11:02:12,932 DEBUG [RS:2;jenkins-hbase17:46497-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(659): Status compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0
2023-07-28 11:02:12,944 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=20.81 KB at sequenceid=9572 (bloomFilter=true), to=hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/default/TestCP/4b73e169ebfeed69423385de141758d8/.tmp/cf/8e495bbc144d40d08bd67af2c7b95678
2023-07-28 11:02:12,952 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/default/TestCP/4b73e169ebfeed69423385de141758d8/.tmp/cf/8e495bbc144d40d08bd67af2c7b95678 as hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/default/TestCP/4b73e169ebfeed69423385de141758d8/cf/8e495bbc144d40d08bd67af2c7b95678
2023-07-28 11:02:12,958 DEBUG [RS:2;jenkins-hbase17:46497-longCompactions-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/default/TestCP/4b73e169ebfeed69423385de141758d8/.tmp/cf/4a398fcefa814816a389a50457f8a290 as hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/default/TestCP/4b73e169ebfeed69423385de141758d8/cf/4a398fcefa814816a389a50457f8a290
2023-07-28 11:02:12,962 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/default/TestCP/4b73e169ebfeed69423385de141758d8/cf/8e495bbc144d40d08bd67af2c7b95678, entries=2, sequenceid=9572, filesize=4.8 K
2023-07-28 11:02:12,963 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~20.81 KB/21312, heapSize ~64.98 KB/66544, currentSize=19.76 KB/20232 for 4b73e169ebfeed69423385de141758d8 in 77ms, sequenceid=9572, compaction requested=false
2023-07-28 11:02:12,964 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 4b73e169ebfeed69423385de141758d8:
2023-07-28 11:02:12,967 INFO [RS:2;jenkins-hbase17:46497-longCompactions-0] regionserver.HStore(1652): Completed compaction of 3 file(s) in 4b73e169ebfeed69423385de141758d8/cf of 4b73e169ebfeed69423385de141758d8 into 4a398fcefa814816a389a50457f8a290(size=4.9 K), total size for store is 15.5 K. This selection was in queue for 0sec, and took 0sec to execute.
2023-07-28 11:02:12,967 DEBUG [RS:2;jenkins-hbase17:46497-longCompactions-0] regionserver.HRegion(2289): Compaction status journal for 4b73e169ebfeed69423385de141758d8:
2023-07-28 11:02:12,967 INFO [RS:2;jenkins-hbase17:46497-longCompactions-0] regionserver.CompactSplit$CompactionRunner(627): Completed compaction region=TestCP,,1690542127229.4b73e169ebfeed69423385de141758d8., storeName=4b73e169ebfeed69423385de141758d8/cf, priority=10, startTime=1690542132881; duration=0sec
2023-07-28 11:02:12,968 DEBUG [RS:2;jenkins-hbase17:46497-longCompactions-0] regionserver.CompactSplit$CompactionRunner(659): Status compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0
2023-07-28 11:02:12,968 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46497] regionserver.HRegion(9158): Flush requested on 4b73e169ebfeed69423385de141758d8
2023-07-28 11:02:12,968 INFO [MemStoreFlusher.0] regionserver.HRegion(2745): Flushing 4b73e169ebfeed69423385de141758d8 1/1 column families, dataSize=20.60 KB heapSize=64.34 KB
2023-07-28 11:02:13,004 INFO [MemStoreFlusher.0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=20.81 KB at sequenceid=9873 (bloomFilter=true), to=hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/default/TestCP/4b73e169ebfeed69423385de141758d8/.tmp/cf/1a029448edaa4d5aa968ae4f1e2bdbf6
2023-07-28 11:02:13,014 DEBUG [MemStoreFlusher.0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/default/TestCP/4b73e169ebfeed69423385de141758d8/.tmp/cf/1a029448edaa4d5aa968ae4f1e2bdbf6 as hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/default/TestCP/4b73e169ebfeed69423385de141758d8/cf/1a029448edaa4d5aa968ae4f1e2bdbf6
2023-07-28 11:02:13,022 INFO [MemStoreFlusher.0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/default/TestCP/4b73e169ebfeed69423385de141758d8/cf/1a029448edaa4d5aa968ae4f1e2bdbf6, entries=2, sequenceid=9873, filesize=4.8 K
2023-07-28 11:02:13,026 INFO [MemStoreFlusher.0] regionserver.HRegion(2948): Finished flush of dataSize ~20.81 KB/21312, heapSize ~64.98 KB/66544, currentSize=14.63 KB/14976 for 4b73e169ebfeed69423385de141758d8 in 57ms, sequenceid=9873, compaction requested=true
2023-07-28 11:02:13,026 DEBUG [MemStoreFlusher.0] regionserver.HRegion(2446): Flush status journal for 4b73e169ebfeed69423385de141758d8:
2023-07-28 11:02:13,026 DEBUG [RS:2;jenkins-hbase17:46497-shortCompactions-0] compactions.SortedCompactionPolicy(75): Selecting compaction from 4 store files, 0 compacting, 4 eligible, 16 blocking
2023-07-28 11:02:13,027 DEBUG [MemStoreFlusher.0] regionserver.CompactSplit(385): Small Compaction requested: system; Because: MemStoreFlusher.0; compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0
2023-07-28 11:02:13,028 DEBUG [RS:2;jenkins-hbase17:46497-shortCompactions-0] compactions.ExploringCompactionPolicy(116): Exploring compaction algorithm has selected 4 files of size 20728 starting at candidate #0 after considering 3 permutations with 3 in ratio
2023-07-28 11:02:13,028 DEBUG [RS:2;jenkins-hbase17:46497-shortCompactions-0] regionserver.HStore(1912): 4b73e169ebfeed69423385de141758d8/cf is initiating minor compaction (all files)
2023-07-28 11:02:13,028 INFO [RS:2;jenkins-hbase17:46497-shortCompactions-0] regionserver.HRegion(2259): Starting compaction of 4b73e169ebfeed69423385de141758d8/cf in TestCP,,1690542127229.4b73e169ebfeed69423385de141758d8.
2023-07-28 11:02:13,029 INFO [RS:2;jenkins-hbase17:46497-shortCompactions-0] regionserver.HStore(1468): Starting compaction of [hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/default/TestCP/4b73e169ebfeed69423385de141758d8/cf/13c2e8a383bf4bbbaab2bdd8b798d47c, hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/default/TestCP/4b73e169ebfeed69423385de141758d8/cf/4a398fcefa814816a389a50457f8a290, hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/default/TestCP/4b73e169ebfeed69423385de141758d8/cf/8e495bbc144d40d08bd67af2c7b95678, hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/default/TestCP/4b73e169ebfeed69423385de141758d8/cf/1a029448edaa4d5aa968ae4f1e2bdbf6] into tmpdir=hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/default/TestCP/4b73e169ebfeed69423385de141758d8/.tmp, totalSize=20.2 K
2023-07-28 11:02:13,029 DEBUG [RS:2;jenkins-hbase17:46497-shortCompactions-0] compactions.Compactor(207): Compacting 13c2e8a383bf4bbbaab2bdd8b798d47c, keycount=2, bloomtype=ROW, size=5.7 K, encoding=NONE, compression=NONE, seqNum=7483, earliestPutTs=1731115138992128
2023-07-28 11:02:13,030 DEBUG [RS:2;jenkins-hbase17:46497-shortCompactions-0] compactions.Compactor(207): Compacting 4a398fcefa814816a389a50457f8a290, keycount=2, bloomtype=ROW, size=4.9 K, encoding=NONE, compression=NONE, seqNum=9272, earliestPutTs=1731115143493634
2023-07-28 11:02:13,030 DEBUG [RS:2;jenkins-hbase17:46497-shortCompactions-0] compactions.Compactor(207): Compacting 8e495bbc144d40d08bd67af2c7b95678, keycount=2, bloomtype=ROW, size=4.8 K, encoding=NONE, compression=NONE, seqNum=9572, earliestPutTs=1731115143982082
2023-07-28 11:02:13,031 DEBUG [RS:2;jenkins-hbase17:46497-shortCompactions-0] compactions.Compactor(207): Compacting 1a029448edaa4d5aa968ae4f1e2bdbf6, keycount=2, bloomtype=ROW, size=4.8 K, encoding=NONE, compression=NONE, seqNum=9873, earliestPutTs=1731115144075264
2023-07-28 11:02:13,055 INFO [RS:2;jenkins-hbase17:46497-shortCompactions-0] throttle.PressureAwareThroughputController(145): 4b73e169ebfeed69423385de141758d8#cf#compaction#40 average throughput is unlimited, slept 0 time(s) and total slept time is 0 ms. 0 active operations remaining, total limit is 50.00 MB/second
2023-07-28 11:02:13,071 INFO [Listener at localhost.localdomain/34871] regionserver.HRegion(2745): Flushing 4b73e169ebfeed69423385de141758d8 1/1 column families, dataSize=15.75 KB heapSize=49.25 KB
2023-07-28 11:02:13,083 DEBUG [RS:2;jenkins-hbase17:46497-shortCompactions-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/default/TestCP/4b73e169ebfeed69423385de141758d8/.tmp/cf/44d767a51d6b4c36920d8a32ac5a5f34 as hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/default/TestCP/4b73e169ebfeed69423385de141758d8/cf/44d767a51d6b4c36920d8a32ac5a5f34
2023-07-28 11:02:13,090 INFO [RS:2;jenkins-hbase17:46497-shortCompactions-0] regionserver.HStore(1652): Completed compaction of 4 (all) file(s) in 4b73e169ebfeed69423385de141758d8/cf of 4b73e169ebfeed69423385de141758d8 into 44d767a51d6b4c36920d8a32ac5a5f34(size=6.1 K), total size for store is 6.1 K. This selection was in queue for 0sec, and took 0sec to execute.
2023-07-28 11:02:13,090 DEBUG [RS:2;jenkins-hbase17:46497-shortCompactions-0] regionserver.HRegion(2289): Compaction status journal for 4b73e169ebfeed69423385de141758d8:
2023-07-28 11:02:13,090 INFO [RS:2;jenkins-hbase17:46497-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(627): Completed compaction region=TestCP,,1690542127229.4b73e169ebfeed69423385de141758d8., storeName=4b73e169ebfeed69423385de141758d8/cf, priority=12, startTime=1690542133026; duration=0sec
2023-07-28 11:02:13,090 DEBUG [RS:2;jenkins-hbase17:46497-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(659): Status compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0
2023-07-28 11:02:13,103 INFO [Listener at localhost.localdomain/34871] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=15.75 KB at sequenceid=10100 (bloomFilter=true), to=hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/default/TestCP/4b73e169ebfeed69423385de141758d8/.tmp/cf/1e316082e8584192a866b180a8f9fd92
2023-07-28 11:02:13,110 DEBUG [Listener at localhost.localdomain/34871] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/default/TestCP/4b73e169ebfeed69423385de141758d8/.tmp/cf/1e316082e8584192a866b180a8f9fd92 as hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/default/TestCP/4b73e169ebfeed69423385de141758d8/cf/1e316082e8584192a866b180a8f9fd92
2023-07-28 11:02:13,116 INFO [Listener at localhost.localdomain/34871] regionserver.HStore(1080): Added hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/default/TestCP/4b73e169ebfeed69423385de141758d8/cf/1e316082e8584192a866b180a8f9fd92, entries=2, sequenceid=10100, filesize=4.8 K
2023-07-28 11:02:13,118 INFO [Listener at localhost.localdomain/34871] regionserver.HRegion(2948): Finished flush of dataSize ~15.75 KB/16128, heapSize ~49.23 KB/50416, currentSize=0 B/0 for 4b73e169ebfeed69423385de141758d8 in 46ms, sequenceid=10100, compaction requested=false
2023-07-28 11:02:13,118 DEBUG [Listener at localhost.localdomain/34871] regionserver.HRegion(2446): Flush status journal for 4b73e169ebfeed69423385de141758d8:
2023-07-28 11:02:13,118 DEBUG [Listener at localhost.localdomain/34871] compactions.SortedCompactionPolicy(75): Selecting compaction from 2 store files, 0 compacting, 2 eligible, 16 blocking
2023-07-28 11:02:13,118 DEBUG [Listener at localhost.localdomain/34871] regionserver.HStore(1912): 4b73e169ebfeed69423385de141758d8/cf is initiating major compaction (all files)
2023-07-28 11:02:13,118 INFO [Listener at localhost.localdomain/34871] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms
2023-07-28 11:02:13,118 INFO [Listener at localhost.localdomain/34871] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled.
2023-07-28 11:02:13,119 INFO [Listener at localhost.localdomain/34871] regionserver.HRegion(2259): Starting compaction of 4b73e169ebfeed69423385de141758d8/cf in TestCP,,1690542127229.4b73e169ebfeed69423385de141758d8.
2023-07-28 11:02:13,119 INFO [Listener at localhost.localdomain/34871] regionserver.HStore(1468): Starting compaction of [hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/default/TestCP/4b73e169ebfeed69423385de141758d8/cf/44d767a51d6b4c36920d8a32ac5a5f34, hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/default/TestCP/4b73e169ebfeed69423385de141758d8/cf/1e316082e8584192a866b180a8f9fd92] into tmpdir=hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/default/TestCP/4b73e169ebfeed69423385de141758d8/.tmp, totalSize=10.9 K
2023-07-28 11:02:13,120 DEBUG [Listener at localhost.localdomain/34871] compactions.Compactor(207): Compacting 44d767a51d6b4c36920d8a32ac5a5f34, keycount=2, bloomtype=ROW, size=6.1 K, encoding=NONE, compression=NONE, seqNum=9873, earliestPutTs=1731115138992128
2023-07-28 11:02:13,120 DEBUG [Listener at localhost.localdomain/34871] compactions.Compactor(207): Compacting 1e316082e8584192a866b180a8f9fd92, keycount=2, bloomtype=ROW, size=4.8 K, encoding=NONE, compression=NONE, seqNum=10100, earliestPutTs=1731115144159235
2023-07-28 11:02:13,128 INFO [Listener at localhost.localdomain/34871] throttle.PressureAwareThroughputController(145): 4b73e169ebfeed69423385de141758d8#cf#compaction#42 average throughput is 0.07 MB/second, slept 0 time(s) and total slept time is 0 ms. 0 active operations remaining, total limit is 50.00 MB/second
2023-07-28 11:02:13,142 DEBUG [Listener at localhost.localdomain/34871] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/default/TestCP/4b73e169ebfeed69423385de141758d8/.tmp/cf/1a37eb9fa99f4c2d96ea8d098fba9f4e as hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/default/TestCP/4b73e169ebfeed69423385de141758d8/cf/1a37eb9fa99f4c2d96ea8d098fba9f4e
2023-07-28 11:02:13,150 INFO [Listener at localhost.localdomain/34871] regionserver.HStore(1652): Completed major compaction of 2 (all) file(s) in 4b73e169ebfeed69423385de141758d8/cf of 4b73e169ebfeed69423385de141758d8 into 1a37eb9fa99f4c2d96ea8d098fba9f4e(size=6.1 K), total size for store is 6.1 K. This selection was in queue for 0sec, and took 0sec to execute.
2023-07-28 11:02:13,150 DEBUG [Listener at localhost.localdomain/34871] regionserver.HRegion(2289): Compaction status journal for 4b73e169ebfeed69423385de141758d8:
2023-07-28 11:02:13,185 INFO [Listener at localhost.localdomain/34871] hbase.ResourceChecker(175): after: coprocessor.example.TestWriteHeavyIncrementObserver#test Thread=458 (was 414)
Potentially hanging thread: Timer for 'HBase' metrics system
 java.lang.Object.wait(Native Method)
 java.util.TimerThread.mainLoop(Timer.java:552)
 java.util.TimerThread.run(Timer.java:505)
Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-860901921_17 at /127.0.0.1:51840 [Waiting for operation #8]
 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
 sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
 sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
 sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
 sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
 org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335)
 org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
 org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
 org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
 java.io.BufferedInputStream.fill(BufferedInputStream.java:246)
 java.io.BufferedInputStream.read(BufferedInputStream.java:265)
 java.io.DataInputStream.readShort(DataInputStream.java:312)
 org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67)
 org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269)
 java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-860901921_17 at /127.0.0.1:51700 [Waiting for operation #9]
 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
 sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
 sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
 sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
 sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
 org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335)
 org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
 org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
 org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
 java.io.BufferedInputStream.fill(BufferedInputStream.java:246)
 java.io.BufferedInputStream.read(BufferedInputStream.java:265)
 java.io.DataInputStream.readShort(DataInputStream.java:312)
 org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67)
 org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269)
 java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-393750459_17 at /127.0.0.1:51852 [Waiting for operation #4]
 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
 sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
 sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
 sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
 sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
 org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335)
 org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
 org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
 org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
 java.io.BufferedInputStream.fill(BufferedInputStream.java:246)
 java.io.BufferedInputStream.read(BufferedInputStream.java:265)
 java.io.DataInputStream.readShort(DataInputStream.java:312)
 org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67)
 org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269)
 java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-860901921_17 at /127.0.0.1:36142 [Waiting for operation #3]
 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
 sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
 sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
 sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
 sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
 org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335)
 org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
 org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
 org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
 java.io.BufferedInputStream.fill(BufferedInputStream.java:246)
 java.io.BufferedInputStream.read(BufferedInputStream.java:265)
 java.io.DataInputStream.readShort(DataInputStream.java:312)
 org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67)
 org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269)
 java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-860901921_17 at /127.0.0.1:36218 [Waiting for operation #3]
 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
 sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
 sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
 sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
 sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
 org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335)
 org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
 org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
 org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
 java.io.BufferedInputStream.fill(BufferedInputStream.java:246)
 java.io.BufferedInputStream.read(BufferedInputStream.java:265)
 java.io.DataInputStream.readShort(DataInputStream.java:312)
 org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67)
 org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269)
 java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: hconnection-0x7b95d891-shared-pool-2
 sun.misc.Unsafe.park(Native Method)
 java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
 java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
 java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
 java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073)
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
 java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-860901921_17 at /127.0.0.1:36036 [Waiting for operation #4]
 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
 sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
 sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
 sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
 sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
 org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335)
 org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
 org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
 org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
 java.io.BufferedInputStream.fill(BufferedInputStream.java:246)
 java.io.BufferedInputStream.read(BufferedInputStream.java:265)
 java.io.DataInputStream.readShort(DataInputStream.java:312)
 org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67)
 org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269)
 java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-860901921_17 at /127.0.0.1:60434 [Waiting for operation #4]
 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
 sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
 sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
 sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
 sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
 org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335)
 org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
 org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
 org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
 java.io.BufferedInputStream.fill(BufferedInputStream.java:246)
 java.io.BufferedInputStream.read(BufferedInputStream.java:265)
 java.io.DataInputStream.readShort(DataInputStream.java:312)
 org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67)
 org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269)
 java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-860901921_17 at /127.0.0.1:51824 [Waiting for operation #5]
 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
 sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
 sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
 sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
 sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
 org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335)
 org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
 org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
 org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
 java.io.BufferedInputStream.fill(BufferedInputStream.java:246)
 java.io.BufferedInputStream.read(BufferedInputStream.java:265)
 java.io.DataInputStream.readShort(DataInputStream.java:312)
 org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67)
 org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269)
 java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-860901921_17 at /127.0.0.1:51650 [Waiting for operation #3]
 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
 sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
 sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
 sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
 sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
 org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335)
 org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
 org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
 org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
 java.io.BufferedInputStream.fill(BufferedInputStream.java:246)
 java.io.BufferedInputStream.read(BufferedInputStream.java:265)
 java.io.DataInputStream.readShort(DataInputStream.java:312)
 org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67)
 org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269)
 java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-860901921_17 at /127.0.0.1:60708 [Waiting for operation #4]
 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
 sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
 sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
 sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
 sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
 org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335)
 org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
 org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
 org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
 java.io.BufferedInputStream.fill(BufferedInputStream.java:246)
 java.io.BufferedInputStream.read(BufferedInputStream.java:265)
 java.io.DataInputStream.readShort(DataInputStream.java:312)
 org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67)
 org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269)
 java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-860901921_17 at /127.0.0.1:51552 [Waiting for operation #5]
 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
 sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
 sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
 sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
 sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-860901921_17 at /127.0.0.1:35918 [Waiting for operation #8] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-860901921_17 at /127.0.0.1:60782 [Waiting for operation #10] 
sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-860901921_17 at /127.0.0.1:51704 [Waiting for operation #13] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) 
org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-860901921_17 at /127.0.0.1:36162 [Waiting for operation #3] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-860901921_17 at /127.0.0.1:60494 [Waiting for operation #4] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) 
org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-860901921_17 at /127.0.0.1:60722 [Waiting for operation #5] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-860901921_17 at /127.0.0.1:60790 [Waiting for operation #4] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) 
sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-860901921_17 at /127.0.0.1:60630 [Waiting for operation #3] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-860901921_17 at /127.0.0.1:60550 [Waiting for 
operation #6] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-860901921_17 at /127.0.0.1:51472 [Waiting for operation #7] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) 
org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-860901921_17 at /127.0.0.1:35964 [Waiting for operation #6] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x7b95d891-shared-pool-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 
DataXceiver for client DFSClient_NONMAPREDUCE_-860901921_17 at /127.0.0.1:35902 [Waiting for operation #7] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-393750459_17 at /127.0.0.1:51854 [Waiting for operation #4] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) 
java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-860901921_17 at /127.0.0.1:60646 [Waiting for operation #4] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-860901921_17 at /127.0.0.1:60886 [Waiting for operation #5] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) 
org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-860901921_17 at /127.0.0.1:51604 [Waiting for operation #2] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-860901921_17 at /127.0.0.1:60772 [Waiting for operation #2] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) 
sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-860901921_17 at /127.0.0.1:35954 [Waiting for operation #10] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client 
DFSClient_NONMAPREDUCE_-860901921_17 at /127.0.0.1:36164 [Waiting for operation #10] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-860901921_17 at /127.0.0.1:60482 [Waiting for operation #3] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) 
java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-860901921_17 at /127.0.0.1:60720 [Waiting for operation #3] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-860901921_17 at /127.0.0.1:51514 [Waiting for operation #8] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) 
org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-860901921_17 at /127.0.0.1:60878 [Waiting for operation #6] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-860901921_17 at /127.0.0.1:60576 [Waiting for operation #5] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) 
sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-860901921_17 at /127.0.0.1:60672 [Waiting for operation #9] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 
hconnection-0x7b95d891-metaLookup-shared--pool-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-860901921_17 at /127.0.0.1:36076 [Waiting for operation #8] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-860901921_17 at /127.0.0.1:60900 [Waiting for operation #11] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) 
sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-860901921_17 at /127.0.0.1:51618 [Waiting for operation #12] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) 
org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-3 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-860901921_17 at /127.0.0.1:60842 [Waiting for operation #4] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) 
java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-860901921_17 at /127.0.0.1:35832 [Waiting for operation #4] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-860901921_17 at /127.0.0.1:35824 [Waiting for operation #8] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) 
org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:2;jenkins-hbase17:46497-shortCompactions-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.PriorityBlockingQueue.take(PriorityBlockingQueue.java:549) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=930 (was 725) - OpenFileDescriptor LEAK? -, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=321 (was 262) - SystemLoadAverage LEAK? 
-, ProcessCount=166 (was 166), AvailableMemoryMB=5825 (was 6210) 2023-07-28 11:02:13,188 INFO [Listener at localhost.localdomain/34871] hbase.HBaseTestingUtility(1286): Shutting down minicluster 2023-07-28 11:02:13,188 INFO [Listener at localhost.localdomain/34871] client.ConnectionImplementation(1979): Closing master protocol: MasterService 2023-07-28 11:02:13,188 DEBUG [Listener at localhost.localdomain/34871] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x49aa5cb4 to 127.0.0.1:57744 2023-07-28 11:02:13,188 DEBUG [Listener at localhost.localdomain/34871] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-28 11:02:13,190 DEBUG [Listener at localhost.localdomain/34871] util.JVMClusterUtil(237): Shutting down HBase Cluster 2023-07-28 11:02:13,190 DEBUG [Listener at localhost.localdomain/34871] util.JVMClusterUtil(257): Found active master hash=772301525, stopped=false 2023-07-28 11:02:13,190 INFO [Listener at localhost.localdomain/34871] master.ServerManager(901): Cluster shutdown requested of master=jenkins-hbase17.apache.org,42003,1690542122606 2023-07-28 11:02:13,194 DEBUG [Listener at localhost.localdomain/34871-EventThread] zookeeper.ZKWatcher(600): regionserver:38067-0x101ab971e4e0001, quorum=127.0.0.1:57744, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-28 11:02:13,195 DEBUG [Listener at localhost.localdomain/34871-EventThread] zookeeper.ZKWatcher(600): regionserver:34167-0x101ab971e4e0002, quorum=127.0.0.1:57744, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-28 11:02:13,194 DEBUG [Listener at localhost.localdomain/34871-EventThread] zookeeper.ZKWatcher(600): regionserver:46497-0x101ab971e4e0003, quorum=127.0.0.1:57744, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-28 11:02:13,194 INFO [Listener at localhost.localdomain/34871] 
procedure2.ProcedureExecutor(629): Stopping 2023-07-28 11:02:13,194 DEBUG [Listener at localhost.localdomain/34871-EventThread] zookeeper.ZKWatcher(600): master:42003-0x101ab971e4e0000, quorum=127.0.0.1:57744, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-28 11:02:13,196 DEBUG [Listener at localhost.localdomain/34871-EventThread] zookeeper.ZKWatcher(600): master:42003-0x101ab971e4e0000, quorum=127.0.0.1:57744, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-28 11:02:13,196 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:38067-0x101ab971e4e0001, quorum=127.0.0.1:57744, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-28 11:02:13,196 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:46497-0x101ab971e4e0003, quorum=127.0.0.1:57744, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-28 11:02:13,197 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:34167-0x101ab971e4e0002, quorum=127.0.0.1:57744, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-28 11:02:13,197 DEBUG [Listener at localhost.localdomain/34871] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x1e3870f6 to 127.0.0.1:57744 2023-07-28 11:02:13,197 DEBUG [Listener at localhost.localdomain/34871] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-28 11:02:13,197 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:42003-0x101ab971e4e0000, quorum=127.0.0.1:57744, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-28 11:02:13,198 INFO [Listener at localhost.localdomain/34871] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase17.apache.org,38067,1690542124169' ***** 2023-07-28 11:02:13,198 INFO [Listener at localhost.localdomain/34871] 
regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-28 11:02:13,198 INFO [Listener at localhost.localdomain/34871] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase17.apache.org,34167,1690542124242' ***** 2023-07-28 11:02:13,198 INFO [RS:0;jenkins-hbase17:38067] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-28 11:02:13,198 INFO [Listener at localhost.localdomain/34871] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-28 11:02:13,199 INFO [Listener at localhost.localdomain/34871] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase17.apache.org,46497,1690542124299' ***** 2023-07-28 11:02:13,199 INFO [Listener at localhost.localdomain/34871] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-28 11:02:13,199 INFO [RS:2;jenkins-hbase17:46497] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-28 11:02:13,199 INFO [RS:1;jenkins-hbase17:34167] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-28 11:02:13,227 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-28 11:02:13,227 INFO [RS:2;jenkins-hbase17:46497] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@174d5a5f{regionserver,/,null,STOPPED}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver} 2023-07-28 11:02:13,227 INFO [RS:0;jenkins-hbase17:38067] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@7f9afa35{regionserver,/,null,STOPPED}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver} 2023-07-28 11:02:13,227 INFO [RS:1;jenkins-hbase17:34167] handler.ContextHandler(1159): Stopped 
o.a.h.t.o.e.j.w.WebAppContext@ba70917{regionserver,/,null,STOPPED}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver} 2023-07-28 11:02:13,233 INFO [RS:1;jenkins-hbase17:34167] server.AbstractConnector(383): Stopped ServerConnector@71d03e0c{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-28 11:02:13,233 INFO [RS:0;jenkins-hbase17:38067] server.AbstractConnector(383): Stopped ServerConnector@39815010{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-28 11:02:13,234 INFO [RS:1;jenkins-hbase17:34167] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-28 11:02:13,233 INFO [RS:2;jenkins-hbase17:46497] server.AbstractConnector(383): Stopped ServerConnector@3aa32b08{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-28 11:02:13,234 INFO [RS:0;jenkins-hbase17:38067] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-28 11:02:13,235 INFO [RS:2;jenkins-hbase17:46497] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-28 11:02:13,235 INFO [RS:1;jenkins-hbase17:34167] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@258d96c9{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,STOPPED} 2023-07-28 11:02:13,236 INFO [RS:0;jenkins-hbase17:38067] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@76bb0a18{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,STOPPED} 2023-07-28 11:02:13,237 INFO [RS:2;jenkins-hbase17:46497] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@29c2a492{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,STOPPED} 2023-07-28 11:02:13,237 INFO [RS:1;jenkins-hbase17:34167] handler.ContextHandler(1159): Stopped 
o.a.h.t.o.e.j.s.ServletContextHandler@d4b1c99{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-examples/target/test-data/3a0174d0-b9c7-07c8-9f76-818fb1a3ca1a/hadoop.log.dir/,STOPPED} 2023-07-28 11:02:13,241 INFO [RS:2;jenkins-hbase17:46497] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@56ae0893{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-examples/target/test-data/3a0174d0-b9c7-07c8-9f76-818fb1a3ca1a/hadoop.log.dir/,STOPPED} 2023-07-28 11:02:13,241 INFO [RS:1;jenkins-hbase17:34167] regionserver.HeapMemoryManager(220): Stopping 2023-07-28 11:02:13,241 INFO [RS:0;jenkins-hbase17:38067] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@62f48f23{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-examples/target/test-data/3a0174d0-b9c7-07c8-9f76-818fb1a3ca1a/hadoop.log.dir/,STOPPED} 2023-07-28 11:02:13,241 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-28 11:02:13,241 INFO [RS:1;jenkins-hbase17:34167] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-28 11:02:13,241 INFO [RS:1;jenkins-hbase17:34167] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-28 11:02:13,242 INFO [RS:2;jenkins-hbase17:46497] regionserver.HeapMemoryManager(220): Stopping 2023-07-28 11:02:13,242 INFO [RS:0;jenkins-hbase17:38067] regionserver.HeapMemoryManager(220): Stopping 2023-07-28 11:02:13,242 INFO [RS:1;jenkins-hbase17:34167] regionserver.HRegionServer(1144): stopping server jenkins-hbase17.apache.org,34167,1690542124242 2023-07-28 11:02:13,242 INFO [RS:2;jenkins-hbase17:46497] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 
2023-07-28 11:02:13,242 DEBUG [RS:1;jenkins-hbase17:34167] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x387e7ad2 to 127.0.0.1:57744 2023-07-28 11:02:13,242 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-28 11:02:13,242 INFO [RS:0;jenkins-hbase17:38067] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-28 11:02:13,242 DEBUG [RS:1;jenkins-hbase17:34167] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-28 11:02:13,242 INFO [RS:2;jenkins-hbase17:46497] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-28 11:02:13,242 INFO [RS:1;jenkins-hbase17:34167] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-28 11:02:13,242 INFO [RS:0;jenkins-hbase17:38067] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-28 11:02:13,243 INFO [RS:2;jenkins-hbase17:46497] regionserver.HRegionServer(3305): Received CLOSE for 4b73e169ebfeed69423385de141758d8 2023-07-28 11:02:13,243 INFO [RS:1;jenkins-hbase17:34167] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-28 11:02:13,243 INFO [RS:0;jenkins-hbase17:38067] regionserver.HRegionServer(3305): Received CLOSE for ec21f6123aff02d562a4d2f2eafb4289 2023-07-28 11:02:13,243 INFO [RS:1;jenkins-hbase17:34167] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 
2023-07-28 11:02:13,243 INFO [RS:1;jenkins-hbase17:34167] regionserver.HRegionServer(3305): Received CLOSE for 1588230740 2023-07-28 11:02:13,243 INFO [RS:0;jenkins-hbase17:38067] regionserver.HRegionServer(1144): stopping server jenkins-hbase17.apache.org,38067,1690542124169 2023-07-28 11:02:13,243 DEBUG [RS:0;jenkins-hbase17:38067] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x24f517be to 127.0.0.1:57744 2023-07-28 11:02:13,243 DEBUG [RS:0;jenkins-hbase17:38067] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-28 11:02:13,243 INFO [RS:0;jenkins-hbase17:38067] regionserver.HRegionServer(1474): Waiting on 1 regions to close 2023-07-28 11:02:13,243 DEBUG [RS:0;jenkins-hbase17:38067] regionserver.HRegionServer(1478): Online Regions={ec21f6123aff02d562a4d2f2eafb4289=hbase:namespace,,1690542126432.ec21f6123aff02d562a4d2f2eafb4289.} 2023-07-28 11:02:13,244 DEBUG [RS:0;jenkins-hbase17:38067] regionserver.HRegionServer(1504): Waiting on ec21f6123aff02d562a4d2f2eafb4289 2023-07-28 11:02:13,244 INFO [RS:1;jenkins-hbase17:34167] regionserver.HRegionServer(1474): Waiting on 1 regions to close 2023-07-28 11:02:13,244 DEBUG [RS:1;jenkins-hbase17:34167] regionserver.HRegionServer(1478): Online Regions={1588230740=hbase:meta,,1.1588230740} 2023-07-28 11:02:13,244 DEBUG [RS:1;jenkins-hbase17:34167] regionserver.HRegionServer(1504): Waiting on 1588230740 2023-07-28 11:02:13,248 INFO [RS:2;jenkins-hbase17:46497] regionserver.HRegionServer(1144): stopping server jenkins-hbase17.apache.org,46497,1690542124299 2023-07-28 11:02:13,249 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1604): Closing ec21f6123aff02d562a4d2f2eafb4289, disabling compactions & flushes 2023-07-28 11:02:13,249 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-28 11:02:13,249 INFO [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1626): Closing 
region hbase:meta,,1.1588230740 2023-07-28 11:02:13,249 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-28 11:02:13,249 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-28 11:02:13,249 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-28 11:02:13,249 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1690542126432.ec21f6123aff02d562a4d2f2eafb4289. 2023-07-28 11:02:13,249 INFO [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=2.42 KB heapSize=4.93 KB 2023-07-28 11:02:13,249 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1604): Closing 4b73e169ebfeed69423385de141758d8, disabling compactions & flushes 2023-07-28 11:02:13,249 DEBUG [RS:2;jenkins-hbase17:46497] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x5e1d50db to 127.0.0.1:57744 2023-07-28 11:02:13,250 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1626): Closing region TestCP,,1690542127229.4b73e169ebfeed69423385de141758d8. 2023-07-28 11:02:13,249 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1690542126432.ec21f6123aff02d562a4d2f2eafb4289. 2023-07-28 11:02:13,250 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on TestCP,,1690542127229.4b73e169ebfeed69423385de141758d8. 
2023-07-28 11:02:13,250 DEBUG [RS:2;jenkins-hbase17:46497] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-28 11:02:13,250 INFO [RS:2;jenkins-hbase17:46497] regionserver.HRegionServer(1474): Waiting on 1 regions to close 2023-07-28 11:02:13,250 DEBUG [RS:2;jenkins-hbase17:46497] regionserver.HRegionServer(1478): Online Regions={4b73e169ebfeed69423385de141758d8=TestCP,,1690542127229.4b73e169ebfeed69423385de141758d8.} 2023-07-28 11:02:13,250 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1714): Acquired close lock on TestCP,,1690542127229.4b73e169ebfeed69423385de141758d8. after waiting 0 ms 2023-07-28 11:02:13,250 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1724): Updates disabled for region TestCP,,1690542127229.4b73e169ebfeed69423385de141758d8. 2023-07-28 11:02:13,250 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1690542126432.ec21f6123aff02d562a4d2f2eafb4289. after waiting 0 ms 2023-07-28 11:02:13,250 DEBUG [RS:2;jenkins-hbase17:46497] regionserver.HRegionServer(1504): Waiting on 4b73e169ebfeed69423385de141758d8 2023-07-28 11:02:13,250 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1690542126432.ec21f6123aff02d562a4d2f2eafb4289. 
2023-07-28 11:02:13,250 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(2745): Flushing ec21f6123aff02d562a4d2f2eafb4289 1/1 column families, dataSize=78 B heapSize=488 B 2023-07-28 11:02:13,284 INFO [regionserver/jenkins-hbase17:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-28 11:02:13,284 INFO [regionserver/jenkins-hbase17:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-28 11:02:13,288 INFO [regionserver/jenkins-hbase17:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-28 11:02:13,292 DEBUG [StoreCloser-TestCP,,1690542127229.4b73e169ebfeed69423385de141758d8.-1] regionserver.HStore(2712): Moving the files [hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/default/TestCP/4b73e169ebfeed69423385de141758d8/cf/be8dcbb4ca9b4e42b0bb397e3f9765b3, hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/default/TestCP/4b73e169ebfeed69423385de141758d8/cf/c957459edbd24e94aa567db8ced37441, hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/default/TestCP/4b73e169ebfeed69423385de141758d8/cf/91dde01646cc437fa459f60c7f3ca1e6, hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/default/TestCP/4b73e169ebfeed69423385de141758d8/cf/d14c83f2fe4648b2a52d535d61d2199b, hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/default/TestCP/4b73e169ebfeed69423385de141758d8/cf/5274925c82274706809e1dc7ae12bb0e, hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/default/TestCP/4b73e169ebfeed69423385de141758d8/cf/4e6027aa15b74a2bbadd430a49095946, 
hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/default/TestCP/4b73e169ebfeed69423385de141758d8/cf/7f6c5a8338204fa38160c57c14af0e30, hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/default/TestCP/4b73e169ebfeed69423385de141758d8/cf/a4ef92a70c184f5c8d67c7210a3abd8c, hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/default/TestCP/4b73e169ebfeed69423385de141758d8/cf/41b25809edda407db11625a6e5fa1364, hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/default/TestCP/4b73e169ebfeed69423385de141758d8/cf/0ec2c0c398b4462d8821be57303e7685, hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/default/TestCP/4b73e169ebfeed69423385de141758d8/cf/31be6da75fe449299320ad6acddc2f10, hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/default/TestCP/4b73e169ebfeed69423385de141758d8/cf/bfe6c9ff0ea94075bb6224bcf28cc58e, hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/default/TestCP/4b73e169ebfeed69423385de141758d8/cf/b578cdf5cb4d48c2bf4a5dc904abe451, hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/default/TestCP/4b73e169ebfeed69423385de141758d8/cf/4859a499b12a406a9ff9df692b875ec7, hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/default/TestCP/4b73e169ebfeed69423385de141758d8/cf/14ab693eef4648d4ba0a7b921533fe57, hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/default/TestCP/4b73e169ebfeed69423385de141758d8/cf/5298d72a1f054b4d877caa47e957f329, 
hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/default/TestCP/4b73e169ebfeed69423385de141758d8/cf/37a0800493ae4bc2b4430ecfdcaeb13e, hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/default/TestCP/4b73e169ebfeed69423385de141758d8/cf/db9abf087fcd4a6c805b3aed6230ae5f, hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/default/TestCP/4b73e169ebfeed69423385de141758d8/cf/6c72cb89706846c982dec393c2c33a51, hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/default/TestCP/4b73e169ebfeed69423385de141758d8/cf/6abfbfaef88a47f48a4baf7bc57b5f90, hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/default/TestCP/4b73e169ebfeed69423385de141758d8/cf/605416f2874d494282b8924c271a3bbe, hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/default/TestCP/4b73e169ebfeed69423385de141758d8/cf/d4455de3b6af48c7af2fdcfe63373353, hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/default/TestCP/4b73e169ebfeed69423385de141758d8/cf/7b3d99560c11488982bf904a7a95bd8b, hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/default/TestCP/4b73e169ebfeed69423385de141758d8/cf/054976795ebb4ca5b677aa6b1ee43f59, hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/default/TestCP/4b73e169ebfeed69423385de141758d8/cf/42dd36caa83c4020be93f18ca9a30af3, hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/default/TestCP/4b73e169ebfeed69423385de141758d8/cf/732edf8824424e18b99cba0d06742e48, hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/default/TestCP/4b73e169ebfeed69423385de141758d8/cf/27b3989d39f648ae815a2aca56f89417, hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/default/TestCP/4b73e169ebfeed69423385de141758d8/cf/b435738d745d4083ba435e7cb2f47ce1, hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/default/TestCP/4b73e169ebfeed69423385de141758d8/cf/e22352dc44914446a27217ad925fb94c, hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/default/TestCP/4b73e169ebfeed69423385de141758d8/cf/13c2e8a383bf4bbbaab2bdd8b798d47c, hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/default/TestCP/4b73e169ebfeed69423385de141758d8/cf/3f2bf805c1a5433cb274f215fa9b1a39, hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/default/TestCP/4b73e169ebfeed69423385de141758d8/cf/a5a95c956c4c4a9683f6e2b392d16866, hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/default/TestCP/4b73e169ebfeed69423385de141758d8/cf/eb86d957f9114c7598d6ceeb7a368321, hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/default/TestCP/4b73e169ebfeed69423385de141758d8/cf/dfbf8c20954d4796837a2795b1ea0b41, hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/default/TestCP/4b73e169ebfeed69423385de141758d8/cf/98033c0be4184394a53c9d1115c1996a, hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/default/TestCP/4b73e169ebfeed69423385de141758d8/cf/988399e14e4e4aad83b05b11c0f4ac17, hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/default/TestCP/4b73e169ebfeed69423385de141758d8/cf/4a398fcefa814816a389a50457f8a290, hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/default/TestCP/4b73e169ebfeed69423385de141758d8/cf/1d127e517971484191d2f362eb25a0a5, hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/default/TestCP/4b73e169ebfeed69423385de141758d8/cf/8e495bbc144d40d08bd67af2c7b95678, hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/default/TestCP/4b73e169ebfeed69423385de141758d8/cf/44d767a51d6b4c36920d8a32ac5a5f34, hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/default/TestCP/4b73e169ebfeed69423385de141758d8/cf/1a029448edaa4d5aa968ae4f1e2bdbf6, hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/default/TestCP/4b73e169ebfeed69423385de141758d8/cf/1e316082e8584192a866b180a8f9fd92] to archive
2023-07-28 11:02:13,293 DEBUG [StoreCloser-TestCP,,1690542127229.4b73e169ebfeed69423385de141758d8.-1] backup.HFileArchiver(360): Archiving compacted files.
2023-07-28 11:02:13,304 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=78 B at sequenceid=6 (bloomFilter=true), to=hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/hbase/namespace/ec21f6123aff02d562a4d2f2eafb4289/.tmp/info/47ec20cd0af74fe39711c11748af880b
2023-07-28 11:02:13,308 DEBUG [StoreCloser-TestCP,,1690542127229.4b73e169ebfeed69423385de141758d8.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/default/TestCP/4b73e169ebfeed69423385de141758d8/cf/be8dcbb4ca9b4e42b0bb397e3f9765b3 to hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/archive/data/default/TestCP/4b73e169ebfeed69423385de141758d8/cf/be8dcbb4ca9b4e42b0bb397e3f9765b3
2023-07-28 11:02:13,315 INFO [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=2.25 KB at sequenceid=14 (bloomFilter=false), to=hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/hbase/meta/1588230740/.tmp/info/44a692ff99eb430c975e0fbb477eb3ac
2023-07-28 11:02:13,316 DEBUG [StoreCloser-TestCP,,1690542127229.4b73e169ebfeed69423385de141758d8.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/default/TestCP/4b73e169ebfeed69423385de141758d8/cf/c957459edbd24e94aa567db8ced37441 to hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/archive/data/default/TestCP/4b73e169ebfeed69423385de141758d8/cf/c957459edbd24e94aa567db8ced37441
2023-07-28 11:02:13,318 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/hbase/namespace/ec21f6123aff02d562a4d2f2eafb4289/.tmp/info/47ec20cd0af74fe39711c11748af880b as hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/hbase/namespace/ec21f6123aff02d562a4d2f2eafb4289/info/47ec20cd0af74fe39711c11748af880b
2023-07-28 11:02:13,321 DEBUG [StoreCloser-TestCP,,1690542127229.4b73e169ebfeed69423385de141758d8.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/default/TestCP/4b73e169ebfeed69423385de141758d8/cf/91dde01646cc437fa459f60c7f3ca1e6 to hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/archive/data/default/TestCP/4b73e169ebfeed69423385de141758d8/cf/91dde01646cc437fa459f60c7f3ca1e6
2023-07-28 11:02:13,327 DEBUG [StoreCloser-TestCP,,1690542127229.4b73e169ebfeed69423385de141758d8.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/default/TestCP/4b73e169ebfeed69423385de141758d8/cf/d14c83f2fe4648b2a52d535d61d2199b to hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/archive/data/default/TestCP/4b73e169ebfeed69423385de141758d8/cf/d14c83f2fe4648b2a52d535d61d2199b
2023-07-28 11:02:13,330 DEBUG [StoreCloser-TestCP,,1690542127229.4b73e169ebfeed69423385de141758d8.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/default/TestCP/4b73e169ebfeed69423385de141758d8/cf/5274925c82274706809e1dc7ae12bb0e to hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/archive/data/default/TestCP/4b73e169ebfeed69423385de141758d8/cf/5274925c82274706809e1dc7ae12bb0e
2023-07-28 11:02:13,332 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/hbase/namespace/ec21f6123aff02d562a4d2f2eafb4289/info/47ec20cd0af74fe39711c11748af880b, entries=2, sequenceid=6, filesize=4.8 K
2023-07-28 11:02:13,335 DEBUG [StoreCloser-TestCP,,1690542127229.4b73e169ebfeed69423385de141758d8.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/default/TestCP/4b73e169ebfeed69423385de141758d8/cf/4e6027aa15b74a2bbadd430a49095946 to hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/archive/data/default/TestCP/4b73e169ebfeed69423385de141758d8/cf/4e6027aa15b74a2bbadd430a49095946
2023-07-28 11:02:13,336 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~78 B/78, heapSize ~472 B/472, currentSize=0 B/0 for ec21f6123aff02d562a4d2f2eafb4289 in 86ms, sequenceid=6, compaction requested=false
2023-07-28 11:02:13,342 DEBUG [StoreCloser-TestCP,,1690542127229.4b73e169ebfeed69423385de141758d8.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/default/TestCP/4b73e169ebfeed69423385de141758d8/cf/7f6c5a8338204fa38160c57c14af0e30 to hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/archive/data/default/TestCP/4b73e169ebfeed69423385de141758d8/cf/7f6c5a8338204fa38160c57c14af0e30
2023-07-28 11:02:13,345 DEBUG [StoreCloser-TestCP,,1690542127229.4b73e169ebfeed69423385de141758d8.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/default/TestCP/4b73e169ebfeed69423385de141758d8/cf/a4ef92a70c184f5c8d67c7210a3abd8c to hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/archive/data/default/TestCP/4b73e169ebfeed69423385de141758d8/cf/a4ef92a70c184f5c8d67c7210a3abd8c
2023-07-28 11:02:13,347 DEBUG [StoreCloser-TestCP,,1690542127229.4b73e169ebfeed69423385de141758d8.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/default/TestCP/4b73e169ebfeed69423385de141758d8/cf/41b25809edda407db11625a6e5fa1364 to hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/archive/data/default/TestCP/4b73e169ebfeed69423385de141758d8/cf/41b25809edda407db11625a6e5fa1364
2023-07-28 11:02:13,348 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/hbase/namespace/ec21f6123aff02d562a4d2f2eafb4289/recovered.edits/9.seqid, newMaxSeqId=9, maxSeqId=1
2023-07-28 11:02:13,351 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1838): Closed hbase:namespace,,1690542126432.ec21f6123aff02d562a4d2f2eafb4289.
2023-07-28 11:02:13,351 DEBUG [StoreCloser-TestCP,,1690542127229.4b73e169ebfeed69423385de141758d8.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/default/TestCP/4b73e169ebfeed69423385de141758d8/cf/0ec2c0c398b4462d8821be57303e7685 to hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/archive/data/default/TestCP/4b73e169ebfeed69423385de141758d8/cf/0ec2c0c398b4462d8821be57303e7685
2023-07-28 11:02:13,351 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1558): Region close journal for ec21f6123aff02d562a4d2f2eafb4289:
2023-07-28 11:02:13,353 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] handler.CloseRegionHandler(117): Closed hbase:namespace,,1690542126432.ec21f6123aff02d562a4d2f2eafb4289.
2023-07-28 11:02:13,356 DEBUG [StoreCloser-TestCP,,1690542127229.4b73e169ebfeed69423385de141758d8.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/default/TestCP/4b73e169ebfeed69423385de141758d8/cf/31be6da75fe449299320ad6acddc2f10 to hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/archive/data/default/TestCP/4b73e169ebfeed69423385de141758d8/cf/31be6da75fe449299320ad6acddc2f10
2023-07-28 11:02:13,356 INFO [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=170 B at sequenceid=14 (bloomFilter=false), to=hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/hbase/meta/1588230740/.tmp/table/7942748f70a746f595af83b28d112c2b
2023-07-28 11:02:13,358 DEBUG [StoreCloser-TestCP,,1690542127229.4b73e169ebfeed69423385de141758d8.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/default/TestCP/4b73e169ebfeed69423385de141758d8/cf/bfe6c9ff0ea94075bb6224bcf28cc58e to hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/archive/data/default/TestCP/4b73e169ebfeed69423385de141758d8/cf/bfe6c9ff0ea94075bb6224bcf28cc58e
2023-07-28 11:02:13,360 DEBUG [StoreCloser-TestCP,,1690542127229.4b73e169ebfeed69423385de141758d8.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/default/TestCP/4b73e169ebfeed69423385de141758d8/cf/b578cdf5cb4d48c2bf4a5dc904abe451 to hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/archive/data/default/TestCP/4b73e169ebfeed69423385de141758d8/cf/b578cdf5cb4d48c2bf4a5dc904abe451
2023-07-28 11:02:13,362 DEBUG [StoreCloser-TestCP,,1690542127229.4b73e169ebfeed69423385de141758d8.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/default/TestCP/4b73e169ebfeed69423385de141758d8/cf/4859a499b12a406a9ff9df692b875ec7 to hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/archive/data/default/TestCP/4b73e169ebfeed69423385de141758d8/cf/4859a499b12a406a9ff9df692b875ec7
2023-07-28 11:02:13,364 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/hbase/meta/1588230740/.tmp/info/44a692ff99eb430c975e0fbb477eb3ac as hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/hbase/meta/1588230740/info/44a692ff99eb430c975e0fbb477eb3ac
2023-07-28 11:02:13,364 DEBUG [StoreCloser-TestCP,,1690542127229.4b73e169ebfeed69423385de141758d8.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/default/TestCP/4b73e169ebfeed69423385de141758d8/cf/14ab693eef4648d4ba0a7b921533fe57 to hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/archive/data/default/TestCP/4b73e169ebfeed69423385de141758d8/cf/14ab693eef4648d4ba0a7b921533fe57
2023-07-28 11:02:13,367 DEBUG [StoreCloser-TestCP,,1690542127229.4b73e169ebfeed69423385de141758d8.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/default/TestCP/4b73e169ebfeed69423385de141758d8/cf/5298d72a1f054b4d877caa47e957f329 to hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/archive/data/default/TestCP/4b73e169ebfeed69423385de141758d8/cf/5298d72a1f054b4d877caa47e957f329
2023-07-28 11:02:13,369 DEBUG [StoreCloser-TestCP,,1690542127229.4b73e169ebfeed69423385de141758d8.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/default/TestCP/4b73e169ebfeed69423385de141758d8/cf/37a0800493ae4bc2b4430ecfdcaeb13e to hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/archive/data/default/TestCP/4b73e169ebfeed69423385de141758d8/cf/37a0800493ae4bc2b4430ecfdcaeb13e
2023-07-28 11:02:13,371 DEBUG [StoreCloser-TestCP,,1690542127229.4b73e169ebfeed69423385de141758d8.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/default/TestCP/4b73e169ebfeed69423385de141758d8/cf/db9abf087fcd4a6c805b3aed6230ae5f to hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/archive/data/default/TestCP/4b73e169ebfeed69423385de141758d8/cf/db9abf087fcd4a6c805b3aed6230ae5f
2023-07-28 11:02:13,372 INFO [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/hbase/meta/1588230740/info/44a692ff99eb430c975e0fbb477eb3ac, entries=20, sequenceid=14, filesize=6.9 K
2023-07-28 11:02:13,373 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/hbase/meta/1588230740/.tmp/table/7942748f70a746f595af83b28d112c2b as hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/hbase/meta/1588230740/table/7942748f70a746f595af83b28d112c2b
2023-07-28 11:02:13,373 DEBUG [StoreCloser-TestCP,,1690542127229.4b73e169ebfeed69423385de141758d8.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/default/TestCP/4b73e169ebfeed69423385de141758d8/cf/6c72cb89706846c982dec393c2c33a51 to hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/archive/data/default/TestCP/4b73e169ebfeed69423385de141758d8/cf/6c72cb89706846c982dec393c2c33a51
2023-07-28 11:02:13,375 DEBUG [StoreCloser-TestCP,,1690542127229.4b73e169ebfeed69423385de141758d8.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/default/TestCP/4b73e169ebfeed69423385de141758d8/cf/6abfbfaef88a47f48a4baf7bc57b5f90 to hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/archive/data/default/TestCP/4b73e169ebfeed69423385de141758d8/cf/6abfbfaef88a47f48a4baf7bc57b5f90
2023-07-28 11:02:13,376 DEBUG [StoreCloser-TestCP,,1690542127229.4b73e169ebfeed69423385de141758d8.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/default/TestCP/4b73e169ebfeed69423385de141758d8/cf/605416f2874d494282b8924c271a3bbe to hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/archive/data/default/TestCP/4b73e169ebfeed69423385de141758d8/cf/605416f2874d494282b8924c271a3bbe
2023-07-28 11:02:13,378 DEBUG [StoreCloser-TestCP,,1690542127229.4b73e169ebfeed69423385de141758d8.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/default/TestCP/4b73e169ebfeed69423385de141758d8/cf/d4455de3b6af48c7af2fdcfe63373353 to hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/archive/data/default/TestCP/4b73e169ebfeed69423385de141758d8/cf/d4455de3b6af48c7af2fdcfe63373353
2023-07-28 11:02:13,379 INFO [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/hbase/meta/1588230740/table/7942748f70a746f595af83b28d112c2b, entries=4, sequenceid=14, filesize=4.7 K
2023-07-28 11:02:13,380 INFO [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~2.42 KB/2473, heapSize ~4.65 KB/4760, currentSize=0 B/0 for 1588230740 in 131ms, sequenceid=14, compaction requested=false
2023-07-28 11:02:13,382 DEBUG [StoreCloser-TestCP,,1690542127229.4b73e169ebfeed69423385de141758d8.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/default/TestCP/4b73e169ebfeed69423385de141758d8/cf/7b3d99560c11488982bf904a7a95bd8b to hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/archive/data/default/TestCP/4b73e169ebfeed69423385de141758d8/cf/7b3d99560c11488982bf904a7a95bd8b
2023-07-28 11:02:13,388 DEBUG [StoreCloser-TestCP,,1690542127229.4b73e169ebfeed69423385de141758d8.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/default/TestCP/4b73e169ebfeed69423385de141758d8/cf/054976795ebb4ca5b677aa6b1ee43f59 to hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/archive/data/default/TestCP/4b73e169ebfeed69423385de141758d8/cf/054976795ebb4ca5b677aa6b1ee43f59
2023-07-28 11:02:13,390 DEBUG [StoreCloser-TestCP,,1690542127229.4b73e169ebfeed69423385de141758d8.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/default/TestCP/4b73e169ebfeed69423385de141758d8/cf/42dd36caa83c4020be93f18ca9a30af3 to hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/archive/data/default/TestCP/4b73e169ebfeed69423385de141758d8/cf/42dd36caa83c4020be93f18ca9a30af3
2023-07-28 11:02:13,390 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/hbase/meta/1588230740/recovered.edits/17.seqid, newMaxSeqId=17, maxSeqId=1
2023-07-28 11:02:13,391 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint
2023-07-28 11:02:13,391 DEBUG [StoreCloser-TestCP,,1690542127229.4b73e169ebfeed69423385de141758d8.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/default/TestCP/4b73e169ebfeed69423385de141758d8/cf/732edf8824424e18b99cba0d06742e48 to hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/archive/data/default/TestCP/4b73e169ebfeed69423385de141758d8/cf/732edf8824424e18b99cba0d06742e48
2023-07-28 11:02:13,392 INFO [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740
2023-07-28 11:02:13,393 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1558): Region close journal for 1588230740:
2023-07-28 11:02:13,393 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase17:0-0] handler.CloseRegionHandler(117): Closed hbase:meta,,1.1588230740
2023-07-28 11:02:13,394 DEBUG [StoreCloser-TestCP,,1690542127229.4b73e169ebfeed69423385de141758d8.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/default/TestCP/4b73e169ebfeed69423385de141758d8/cf/27b3989d39f648ae815a2aca56f89417 to hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/archive/data/default/TestCP/4b73e169ebfeed69423385de141758d8/cf/27b3989d39f648ae815a2aca56f89417
2023-07-28 11:02:13,396 DEBUG [StoreCloser-TestCP,,1690542127229.4b73e169ebfeed69423385de141758d8.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/default/TestCP/4b73e169ebfeed69423385de141758d8/cf/b435738d745d4083ba435e7cb2f47ce1 to hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/archive/data/default/TestCP/4b73e169ebfeed69423385de141758d8/cf/b435738d745d4083ba435e7cb2f47ce1
2023-07-28 11:02:13,397 DEBUG [StoreCloser-TestCP,,1690542127229.4b73e169ebfeed69423385de141758d8.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/default/TestCP/4b73e169ebfeed69423385de141758d8/cf/e22352dc44914446a27217ad925fb94c to hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/archive/data/default/TestCP/4b73e169ebfeed69423385de141758d8/cf/e22352dc44914446a27217ad925fb94c
2023-07-28 11:02:13,399 DEBUG [StoreCloser-TestCP,,1690542127229.4b73e169ebfeed69423385de141758d8.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/default/TestCP/4b73e169ebfeed69423385de141758d8/cf/13c2e8a383bf4bbbaab2bdd8b798d47c to hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/archive/data/default/TestCP/4b73e169ebfeed69423385de141758d8/cf/13c2e8a383bf4bbbaab2bdd8b798d47c
2023-07-28 11:02:13,401 DEBUG [StoreCloser-TestCP,,1690542127229.4b73e169ebfeed69423385de141758d8.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/default/TestCP/4b73e169ebfeed69423385de141758d8/cf/3f2bf805c1a5433cb274f215fa9b1a39 to hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/archive/data/default/TestCP/4b73e169ebfeed69423385de141758d8/cf/3f2bf805c1a5433cb274f215fa9b1a39
2023-07-28 11:02:13,402 DEBUG [StoreCloser-TestCP,,1690542127229.4b73e169ebfeed69423385de141758d8.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/default/TestCP/4b73e169ebfeed69423385de141758d8/cf/a5a95c956c4c4a9683f6e2b392d16866 to hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/archive/data/default/TestCP/4b73e169ebfeed69423385de141758d8/cf/a5a95c956c4c4a9683f6e2b392d16866
2023-07-28 11:02:13,404 DEBUG [StoreCloser-TestCP,,1690542127229.4b73e169ebfeed69423385de141758d8.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/default/TestCP/4b73e169ebfeed69423385de141758d8/cf/eb86d957f9114c7598d6ceeb7a368321 to hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/archive/data/default/TestCP/4b73e169ebfeed69423385de141758d8/cf/eb86d957f9114c7598d6ceeb7a368321
2023-07-28 11:02:13,406 DEBUG [StoreCloser-TestCP,,1690542127229.4b73e169ebfeed69423385de141758d8.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/default/TestCP/4b73e169ebfeed69423385de141758d8/cf/dfbf8c20954d4796837a2795b1ea0b41 to hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/archive/data/default/TestCP/4b73e169ebfeed69423385de141758d8/cf/dfbf8c20954d4796837a2795b1ea0b41
2023-07-28 11:02:13,408 DEBUG [StoreCloser-TestCP,,1690542127229.4b73e169ebfeed69423385de141758d8.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/default/TestCP/4b73e169ebfeed69423385de141758d8/cf/98033c0be4184394a53c9d1115c1996a to hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/archive/data/default/TestCP/4b73e169ebfeed69423385de141758d8/cf/98033c0be4184394a53c9d1115c1996a
2023-07-28 11:02:13,409 DEBUG [StoreCloser-TestCP,,1690542127229.4b73e169ebfeed69423385de141758d8.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/default/TestCP/4b73e169ebfeed69423385de141758d8/cf/988399e14e4e4aad83b05b11c0f4ac17 to hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/archive/data/default/TestCP/4b73e169ebfeed69423385de141758d8/cf/988399e14e4e4aad83b05b11c0f4ac17
2023-07-28 11:02:13,411 DEBUG [StoreCloser-TestCP,,1690542127229.4b73e169ebfeed69423385de141758d8.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/default/TestCP/4b73e169ebfeed69423385de141758d8/cf/4a398fcefa814816a389a50457f8a290 to hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/archive/data/default/TestCP/4b73e169ebfeed69423385de141758d8/cf/4a398fcefa814816a389a50457f8a290
2023-07-28 11:02:13,412 DEBUG [StoreCloser-TestCP,,1690542127229.4b73e169ebfeed69423385de141758d8.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/default/TestCP/4b73e169ebfeed69423385de141758d8/cf/1d127e517971484191d2f362eb25a0a5 to hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/archive/data/default/TestCP/4b73e169ebfeed69423385de141758d8/cf/1d127e517971484191d2f362eb25a0a5
2023-07-28 11:02:13,413 DEBUG [StoreCloser-TestCP,,1690542127229.4b73e169ebfeed69423385de141758d8.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/default/TestCP/4b73e169ebfeed69423385de141758d8/cf/8e495bbc144d40d08bd67af2c7b95678 to hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/archive/data/default/TestCP/4b73e169ebfeed69423385de141758d8/cf/8e495bbc144d40d08bd67af2c7b95678
2023-07-28 11:02:13,414 DEBUG [StoreCloser-TestCP,,1690542127229.4b73e169ebfeed69423385de141758d8.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/default/TestCP/4b73e169ebfeed69423385de141758d8/cf/44d767a51d6b4c36920d8a32ac5a5f34 to hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/archive/data/default/TestCP/4b73e169ebfeed69423385de141758d8/cf/44d767a51d6b4c36920d8a32ac5a5f34
2023-07-28 11:02:13,416 DEBUG [StoreCloser-TestCP,,1690542127229.4b73e169ebfeed69423385de141758d8.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/default/TestCP/4b73e169ebfeed69423385de141758d8/cf/1a029448edaa4d5aa968ae4f1e2bdbf6 to hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/archive/data/default/TestCP/4b73e169ebfeed69423385de141758d8/cf/1a029448edaa4d5aa968ae4f1e2bdbf6
2023-07-28 11:02:13,417 DEBUG [StoreCloser-TestCP,,1690542127229.4b73e169ebfeed69423385de141758d8.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/default/TestCP/4b73e169ebfeed69423385de141758d8/cf/1e316082e8584192a866b180a8f9fd92 to hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/archive/data/default/TestCP/4b73e169ebfeed69423385de141758d8/cf/1e316082e8584192a866b180a8f9fd92
2023-07-28 11:02:13,440 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/data/default/TestCP/4b73e169ebfeed69423385de141758d8/recovered.edits/10105.seqid, newMaxSeqId=10105, maxSeqId=1
2023-07-28 11:02:13,441 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.example.WriteHeavyIncrementObserver
2023-07-28 11:02:13,442 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1838): Closed TestCP,,1690542127229.4b73e169ebfeed69423385de141758d8.
2023-07-28 11:02:13,442 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] regionserver.HRegion(1558): Region close journal for 4b73e169ebfeed69423385de141758d8:
2023-07-28 11:02:13,442 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase17:0-0] handler.CloseRegionHandler(117): Closed TestCP,,1690542127229.4b73e169ebfeed69423385de141758d8.
2023-07-28 11:02:13,444 INFO [RS:0;jenkins-hbase17:38067] regionserver.HRegionServer(1170): stopping server jenkins-hbase17.apache.org,38067,1690542124169; all regions closed.
2023-07-28 11:02:13,444 INFO [RS:1;jenkins-hbase17:34167] regionserver.HRegionServer(1170): stopping server jenkins-hbase17.apache.org,34167,1690542124242; all regions closed.
2023-07-28 11:02:13,451 INFO [RS:2;jenkins-hbase17:46497] regionserver.HRegionServer(1170): stopping server jenkins-hbase17.apache.org,46497,1690542124299; all regions closed.
2023-07-28 11:02:13,460 DEBUG [RS:1;jenkins-hbase17:34167] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/oldWALs
2023-07-28 11:02:13,460 INFO [RS:1;jenkins-hbase17:34167] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase17.apache.org%2C34167%2C1690542124242.meta:.meta(num 1690542126151)
2023-07-28 11:02:13,460 DEBUG [RS:0;jenkins-hbase17:38067] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/oldWALs
2023-07-28 11:02:13,460 INFO [RS:0;jenkins-hbase17:38067] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase17.apache.org%2C38067%2C1690542124169:(num 1690542126013)
2023-07-28 11:02:13,460 DEBUG [RS:0;jenkins-hbase17:38067] ipc.AbstractRpcClient(494): Stopping rpc client
2023-07-28 11:02:13,460 INFO [RS:0;jenkins-hbase17:38067] regionserver.LeaseManager(133): Closed leases
2023-07-28 11:02:13,460 INFO [RS:0;jenkins-hbase17:38067] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase17:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown
2023-07-28 11:02:13,461 INFO [RS:0;jenkins-hbase17:38067] regionserver.CompactSplit(434): Waiting for Split Thread to finish...
2023-07-28 11:02:13,461 INFO [RS:0;jenkins-hbase17:38067] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish...
2023-07-28 11:02:13,461 INFO [RS:0;jenkins-hbase17:38067] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish...
2023-07-28 11:02:13,462 INFO [regionserver/jenkins-hbase17:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting.
2023-07-28 11:02:13,463 DEBUG [RS:2;jenkins-hbase17:46497] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/oldWALs
2023-07-28 11:02:13,463 INFO [RS:2;jenkins-hbase17:46497] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase17.apache.org%2C46497%2C1690542124299:(num 1690542126005)
2023-07-28 11:02:13,463 INFO [RS:0;jenkins-hbase17:38067] ipc.NettyRpcServer(158): Stopping server on /136.243.18.41:38067
2023-07-28 11:02:13,463 DEBUG [RS:2;jenkins-hbase17:46497] ipc.AbstractRpcClient(494): Stopping rpc client
2023-07-28 11:02:13,465 INFO [RS:2;jenkins-hbase17:46497] regionserver.LeaseManager(133): Closed leases
2023-07-28 11:02:13,466 INFO [RS:2;jenkins-hbase17:46497] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase17:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown
2023-07-28 11:02:13,466 INFO [RS:2;jenkins-hbase17:46497] regionserver.CompactSplit(434): Waiting for Split Thread to finish...
2023-07-28 11:02:13,467 INFO [RS:2;jenkins-hbase17:46497] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish...
2023-07-28 11:02:13,466 INFO [regionserver/jenkins-hbase17:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting.
2023-07-28 11:02:13,467 INFO [RS:2;jenkins-hbase17:46497] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish...
2023-07-28 11:02:13,469 INFO [RS:2;jenkins-hbase17:46497] ipc.NettyRpcServer(158): Stopping server on /136.243.18.41:46497
2023-07-28 11:02:13,474 DEBUG [Listener at localhost.localdomain/34871-EventThread] zookeeper.ZKWatcher(600): regionserver:34167-0x101ab971e4e0002, quorum=127.0.0.1:57744, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase17.apache.org,46497,1690542124299
2023-07-28 11:02:13,474 DEBUG [Listener at localhost.localdomain/34871-EventThread] zookeeper.ZKWatcher(600): regionserver:34167-0x101ab971e4e0002, quorum=127.0.0.1:57744, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs
2023-07-28 11:02:13,474 DEBUG [Listener at localhost.localdomain/34871-EventThread] zookeeper.ZKWatcher(600): master:42003-0x101ab971e4e0000, quorum=127.0.0.1:57744, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs
2023-07-28 11:02:13,474 DEBUG [Listener at localhost.localdomain/34871-EventThread] zookeeper.ZKWatcher(600): regionserver:46497-0x101ab971e4e0003, quorum=127.0.0.1:57744, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase17.apache.org,46497,1690542124299
2023-07-28 11:02:13,474 DEBUG [Listener at localhost.localdomain/34871-EventThread] zookeeper.ZKWatcher(600): regionserver:46497-0x101ab971e4e0003, quorum=127.0.0.1:57744, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs
2023-07-28 11:02:13,474 DEBUG [Listener at localhost.localdomain/34871-EventThread] zookeeper.ZKWatcher(600): regionserver:38067-0x101ab971e4e0001, quorum=127.0.0.1:57744, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase17.apache.org,46497,1690542124299
2023-07-28 11:02:13,474 DEBUG [Listener at localhost.localdomain/34871-EventThread] zookeeper.ZKWatcher(600): regionserver:38067-0x101ab971e4e0001, quorum=127.0.0.1:57744, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs
2023-07-28 11:02:13,477 DEBUG [RS:1;jenkins-hbase17:34167] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/oldWALs
2023-07-28 11:02:13,478 INFO [RS:1;jenkins-hbase17:34167] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase17.apache.org%2C34167%2C1690542124242:(num 1690542126012)
2023-07-28 11:02:13,478 DEBUG [RS:1;jenkins-hbase17:34167] ipc.AbstractRpcClient(494): Stopping rpc client
2023-07-28 11:02:13,478 INFO [RS:1;jenkins-hbase17:34167] regionserver.LeaseManager(133): Closed leases
2023-07-28 11:02:13,479 DEBUG [Listener at localhost.localdomain/34871-EventThread] zookeeper.ZKWatcher(600): regionserver:34167-0x101ab971e4e0002, quorum=127.0.0.1:57744, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase17.apache.org,38067,1690542124169
2023-07-28 11:02:13,479 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase17.apache.org,38067,1690542124169]
2023-07-28 11:02:13,479 DEBUG [Listener at localhost.localdomain/34871-EventThread] zookeeper.ZKWatcher(600): regionserver:38067-0x101ab971e4e0001, quorum=127.0.0.1:57744, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase17.apache.org,38067,1690542124169
2023-07-28 11:02:13,479 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase17.apache.org,38067,1690542124169; numProcessing=1
2023-07-28 11:02:13,479 DEBUG [Listener at localhost.localdomain/34871-EventThread] zookeeper.ZKWatcher(600): regionserver:46497-0x101ab971e4e0003, quorum=127.0.0.1:57744, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase17.apache.org,38067,1690542124169
2023-07-28 11:02:13,480 INFO [RS:1;jenkins-hbase17:34167] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase17:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown
2023-07-28 11:02:13,480 INFO [regionserver/jenkins-hbase17:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting.
2023-07-28 11:02:13,481 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase17.apache.org,38067,1690542124169 already deleted, retry=false
2023-07-28 11:02:13,481 INFO [RS:1;jenkins-hbase17:34167] ipc.NettyRpcServer(158): Stopping server on /136.243.18.41:34167
2023-07-28 11:02:13,481 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase17.apache.org,38067,1690542124169 expired; onlineServers=2
2023-07-28 11:02:13,481 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase17.apache.org,46497,1690542124299]
2023-07-28 11:02:13,482 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase17.apache.org,46497,1690542124299; numProcessing=2
2023-07-28 11:02:13,580 DEBUG [Listener at localhost.localdomain/34871-EventThread] zookeeper.ZKWatcher(600): regionserver:46497-0x101ab971e4e0003, quorum=127.0.0.1:57744, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null
2023-07-28 11:02:13,580 INFO [RS:2;jenkins-hbase17:46497] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase17.apache.org,46497,1690542124299; zookeeper connection closed.
2023-07-28 11:02:13,580 DEBUG [Listener at localhost.localdomain/34871-EventThread] zookeeper.ZKWatcher(600): regionserver:46497-0x101ab971e4e0003, quorum=127.0.0.1:57744, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null
2023-07-28 11:02:13,580 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@292bad7b] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@292bad7b
2023-07-28 11:02:13,582 DEBUG [Listener at localhost.localdomain/34871-EventThread] zookeeper.ZKWatcher(600): master:42003-0x101ab971e4e0000, quorum=127.0.0.1:57744, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs
2023-07-28 11:02:13,582 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase17.apache.org,46497,1690542124299 already deleted, retry=false
2023-07-28 11:02:13,582 DEBUG [Listener at localhost.localdomain/34871-EventThread] zookeeper.ZKWatcher(600): regionserver:34167-0x101ab971e4e0002, quorum=127.0.0.1:57744, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase17.apache.org,34167,1690542124242
2023-07-28 11:02:13,582 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase17.apache.org,46497,1690542124299 expired; onlineServers=1
2023-07-28 11:02:13,680 DEBUG [Listener at localhost.localdomain/34871-EventThread] zookeeper.ZKWatcher(600): regionserver:38067-0x101ab971e4e0001, quorum=127.0.0.1:57744, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null
2023-07-28 11:02:13,680 INFO [RS:0;jenkins-hbase17:38067] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase17.apache.org,38067,1690542124169; zookeeper connection closed.
2023-07-28 11:02:13,680 DEBUG [Listener at localhost.localdomain/34871-EventThread] zookeeper.ZKWatcher(600): regionserver:38067-0x101ab971e4e0001, quorum=127.0.0.1:57744, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null
2023-07-28 11:02:13,681 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@3e22533] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@3e22533
2023-07-28 11:02:13,681 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase17.apache.org,34167,1690542124242]
2023-07-28 11:02:13,681 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase17.apache.org,34167,1690542124242; numProcessing=3
2023-07-28 11:02:13,682 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase17.apache.org,34167,1690542124242 already deleted, retry=false
2023-07-28 11:02:13,683 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase17.apache.org,34167,1690542124242 expired; onlineServers=0
2023-07-28 11:02:13,683 INFO [RegionServerTracker-0] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase17.apache.org,42003,1690542122606' *****
2023-07-28 11:02:13,683 INFO [RegionServerTracker-0] regionserver.HRegionServer(2311): STOPPED: Cluster shutdown set; onlineServer=0
2023-07-28 11:02:13,683 DEBUG [M:0;jenkins-hbase17:42003] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@6f3568ee, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase17.apache.org/136.243.18.41:0
2023-07-28 11:02:13,683 INFO [M:0;jenkins-hbase17:42003] regionserver.HRegionServer(1109): Stopping infoServer
2023-07-28 11:02:13,688 DEBUG [Listener at localhost.localdomain/34871-EventThread] zookeeper.ZKWatcher(600): master:42003-0x101ab971e4e0000, quorum=127.0.0.1:57744, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/master
2023-07-28 11:02:13,688 DEBUG [Listener at localhost.localdomain/34871-EventThread] zookeeper.ZKWatcher(600): master:42003-0x101ab971e4e0000, quorum=127.0.0.1:57744, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase
2023-07-28 11:02:13,688 INFO [M:0;jenkins-hbase17:42003] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@fd66f76{master,/,null,STOPPED}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/master}
2023-07-28 11:02:13,689 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:42003-0x101ab971e4e0000, quorum=127.0.0.1:57744, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master
2023-07-28 11:02:13,689 INFO [M:0;jenkins-hbase17:42003] server.AbstractConnector(383): Stopped ServerConnector@6079f759{HTTP/1.1, (http/1.1)}{0.0.0.0:0}
2023-07-28 11:02:13,689 INFO [M:0;jenkins-hbase17:42003] session.HouseKeeper(149): node0 Stopped scavenging
2023-07-28 11:02:13,691 INFO [M:0;jenkins-hbase17:42003] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@373e4047{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,STOPPED}
2023-07-28 11:02:13,691 INFO [M:0;jenkins-hbase17:42003] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@479e369e{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-examples/target/test-data/3a0174d0-b9c7-07c8-9f76-818fb1a3ca1a/hadoop.log.dir/,STOPPED}
2023-07-28 11:02:13,692 INFO [M:0;jenkins-hbase17:42003] regionserver.HRegionServer(1144): stopping server jenkins-hbase17.apache.org,42003,1690542122606
2023-07-28 11:02:13,692 INFO [M:0;jenkins-hbase17:42003] regionserver.HRegionServer(1170): stopping server jenkins-hbase17.apache.org,42003,1690542122606; all regions closed.
2023-07-28 11:02:13,692 DEBUG [M:0;jenkins-hbase17:42003] ipc.AbstractRpcClient(494): Stopping rpc client
2023-07-28 11:02:13,692 INFO [M:0;jenkins-hbase17:42003] master.HMaster(1491): Stopping master jetty server
2023-07-28 11:02:13,693 INFO [M:0;jenkins-hbase17:42003] server.AbstractConnector(383): Stopped ServerConnector@496cf30a{HTTP/1.1, (http/1.1)}{0.0.0.0:0}
2023-07-28 11:02:13,693 DEBUG [M:0;jenkins-hbase17:42003] cleaner.LogCleaner(198): Cancelling LogCleaner
2023-07-28 11:02:13,693 WARN [OldWALsCleaner-0] cleaner.LogCleaner(186): Interrupted while cleaning old WALs, will try to clean it next round. Exiting.
2023-07-28 11:02:13,693 DEBUG [M:0;jenkins-hbase17:42003] cleaner.HFileCleaner(317): Stopping file delete threads
2023-07-28 11:02:13,693 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster-HFileCleaner.small.0-1690542125581] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase17:0:becomeActiveMaster-HFileCleaner.small.0-1690542125581,5,FailOnTimeoutGroup]
2023-07-28 11:02:13,693 DEBUG [master/jenkins-hbase17:0:becomeActiveMaster-HFileCleaner.large.0-1690542125580] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase17:0:becomeActiveMaster-HFileCleaner.large.0-1690542125580,5,FailOnTimeoutGroup]
2023-07-28 11:02:13,693 INFO [M:0;jenkins-hbase17:42003] master.MasterMobCompactionThread(168): Waiting for Mob Compaction Thread to finish...
2023-07-28 11:02:13,694 INFO [M:0;jenkins-hbase17:42003] master.MasterMobCompactionThread(168): Waiting for Region Server Mob Compaction Thread to finish...
2023-07-28 11:02:13,694 INFO [M:0;jenkins-hbase17:42003] hbase.ChoreService(369): Chore service for: master/jenkins-hbase17:0 had [] on shutdown
2023-07-28 11:02:13,694 DEBUG [M:0;jenkins-hbase17:42003] master.HMaster(1512): Stopping service threads
2023-07-28 11:02:13,694 INFO [M:0;jenkins-hbase17:42003] procedure2.RemoteProcedureDispatcher(119): Stopping procedure remote dispatcher
2023-07-28 11:02:13,694 ERROR [M:0;jenkins-hbase17:42003] procedure2.ProcedureExecutor(653): ThreadGroup java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] contains running threads; null: See STDOUT java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] Thread[HFileArchiver-1,5,PEWorkerGroup] Thread[HFileArchiver-2,5,PEWorkerGroup]
2023-07-28 11:02:13,695 INFO [M:0;jenkins-hbase17:42003] region.RegionProcedureStore(113): Stopping the Region Procedure Store, isAbort=false
2023-07-28 11:02:13,695 DEBUG [normalizer-worker-0] normalizer.RegionNormalizerWorker(174): interrupt detected. terminating.
2023-07-28 11:02:13,695 DEBUG [M:0;jenkins-hbase17:42003] zookeeper.ZKUtil(398): master:42003-0x101ab971e4e0000, quorum=127.0.0.1:57744, baseZNode=/hbase Unable to get data of znode /hbase/master because node does not exist (not an error)
2023-07-28 11:02:13,695 WARN [M:0;jenkins-hbase17:42003] master.ActiveMasterManager(326): Failed get of master address: java.io.IOException: Can't get master address from ZooKeeper; znode data == null
2023-07-28 11:02:13,695 INFO [M:0;jenkins-hbase17:42003] assignment.AssignmentManager(315): Stopping assignment manager
2023-07-28 11:02:13,695 INFO [M:0;jenkins-hbase17:42003] region.MasterRegion(167): Closing local region {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, isAbort=false
2023-07-28 11:02:13,696 DEBUG [M:0;jenkins-hbase17:42003] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes
2023-07-28 11:02:13,696 INFO [M:0;jenkins-hbase17:42003] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682.
2023-07-28 11:02:13,696 DEBUG [M:0;jenkins-hbase17:42003] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682.
2023-07-28 11:02:13,696 DEBUG [M:0;jenkins-hbase17:42003] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms
2023-07-28 11:02:13,696 DEBUG [M:0;jenkins-hbase17:42003] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682.
2023-07-28 11:02:13,696 INFO [M:0;jenkins-hbase17:42003] regionserver.HRegion(2745): Flushing 1595e783b53d99cd5eef43b6debb2682 1/1 column families, dataSize=37.98 KB heapSize=45.63 KB
2023-07-28 11:02:13,712 INFO [M:0;jenkins-hbase17:42003] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=37.98 KB at sequenceid=91 (bloomFilter=true), to=hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/8d4303871a6142238b58cb2f8408ee55
2023-07-28 11:02:13,717 DEBUG [M:0;jenkins-hbase17:42003] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/8d4303871a6142238b58cb2f8408ee55 as hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/8d4303871a6142238b58cb2f8408ee55
2023-07-28 11:02:13,726 INFO [M:0;jenkins-hbase17:42003] regionserver.HStore(1080): Added hdfs://localhost.localdomain:39247/user/jenkins/test-data/61395470-7669-fb57-f381-fc9100d4a02b/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/8d4303871a6142238b58cb2f8408ee55, entries=11, sequenceid=91, filesize=7.1 K
2023-07-28 11:02:13,727 INFO [M:0;jenkins-hbase17:42003] regionserver.HRegion(2948): Finished flush of dataSize ~37.98 KB/38894, heapSize ~45.61 KB/46704, currentSize=0 B/0 for 1595e783b53d99cd5eef43b6debb2682 in 31ms, sequenceid=91, compaction requested=false
2023-07-28 11:02:13,728 INFO [M:0;jenkins-hbase17:42003] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682.
2023-07-28 11:02:13,728 DEBUG [M:0;jenkins-hbase17:42003] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682:
2023-07-28 11:02:13,732 INFO [master:store-WAL-Roller] wal.AbstractWALRoller(243): LogRoller exiting.
2023-07-28 11:02:13,732 INFO [M:0;jenkins-hbase17:42003] flush.MasterFlushTableProcedureManager(83): stop: server shutting down.
2023-07-28 11:02:13,732 INFO [M:0;jenkins-hbase17:42003] ipc.NettyRpcServer(158): Stopping server on /136.243.18.41:42003
2023-07-28 11:02:13,733 DEBUG [M:0;jenkins-hbase17:42003] zookeeper.RecoverableZooKeeper(172): Node /hbase/rs/jenkins-hbase17.apache.org,42003,1690542122606 already deleted, retry=false
2023-07-28 11:02:13,794 DEBUG [Listener at localhost.localdomain/34871-EventThread] zookeeper.ZKWatcher(600): regionserver:34167-0x101ab971e4e0002, quorum=127.0.0.1:57744, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null
2023-07-28 11:02:13,794 INFO [RS:1;jenkins-hbase17:34167] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase17.apache.org,34167,1690542124242; zookeeper connection closed.
2023-07-28 11:02:13,794 DEBUG [Listener at localhost.localdomain/34871-EventThread] zookeeper.ZKWatcher(600): regionserver:34167-0x101ab971e4e0002, quorum=127.0.0.1:57744, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null
2023-07-28 11:02:13,795 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@43799cc] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@43799cc
2023-07-28 11:02:13,795 INFO [Listener at localhost.localdomain/34871] util.JVMClusterUtil(335): Shutdown of 1 master(s) and 3 regionserver(s) complete
2023-07-28 11:02:13,894 DEBUG [Listener at localhost.localdomain/34871-EventThread] zookeeper.ZKWatcher(600): master:42003-0x101ab971e4e0000, quorum=127.0.0.1:57744, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null
2023-07-28 11:02:13,894 DEBUG [Listener at localhost.localdomain/34871-EventThread] zookeeper.ZKWatcher(600): master:42003-0x101ab971e4e0000, quorum=127.0.0.1:57744, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null
2023-07-28 11:02:13,894 INFO [M:0;jenkins-hbase17:42003] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase17.apache.org,42003,1690542122606; zookeeper connection closed.
2023-07-28 11:02:13,897 WARN [Listener at localhost.localdomain/34871] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called
2023-07-28 11:02:13,917 INFO [Listener at localhost.localdomain/34871] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0
2023-07-28 11:02:14,028 WARN [BP-1351019224-136.243.18.41-1690542119108 heartbeating to localhost.localdomain/127.0.0.1:39247] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted
2023-07-28 11:02:14,029 WARN [BP-1351019224-136.243.18.41-1690542119108 heartbeating to localhost.localdomain/127.0.0.1:39247] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1351019224-136.243.18.41-1690542119108 (Datanode Uuid 7da0c9f3-dcb5-4341-afdc-5aa62f3a2895) service to localhost.localdomain/127.0.0.1:39247
2023-07-28 11:02:14,030 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-examples/target/test-data/3a0174d0-b9c7-07c8-9f76-818fb1a3ca1a/cluster_435c5486-8dbd-6b85-9873-6bfa86d696ae/dfs/data/data5/current/BP-1351019224-136.243.18.41-1690542119108] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted
2023-07-28 11:02:14,031 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-examples/target/test-data/3a0174d0-b9c7-07c8-9f76-818fb1a3ca1a/cluster_435c5486-8dbd-6b85-9873-6bfa86d696ae/dfs/data/data6/current/BP-1351019224-136.243.18.41-1690542119108] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted
2023-07-28 11:02:14,032 WARN [Listener at localhost.localdomain/34871] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called
2023-07-28 11:02:14,034 INFO [Listener at localhost.localdomain/34871] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0
2023-07-28 11:02:14,137 WARN [BP-1351019224-136.243.18.41-1690542119108 heartbeating to localhost.localdomain/127.0.0.1:39247] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted
2023-07-28 11:02:14,137 WARN [BP-1351019224-136.243.18.41-1690542119108 heartbeating to localhost.localdomain/127.0.0.1:39247] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1351019224-136.243.18.41-1690542119108 (Datanode Uuid 5d11317d-e261-471a-bfc8-2bca902ce57e) service to localhost.localdomain/127.0.0.1:39247
2023-07-28 11:02:14,138 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-examples/target/test-data/3a0174d0-b9c7-07c8-9f76-818fb1a3ca1a/cluster_435c5486-8dbd-6b85-9873-6bfa86d696ae/dfs/data/data3/current/BP-1351019224-136.243.18.41-1690542119108] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted
2023-07-28 11:02:14,138 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-examples/target/test-data/3a0174d0-b9c7-07c8-9f76-818fb1a3ca1a/cluster_435c5486-8dbd-6b85-9873-6bfa86d696ae/dfs/data/data4/current/BP-1351019224-136.243.18.41-1690542119108] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted
2023-07-28 11:02:14,140 WARN [Listener at localhost.localdomain/34871] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called
2023-07-28 11:02:14,144 INFO [Listener at localhost.localdomain/34871] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0
2023-07-28 11:02:14,147 WARN [BP-1351019224-136.243.18.41-1690542119108 heartbeating to localhost.localdomain/127.0.0.1:39247] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted
2023-07-28 11:02:14,147 WARN [BP-1351019224-136.243.18.41-1690542119108 heartbeating to localhost.localdomain/127.0.0.1:39247] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1351019224-136.243.18.41-1690542119108 (Datanode Uuid 2a945093-d982-4545-9f5b-1973ef5d6e31) service to localhost.localdomain/127.0.0.1:39247
2023-07-28 11:02:14,148 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-examples/target/test-data/3a0174d0-b9c7-07c8-9f76-818fb1a3ca1a/cluster_435c5486-8dbd-6b85-9873-6bfa86d696ae/dfs/data/data1/current/BP-1351019224-136.243.18.41-1690542119108] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted
2023-07-28 11:02:14,148 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-examples/target/test-data/3a0174d0-b9c7-07c8-9f76-818fb1a3ca1a/cluster_435c5486-8dbd-6b85-9873-6bfa86d696ae/dfs/data/data2/current/BP-1351019224-136.243.18.41-1690542119108] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted
2023-07-28 11:02:14,180 INFO [Listener at localhost.localdomain/34871] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost.localdomain:0
2023-07-28 11:02:14,304 INFO [Listener at localhost.localdomain/34871] zookeeper.MiniZooKeeperCluster(344): Shutdown MiniZK cluster with all ZK servers
2023-07-28 11:02:14,400 INFO [Listener at localhost.localdomain/34871] hbase.HBaseTestingUtility(1293): Minicluster is down